| { |
| "paper_id": "P07-1016", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:50:10.131621Z" |
| }, |
| "title": "", |
| "authors": [], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Words of foreign origin are referred to as borrowed words or loanwords. A loanword is usually imported to Chinese by phonetic transliteration if a translation is not easily available. Semantic transliteration is seen as a good tradition in introducing foreign words to Chinese. Not only does it preserve how a word sounds in the source language, it also carries forward the word's original semantic attributes. This paper attempts to automate the semantic transliteration process for the first time. We conduct an inquiry into the feasibility of semantic transliteration and propose a probabilistic model for transliterating personal names in Latin script into Chinese. The results show that semantic transliteration substantially and consistently improves accuracy over phonetic transliteration in all the experiments.", |
| "pdf_parse": { |
| "paper_id": "P07-1016", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Words of foreign origin are referred to as borrowed words or loanwords. A loanword is usually imported to Chinese by phonetic transliteration if a translation is not easily available. Semantic transliteration is seen as a good tradition in introducing foreign words to Chinese. Not only does it preserve how a word sounds in the source language, it also carries forward the word's original semantic attributes. This paper attempts to automate the semantic transliteration process for the first time. We conduct an inquiry into the feasibility of semantic transliteration and propose a probabilistic model for transliterating personal names in Latin script into Chinese. The results show that semantic transliteration substantially and consistently improves accuracy over phonetic transliteration in all the experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The study of Chinese transliteration dates back to the seventh century when Buddhist scriptures were translated into Chinese. The earliest bit of Chinese translation theory related to transliteration may be the principle of \"Names should follow their bearers, while things should follow Chinese.\" In other words, names should be transliterated, while things should be translated according to their meanings. The same theory still holds today.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Transliteration has been practiced in several ways, including phonetic transliteration and phonetic-semantic transliteration. By phonetic transliteration, we mean rewriting a foreign word in native grapheme such that its original pronunciation is preserved. For example, London becomes \u4f26\u6566 /Lun-Dun/ 1 which does not carry any clear connotations. Phonetic transliteration represents the common practice in transliteration. Phonetic-semantic transliteration, hereafter referred to as semantic transliteration for short, is an advanced translation technique that is considered as a recommended translation practice for centuries. It translates a foreign word by preserving both its original pronunciation and meaning. For example, Xu Guangqi 2 translated geo-in geometry into Chinese as \u51e0\u4f55 /Ji-He/, which carries the pronunciation of geo-and expresses the meaning of \"a science concerned with measuring the earth\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many of the loanwords exist in today's Chinese through semantic transliteration, which has been well received (Hu and Xu, 2003; Hu, 2004) by the people because of many advantages. Here we just name a few. (1) It brings in not only the sound, but also the meaning that fills in the semantic blank left by phonetic transliteration. This also reminds people that it is a loanword and avoids misleading;", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 127, |
| "text": "(Hu and Xu, 2003;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 128, |
| "end": 137, |
| "text": "Hu, 2004)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) It provides etymological clues that make it easy to trace back to the root of the words. For example, a transliterated Japanese name will maintain its Japanese identity in its Chinese appearance; (3) It evokes desirable associations, for example, an English girl's name is transliterated with Chinese characters that have clear feminine association, thus maintaining the gender identity. 1 Hereafter, Chinese characters are also denoted in Pinyin romanization system, for ease of reference. 2 Xu Quangqi (1562 -1633 translated The Original Manuscript of Geometry to Chinese jointly with Matteo Ricci. Unfortunately, most of the reported work in the area of machine transliteration has not ventured into semantic transliteration yet. The Latin-scripted personal names are always assumed to homogeneously follow the English phonic rules in automatic transliteration (Li et al., 2004) . Therefore, the same transliteration model is applied to all the names indiscriminatively. This assumption degrades the performance of transliteration because each language has its own phonic rule and the Chinese characters to be adopted depend on the following semantic attributes of a foreign name.", |
| "cite_spans": [ |
| { |
| "start": 392, |
| "end": 393, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 495, |
| "end": 513, |
| "text": "2 Xu Quangqi (1562", |
| "ref_id": null |
| }, |
| { |
| "start": 514, |
| "end": 519, |
| "text": "-1633", |
| "ref_id": null |
| }, |
| { |
| "start": 868, |
| "end": 885, |
| "text": "(Li et al., 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) Language of origin: An English word is not necessarily of pure English origin. In English news reports about Asian happenings, an English personal name may have been originated from Chinese, Japanese or Korean. The language origin affects the phonic rules and the characters to be used in transliteration 3 . For example, a Japanese name Matsumoto should be transliterated as \u677e\u672c /Song-Ben/, instead of \u9a6c\u8328\u83ab\u6258 /Ma-Ci-Mo-Tuo/ as if it were an English name.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration of Personal Names", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Gender association: A given name typically implies a clear gender association in both the source and target languages. For example, the Chinese transliterations of Alice and Alexandra are \u7231\u4e3d\u4e1d /Ai-Li-Si/ and \u4e9a\u5386\u5c71\u5927 /Ya-Li-Shan-Da/ respectively, showing clear feminine and masculine characteristics. Transliterating Alice as \u57c3 \u91cc \u65af /Ai-Li-Si/ is phonetically correct, but semantically inadequate due to an improper gender association.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration of Personal Names", |
| "sec_num": null |
| }, |
| { |
| "text": "(3) Surname and given name: The Chinese name system is the original pattern of names in Eastern Asia such as China, Korea and Vietnam, in which a limited number of characters 4 are used for surnames while those for given names are less restrictive. Even for English names, the character set for given name transliterations are different from that for surnames.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration of Personal Names", |
| "sec_num": null |
| }, |
| { |
| "text": "Here are two examples of semantic transliteration for personal names. George Bush and Yamamoto Akiko are transliterated into \u4e54\u6cbb \u5e03 \u4ec0 and \u5c71 \u672c \u4e9a \u559c \u5b50 that arouse to the following associations: \u4e54 \u6cbb /Qiao-Zhi/ -male given name, English origin; \u5e03 \u4ec0 /Bu-Shi/surname, English origin; \u5c71 \u672c /Shan-Ben/surname, Japanese origin; \u4e9a \u559c \u5b50 /Ya-Xi-Zi/female given name, Japanese origin.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration of Personal Names", |
| "sec_num": null |
| }, |
| { |
| "text": "In Section 2, we summarize the related work. In Section 3, we discuss the linguistic feasibility of semantic transliteration for personal names. Section 4 formulates a probabilistic model for semantic transliteration. Section 5 reports the experiments. Finally, we conclude in Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration of Personal Names", |
| "sec_num": null |
| }, |
| { |
| "text": "In general, computational studies of transliteration fall into two categories: transliteration modeling and extraction of transliteration pairs. In transliteration modeling, transliteration rules are trained from a large, bilingual transliteration lexicon (Lin and Chen, 2002; Oh and Choi, 2005) , with the objective of translating unknown words on the fly in an open, general domain. In the extraction of transliterations, data-driven methods are adopted to extract actual transliteration pairs from a corpus, in an effort to construct a large, upto-date transliteration lexicon Sproat et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 276, |
| "text": "(Lin and Chen, 2002;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 277, |
| "end": 295, |
| "text": "Oh and Choi, 2005)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 580, |
| "end": 600, |
| "text": "Sproat et al., 2006)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Phonetic transliteration can be considered as an extension to the traditional grapheme-to-phoneme (G2P) conversion (Galescu and Allen, 2001 ), which has been a much-researched topic in the field of speech processing. If we view the grapheme and phoneme as two symbolic representations of the same word in two different languages, then G2P is a transliteration task by itself. Although G2P and phonetic transliteration are common in many ways, transliteration has its unique challenges, especially as far as E-C transliteration is concerned. E-C transliteration is the conversion between English graphemes, phonetically associated English letters, and Chinese graphemes, characters which represent ideas or meanings. As a Chinese transliteration can arouse to certain connotations, the choice of Chinese characters becomes a topic of interest (Xu et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 139, |
| "text": "(Galescu and Allen, 2001", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 842, |
| "end": 859, |
| "text": "(Xu et al., 2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Semantic transliteration can be seen as a subtask of statistical machine translation (SMT) with monotonic word ordering. By treating a letter/character as a word and a group of letters/characters as a phrase or token unit in SMT, one can easily apply the traditional SMT models, such as the IBM generative model (Brown et al., 1993) or the phrase-based translation model (Crego et al., 2005) to transliteration. In transliteration, we face similar issues as in SMT, such as lexical mapping and alignment. However, transliteration is also different from general SMT in many ways. Unlike SMT where we aim at optimizing the semantic transfer, semantic transliteration needs to maintain the phonetic equivalence as well.", |
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 332, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 371, |
| "end": 391, |
| "text": "(Crego et al., 2005)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In computational linguistic literature, much effort has been devoted to phonetic transliteration, such as English-Arabic, English-Chinese (Li et al., 2004) , English-Japanese (Knight and Graehl, 1998) and English-Korean. In G2P studies, Font Llitjos and Black 2001showed how knowledge of language of origin may improve conversion accuracy. Unfortunately semantic transliteration, which is considered as a good tradition in translation practice (Hu and Xu, 2003; Hu, 2004) , has not been adequately addressed computationally in the literature. Some recent work Xu et al., 2006) has attempted to introduce preference into a probabilistic framework for selection of Chinese characters in phonetic transliteration. However, there is neither analytical result nor semantic-motivated transliteration solution being reported.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 155, |
| "text": "(Li et al., 2004)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 175, |
| "end": 200, |
| "text": "(Knight and Graehl, 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 444, |
| "end": 461, |
| "text": "(Hu and Xu, 2003;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 462, |
| "end": 471, |
| "text": "Hu, 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 560, |
| "end": 576, |
| "text": "Xu et al., 2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A Latin-scripted personal name is written in letters, which represent the pronunciations closely, whereas each Chinese character represents not only the syllables, but also the semantic associations. Thus, character rendering is a vital issue in transliteration. Good transliteration adequately projects semantic association while an inappropriate one may lead to undesirable interpretation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feasibility of Semantic Transliteration", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Is semantic transliteration possible? Let's first conduct an inquiry into the feasibility of semantic transliteration on 3 bilingual name corpora, which are summarizied in Table 1 and will be used in experiments. E-C corpus is an augmented version of Xinhua English to Chinese dictionary for English names (Xinhua, 1992) . J-C corpus is a romanized Japanese to Chinese dictionary for Japanese names. The C-C corpus is a Chinese Pinyin to character dictionary for Chinese names. The entries are classified into surname, male and female given name categories. The E-C corpus also contains some entries without gender/surname labels, referred to as unclassified. Phonetic transliteration has not been a problem as Chinese has over 400 unique syllables that are enough to approximately transcribe all syllables in other languages. Different Chinese characters may render into the same syllable and form a range of homonyms. Among the homonyms, those arousing positive meanings can be used for personal names. As discussed elsewhere (Sproat et al., 1996) , out of several thousand common Chinese characters, a subset of a few hundred characters tends to be used overwhelmingly for transliterating English names to Chinese, e.g. only 731 Chinese characters are adopted in the E-C corpus. Although the character sets are shared across languages and genders, the statistics in Table 2 show that each semantic attribute is associated with some unique characters. In the C-C corpus, out of the total of 4,507 characters, only 776 of them are for surnames. It is interesting to find that female given names are represented by a smaller set of characters than that for male across 3 corpora. Note that the overlap of Chinese characters usage across genders is higher than that across languages. For instance, there is a 44.2% overlap across gender for the transcribed English names; but only 19.2% overlap across languages for the surnames.", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 320, |
| "text": "(Xinhua, 1992)", |
| "ref_id": null |
| }, |
| { |
| "start": 1028, |
| "end": 1049, |
| "text": "(Sproat et al., 1996)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 172, |
| "end": 179, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1369, |
| "end": 1376, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feasibility of Semantic Transliteration", |
| "sec_num": "3" |
| }, |
| { |
| "text": "E-C J-C 5 C-C 6 Surname (S) 12,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feasibility of Semantic Transliteration", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In summary, the semantic attributes of personal names are characterized by the choice of characters, and therefore their n-gram statistics as well. If the attributes are known in advance, then the semantic transliteration is absolutely feasible. We may obtain the semantic attributes from the context through trigger words. For instance, from \"Mr Tony Blair\", we realize \"Tony\" is a male given name while \"Blair\" is a surname; from \"Japanese Prime Minister Koizumi\", we resolve that \"Koizumi\" is a Japanese surname. In the case where contextual trigger words are not available, we study detecting the semantic attributes from the personal names themselves in the next section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feasibility of Semantic Transliteration", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let S and T denote the name written in the source and target writing systems respectively. Within a probabilistic framework, a transliteration system produces the optimum target name, T * , which yields the highest posterior probability given the source name, S, i.e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") | ( max arg * S T P T T S T \u2208 =", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where S", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "T is the set of all possible transliterations for the source name, S. The alignment between S and T is assumed implicit in the above formulation. In a standard phonetic transliteration system,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": ") | ( S T P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", the posterior probability of the hypothesized transliteration, T, given the source name, S, is directly modeled without considering any form of semantic information. On the other hand, semantic transliteration described in this paper incorporates language of origin and gender information to capture the semantic structure. To do so,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") | ( S T P is rewritten as ( | ) P T S = \u2211 \u2208 \u2208 G L G L S G L T P , ) | , , (", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "= \u2211 \u2208 \u2208 G L G L S G L P G L S T P , ) | , ( ) , , | (", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where ( | , , ) P T S L G is the transliteration probability from source S to target T, given the language of origin (L) and gender (G) labels. L and G denote the sets of languages and genders respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": ") | , ( S G L P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "is the probability of the language and the gender given the source, S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Given the alignment between S and T, the transliteration probability given L and G may be written as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": ") , , | ( G L S T P = 1 1 1 1 ( | , ) I i i i i P t T S \u2212 = \u220f (4) \u2248 1 1 1 ( | , , ) I i i i i i P t t s s \u2212 \u2212 = \u220f (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where i s and i t are the i th token of S and T respectively and I is the total number of tokens in both S and T. k j S and k j T represent the sequence of tokens", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "( ) 1 , , , j j k s s s + K and ( ) 1 , , , j j k t t t + K", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "respectively. Eq.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(4) is in fact the n-gram likelihood of the token pair , i i t s \u2329 \u232a sequence and Eq. (5) approximates this probability using a bigram language model. This model is conceptually similar to the joint sourcechannel model (Li et al., 2004) where the target token i t depends on not only its source token i s but also the history 1 i t \u2212 and 1 i s \u2212 . Each character in the target name forms a token. To obtain the source tokens, the source and target names in the training data are aligned using the EM algorithm. This yields a set of possible source tokens and a mapping between the source and target tokens. During testing, each source name is first segmented into all possible token sequences given the token set. These source token sequences are mapped to the target sequences to yield an N-best list of transliteration candidates. Each candidate is scored using an n-gram language model given by Eqs. (4) or (5).", |
| "cite_spans": [ |
| { |
| "start": 219, |
| "end": 236, |
| "text": "(Li et al., 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As in Eq. (3), the transliteration also greatly depends on the prior knowledge,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": ") | , ( S G L P .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "When no prior knowledge is available, a uniform probability distribution is assumed. By expressing", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") | , ( S G L P in the following form, ) | ( ) , | ( ) | , ( S L P S L G P S G L P =", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "prior knowledge about language and gender may be incorporated. For example, if the language of S is known as s L , we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "1 ( | ) 0 s s L L P L S L L = \u23a7 = \u23a8 \u2260 \u23a9 (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Similarly, if the gender information for S is known as s", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "G , then, 1 ( | , ) 0 s s G G P G L S G G = \u23a7 = \u23a8 \u2260 \u23a9 (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Note that personal names have clear semantic associations. In the case where the semantic attribute information is not available, we propose learning semantic information from the names themselves. Using Bayes' theorem, we have ) (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") , ( ) , | ( ) | , ( S P G L P G L S P S G L P =", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "( | , ) P S L G can be modeled using an n-gram language model for the letter sequence of all the Latin-scripted names in the training set. The prior probability, ) , ( G L P , is typically uniform.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulation of Transliteration Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "does not depend on L and G, thus can be omitted. Incorporating", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": ") | , ( S G L P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "into Eq. (3) can be viewed as performing a soft decision of the language and gender semantic attributes. By contrast, hard decision may also be performed based on maximum likelihood approach:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "arg max ( | ) s L L PS L \u2208 = L (10) arg max ( | , ) s G G PS LG \u2208 = G (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "where Table 9 in Section 5.3).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 6, |
| "end": 13, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "If we are unable to model the prior knowledge of semantic attributes", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": ") | , ( S G L P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": ", then a more general model will be used for ( | , , ) P T S L G by dropping the dependency on the information that is not available. For example, Eq. (3) is reduced to", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "( | , ) ( | ) L P T S L P L S \u2208 \u2211 L", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "if the gender information is missing. Note that when both language and gender are unknown, the system simplifies to the baseline phonetic transliteration system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") (S P", |
| "sec_num": null |
| }, |
| { |
| "text": "This section presents experiments on database of 3 language origins (Japanese, Chinese and English) and gender information (surname 7 , male and female). In the experiments of determining the language origin, we used the full data set for the 3 languages as in shown in Table 1 . The training and test data for semantic transliteration are the subset of Table 1 comprising those with surnames, male and female given names labels. In this paper, J, C and E stand for Japanese, Chinese and English; S, M and F represent Surname, Male and Female given names, respectively. Table 3 summarizes the number of unique 8 name entries used in training and testing. The test sets were randomly chosen such that the amount of test data is approximately 10-20% of the whole corpus. There were no overlapping entries between the training and test data. Note that the Chinese surnames are typically single characters in a small set; we assume there is no unseen surname in the test set. All the Chinese surname entries are used for both training and testing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 270, |
| "end": 277, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 354, |
| "end": 361, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 570, |
| "end": 577, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For each language of origin, a 4-gram language model was trained for the letter sequence of the source names, with a 1-letter shift. Table 4 : Language detection accuracies (%) using a 4-gram language model for the letter sequence of the source name in Latin script. Table 4 shows the language detection accuracies for all the 3 languages using Eq. (10). The overall detection accuracy is 94.81%. The corresponding Equal Error Rate (EER) 9 is 4.52%. The detection results may be used directly to infer the semantic information for transliteration. Alternatively, the language model likelihood scores may be incorporated into the Bayesian framework to improve the transliteration performance, as described in Section 4.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 140, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 267, |
| "end": 274, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Language of Origin", |
| "sec_num": "5.1" |
| }, |
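As a concrete illustration of this detection scheme, the following sketch trains a character 4-gram model per language and classifies a name by log-likelihood. The `CharNgramLM` class, the add-one smoothing, and the tiny training lists are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict
import math

class CharNgramLM:
    """Character n-gram model with add-one smoothing (illustrative)."""
    def __init__(self, n=4):
        self.n = n
        self.counts = defaultdict(int)   # n-gram counts
        self.context = defaultdict(int)  # (n-1)-gram context counts
        self.vocab = set()

    def _pad(self, name):
        # Pad so the first n-gram covers the word start; '$' marks the end.
        return "^" * (self.n - 1) + name.lower() + "$"

    def train(self, names):
        for name in names:
            s = self._pad(name)
            for i in range(len(s) - self.n + 1):
                gram = s[i:i + self.n]
                self.counts[gram] += 1
                self.context[gram[:-1]] += 1
                self.vocab.add(gram[-1])

    def logprob(self, name):
        s = self._pad(name)
        lp = 0.0
        V = len(self.vocab) or 1
        for i in range(len(s) - self.n + 1):
            gram = s[i:i + self.n]
            lp += math.log((self.counts[gram] + 1) /
                           (self.context[gram[:-1]] + V))
        return lp

def detect_origin(name, models):
    """Eq. (10)-style rule: pick the language whose letter LM scores highest."""
    return max(models, key=lambda lang: models[lang].logprob(name))

models = {lang: CharNgramLM(4) for lang in ("J", "C", "E")}
models["J"].train(["yamamoto", "suzuki", "tanaka", "watanabe"])
models["C"].train(["zhang", "wang", "xiaoming", "haizhou"])
models["E"].train(["smith", "johnson", "williams", "elizabeth"])

print(detect_origin("yamamoto", models))  # a training name; prints J
```

With realistic training sets of thousands of names per language (as in Table 1), this letter-level scoring is what yields the 94.81% detection accuracy reported above.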
| { |
| "text": "Similarly, gender detection 10 was performed by training a 4-gram language model for the letter sequence of the source names for each language and gender pair. Table 5 : Gender detection accuracies (%) using a 4-gram language model for the letter sequence of the source name in Latin script. Table 5 summarizes the gender detection accuracies using Eq. (11) assuming language of origin is known,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 160, |
| "end": 167, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 292, |
| "end": 299, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Gender Association", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "arg max ( | , ) s s G G PS L L G \u2208 = = G .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gender Association", |
| "sec_num": "5.2" |
| }, |
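Reconstructed in LaTeX from the surrounding text, the two detection rules read as follows (the exact notation of Eqs. (10) and (11) in the original paper may differ; this is a sketch consistent with Sections 5.1 and 5.2):

```latex
% Eq. (10): language-of-origin detection -- score the source letter
% sequence S under each language-specific 4-gram letter model
L^{*} = \operatorname*{arg\,max}_{L \in \{J,\,C,\,E\}} P(S \mid L)

% Eq. (11): gender detection, assuming the language of origin L_s is known
G^{*} = \operatorname*{arg\,max}_{G \in \{M,\,F\}} P(S \mid L = L_s,\, G)
```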
| { |
| "text": "The overall detection accuracies are 87.03%, 66.52% and 73.62% for Japanese, Chinese and English respectively. The corresponding EER are 13.1%, 21.8% and 19.3% respectively. Note that gender detection is generally harder than language detection. This is because the tokens (syllables) are shared very much across gender categories, while they are quite different from one language to another.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gender Association", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The performance was measured using the Mean Reciprocal Rank (MRR) metric (Kantor and Voorhees, 2000), a measure that is commonly used in information retrieval, assuming there is precisely one correct answer. Each transliteration system generated at most 50-best hypotheses for each word when computing MRR. The word and character accuracies of the top best hypotheses are also reported.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Transliteration", |
| "sec_num": "5.3" |
| }, |
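MRR itself is simple to compute; the helper below (the function name and toy data are our own) scores each word by the reciprocal rank of its single correct answer in the n-best list, contributing 0 when the answer is absent:

```python
def mean_reciprocal_rank(ranked_lists, references):
    """MRR over a test set: for each word, add 1/rank of the correct
    transliteration among its n-best hypotheses (0 if not found)."""
    total = 0.0
    for hyps, ref in zip(ranked_lists, references):
        rr = 0.0
        for rank, hyp in enumerate(hyps, start=1):
            if hyp == ref:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(references)

# Three test words; the correct answer sits at rank 1, rank 2, and absent.
hyps = [["A", "B"], ["X", "Y", "Z"], ["P", "Q"]]
refs = ["A", "Y", "R"]
print(mean_reciprocal_rank(hyps, refs))  # → 0.5, i.e. (1 + 0.5 + 0) / 3
```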
| { |
| "text": "We used the phonetic transliteration system as the baseline to study the effects of semantic transliteration. The phonetic transliteration system was trained by pooling all the available training data from all the languages and genders to estimate a language model for the source-target token pairs. Table 6 compares the MRR performance of the baseline system using unigram and bigram language models for the source-target token pairs. Table 7 : The effect of language and gender information on the overall MRR performance of transliteration (L=Language, G=Gender, =unknown, =known, =soft decision).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 300, |
| "end": 307, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 436, |
| "end": 443, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Transliteration", |
| "sec_num": "5.3" |
| }, |
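A minimal sketch of this kind of pooled token-pair n-gram scoring, in the spirit of the joint source-channel model (Li et al., 2004): the probability table, the syllable segmentation, and the back-off floor are invented for illustration, not trained values:

```python
import math

def score_pairs(pair_seq, logprob):
    """log P(S, T) = sum_i log P(p_i | p_{i-1}) over token pairs p = (s, t),
    where s is a source syllable and t its Chinese character."""
    seq = [("<s>", "<s>")] + pair_seq  # sentence-start pair
    return sum(logprob(seq[i - 1], seq[i]) for i in range(1, len(seq)))

# Toy bigram table over (source-syllable, target-character) pairs.
table = {
    (("<s>", "<s>"), ("ya", "亚")): 0.5,
    (("ya", "亚"), ("ma", "马")): 0.8,
}

def lp(prev, cur, floor=1e-4):
    """Look up P(cur | prev), backing off to a small floor for unseen bigrams."""
    return math.log(table.get((prev, cur), floor))

score = score_pairs([("ya", "亚"), ("ma", "马")], lp)
print(score)  # log(0.5) + log(0.8)
```

In the baseline, one such model is trained on all data pooled together; semantic transliteration replaces it with one model per (language, gender) class.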
| { |
| "text": "Next, the scenarios with perfect language and/or gender information were considered. This com-parison is summarized in Table 7 . All the MRR results are based on transliteration systems using bigram language models. The table clearly shows that having perfect knowledge, denoted by \" \", of language and gender helps improve the MRR performance; detecting semantic attributes using soft decision, denoted by \" \", has a clear win over the baseline, denoted by \" \", where semantic information is not used. The results strongly recommend the use of semantic transliteration for personal names in practice. Next let's look into the effects of automatic language and gender detection on the performance. Table 8 compares the MRR performance of the semantic transliteration systems with different prior information, using bigram language models. Soft decision refers to the incorporation of the language model scores into the transliteration process to improve the prior knowledge in Bayesian inference. Overall, both hard and soft decision methods gave similar MRR performance of approximately 0.5750, which was about 17.5% relatively improvement compared to the phonetic transliteration system with 0.4895 MRR. The hard decision scheme owes its surprisingly good performance to the high detection accuracies (see Table 4 ). Table 9 : The effect of gender detection schemes on MRR using bigram language models with perfect language information.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 698, |
| "end": 705, |
| "text": "Table 8", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 1308, |
| "end": 1315, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1319, |
| "end": 1326, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Transliteration", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Similarly, the effect of various gender detection methods used to obtain the prior information is shown in Table 9 . The language information was assumed known a-priori. Due to the poorer detection accuracy for the Chinese male given names (see Table 5 ), hard decision of gender had led to deterioration in MRR performance of the male names compared to the case where no prior information was assumed. Soft decision of gender yielded further gains of 17.1% and 13.9% relative improvements for male and female given names respectively, over the hard decision method. Table 10 : Overall transliteration performance using bigram language model with various language and gender information.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 107, |
| "end": 114, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 245, |
| "end": 252, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 567, |
| "end": 575, |
| "text": "Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Transliteration", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Finally, Table 10 compares the performance of various semantic transliteration systems using bigram language models. The baseline phonetic transliteration system yielded 36.87% and 58.39% accuracies at word and character levels respectively; and 0.4895 MRR. It can be conjectured from the results that semantic transliteration is substantially superior to phonetic transliteration. In particular, knowing the language information improved the overall MRR performance to 0.5952; and with additional gender information, the best performance of 0.6812 was obtained. Furthermore, both hard and soft decision of semantic information improved the performance, with the latter being substantially better. Both the word and character accuracies improvements were consistent and have similar trend to that observed for MRR. The performance of the semantic transliteration using soft decisions (last row of Table 10 ) achieved 25.1%, 33.9%, 18.5% relative improvement in MRR, word and character accuracies respectively over that of the phonetic transliteration (first row of Table 10 ). In addition, soft decision also presented 5.1%, 4.9% and 3.5% relative improvement over hard decision in MRR, word and character accuracies respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 897, |
| "end": 905, |
| "text": "Table 10", |
| "ref_id": null |
| }, |
| { |
| "start": 1065, |
| "end": 1073, |
| "text": "Table 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semantic Transliteration", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "It was found that the performance of the baseline phonetic transliteration may be greatly improved by incorporating semantic information such as the language of origin and gender. Furthermore, it was found that the soft decision of language and gender outperforms the hard decision approach. The soft decision method incorporates the semantic scores ( , | ) P L G S with transliteration scores ( | , , ) P T S L G , involving all possible semantic specific models in the decoding process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussions", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, there are 9 such models (3 languages \u00d7 3 genders). The hard decision relies on Eqs. (10) and (11) to decide language and gender, which only involves one semantic specific model in the decoding. Neither soft nor hard decision requires any prior information about the names. It provides substantial performance improvement over phonetic transliteration at a reasonable computational cost. If the prior semantic information is known, e.g. via trigger words, then semantic transliteration attains its best performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussions", |
| "sec_num": "5.4" |
| }, |
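The difference between the two schemes can be sketched as follows; the posterior values and per-model transliteration scores are made-up numbers, not trained models:

```python
def soft_decision_score(T, posteriors, trans_scores):
    """P(T | S) = sum over (L, G) of P(L, G | S) * P(T | S, L, G):
    every semantic-specific model votes, weighted by its posterior."""
    return sum(p * trans_scores[(L, G)][T] for (L, G), p in posteriors.items())

def hard_decision_score(T, posteriors, trans_scores):
    """Commit to the single most likely (L, G) and use only that model."""
    best = max(posteriors, key=posteriors.get)
    return trans_scores[best][T]

# Posterior over (language, gender) for one source name S (made-up values).
posteriors = {("J", "F"): 0.5, ("C", "F"): 0.3, ("E", "F"): 0.2}
# P(T | S, L, G) for two candidate transliterations under each model.
trans_scores = {
    ("J", "F"): {"t1": 0.6, "t2": 0.4},
    ("C", "F"): {"t1": 0.1, "t2": 0.9},
    ("E", "F"): {"t1": 0.5, "t2": 0.5},
}
soft_t1 = soft_decision_score("t1", posteriors, trans_scores)  # ≈ 0.43
soft_t2 = soft_decision_score("t2", posteriors, trans_scores)  # ≈ 0.57
hard_t1 = hard_decision_score("t1", posteriors, trans_scores)  # 0.6, (J, F) only
print(soft_t1, soft_t2, hard_t1)
```

Note that soft decision can prefer t2 even though the single most likely model (J, F) prefers t1; averaging over all models is precisely how it hedges against detection errors.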
| { |
| "text": "Transliteration is a difficult, artistic human endeavor, as rich as any other creative pursuit. Research on automatic transliteration has reported promising results for regular transliteration, where transliterations follow certain rules. The generative model works well as it is designed to capture regularities in terms of rules or patterns. This paper extends the research by showing that semantic transliteration of personal names is feasible and provides substantial performance gains over phonetic transliteration. This paper has presented a successful attempt towards semantic transliteration using personal name transliteration as a case study. It formulates a mathematical framework that incorporates explicit semantic information (prior knowledge), or implicit one (through soft or hard decision) into the transliteration model. Extending the framework to machine transliteration of named entities in general is a topic for further research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In the literature(Knight and Graehl,1998;Qu et al., 2003), translating romanized Japanese or Chinese names to Chinese characters is also known as back-transliteration. For simplicity, we consider all conversions from Latin-scripted words to Chinese as transliteration in this paper.4 The 19 most common surnames cover 55.6% percent of the Chinese population(Ning and Ning 1995).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cjk.org 6 http://technology.chtsai.org/namelist", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, surnames are treated as a special class of gender. Unlike given names, they do not have any gender association. Therefore, they fall into a third category which is neither male nor female.8 By contrast,Table 1shows the total number of name examples available. For each unique entry, there may be multiple examples.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "EER is defined as the error of false acceptance and false rejection when they are equal.10 In most writing systems, the ordering of surname and given name is known. Therefore, gender detection is only performed for male and female classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Mathematics of Statistical Machine Translation: Parameter Estimation, Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "F." |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J.", |
| "Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L." |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "19", |
| "issue": "", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F. Brown and Stephen Della Pietra and Vincent J. Della Pietra and Robert L. Mercer. 1993, The Mathe- matics of Statistical Machine Translation: Parameter Estimation, Computational Linguistics, 19(2), pp. 263-311.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "N-gram-based versus Phrasebased Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "M" |
| ], |
| "last": "Crego", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "R" |
| ], |
| "last": "Costa-Jussa", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "B" |
| ], |
| "last": "Mario", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "A R" |
| ], |
| "last": "Fonollosa", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of IWSLT", |
| "volume": "", |
| "issue": "", |
| "pages": "177--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. M. Crego, M. R. Costa-jussa and J. B. Mario and J. A. R. Fonollosa. 2005, N-gram-based versus Phrase- based Statistical Machine Translation, In Proc. of IWSLT, pp. 177-184.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Knowledge of language origin improves pronunciation accuracy of proper names", |
| "authors": [ |
| { |
| "first": "Ariadna", |
| "middle": [ |
| "Font" |
| ], |
| "last": "Llitjos", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W." |
| ], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of Eurospeech", |
| "volume": "", |
| "issue": "", |
| "pages": "1919--1922", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ariadna Font Llitjos, Alan W. Black. 2001. Knowledge of language origin improves pronunciation accuracy of proper names. In Proc. of Eurospeech, Denmark, pp 1919-1922.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bidirectional Conversion between Graphemes and Phonemes using a Joint N-gram Model", |
| "authors": [ |
| { |
| "first": "Lucian", |
| "middle": [], |
| "last": "Galescu", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "F" |
| ], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. 4th ISCA Tutorial and Research Workshop on Speech Synthesis", |
| "volume": "", |
| "issue": "", |
| "pages": "103--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucian Galescu and James F. Allen. 2001, Bi- directional Conversion between Graphemes and Pho- nemes using a Joint N-gram Model, In Proc. 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Scotland, pp. 103-108.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Semantic Transliteration: A Good Tradition in Translating Foreign Words into Chinese Babel", |
| "authors": [ |
| { |
| "first": "Qingping", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "International Journal of Translation", |
| "volume": "49", |
| "issue": "4", |
| "pages": "310--326", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qingping Hu and Jun Xu, 2003, Semantic Translitera- tion: A Good Tradition in Translating Foreign Words into Chinese Babel: International Journal of Transla- tion, Babel, 49(4), pp. 310-326.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Informational Retrieval", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Paul", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kantor", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Voorhees", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "165--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul B. Kantor and Ellen M. Voorhees, 2000, The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Informational Retrieval, 2, pp. 165-176.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Learning Transliteration Lexicons from the Web", |
| "authors": [ |
| { |
| "first": "J.-S", |
| "middle": [], |
| "last": "Kuo", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Y.-K", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of 44 th ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1129--1136", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.-S. Kuo, H. Li and Y.-K. Yang. 2006. Learning Trans- literation Lexicons from the Web, In Proc. of 44 th ACL, pp. 1129-1136.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A Joint Source Channel Model for Machine Transliteration", |
| "authors": [ |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of 42 nd ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "159--166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haizhou Li, Min Zhang and Jian Su. 2004. A Joint Source Channel Model for Machine Transliteration, In Proc. of 42 nd ACL, pp. 159-166.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Advances in Chinese Spoken Language Processing", |
| "authors": [ |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuanhu", |
| "middle": [], |
| "last": "Bai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jin-Shea", |
| "middle": [], |
| "last": "Kuo", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "341--364", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haizhou Li, Shuanhu Bai, and Jin-Shea Kuo, 2006, Transliteration, In Advances in Chinese Spoken Lan- guage Processing, C.-H. Lee, et al. (eds), World Sci- entific, pp. 341-364.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Backward machine transliteration by learning phonetic similarity", |
| "authors": [ |
| { |
| "first": "Wei-Hao", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hsin-Hsi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "139--145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei-Hao Lin and Hsin-Hsi Chen, 2002, Backward ma- chine transliteration by learning phonetic similarity, In Proc. of CoNLL , pp.139-145.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Chinese Personal Names", |
| "authors": [ |
| { |
| "first": "Yegao", |
| "middle": [], |
| "last": "Ning", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun", |
| "middle": [], |
| "last": "Ning", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yegao Ning and Yun Ning, 1995, Chinese Personal Names, Federal Publications, Singapore.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "An Ensemble of Grapheme and Phoneme for Machine Transliteration", |
| "authors": [ |
| { |
| "first": "Jong-Hoon", |
| "middle": [], |
| "last": "Oh", |
| "suffix": "" |
| }, |
| { |
| "first": "Key-Sun", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "450--461", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jong-Hoon Oh and Key-Sun Choi. 2005, An Ensemble of Grapheme and Phoneme for Machine Translitera- tion, In Proc. of IJCNLP, pp.450-461.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Automatic Transliteration for Japanese-to-English Text Retrieval", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Qu", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "A" |
| ], |
| "last": "Evans", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of 26 th ACM SIGIR", |
| "volume": "", |
| "issue": "", |
| "pages": "353--360", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Qu, G. Grefenstette and D. A. Evans, 2003, Auto- matic Transliteration for Japanese-to-English Text Re- trieval. In Proc. of 26 th ACM SIGIR, pp. 353-360.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A stochastic Finite-state Word-segmentation Algorithm for Chinese", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chih", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "3", |
| "pages": "377--404", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Sproat, C. Chih, W. Gale, and N. Chang. 1996. A stochastic Finite-state Word-segmentation Algo- rithm for Chinese, Computational Linguistics, 22(3), pp. 377-404.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Named Entity Transliteration with Comparable Corpora", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Sproat", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Tao", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengxiang", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of 44 th ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "73--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Sproat, Tao Tao and ChengXiang Zhai. 2006. Named Entity Transliteration with Comparable Cor- pora, In Proc. of 44 th ACL, pp. 73-80.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Chinese Transliteration of Foreign Personal Names", |
| "authors": [], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xinhua News Agency, 1992, Chinese Transliteration of Foreign Personal Names, The Commercial Press.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Modeling Impression in Probabilistic Transliteration into Chinese", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fujii", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ishikawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "242--249", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Xu, A. Fujii, T. Ishikawa, 2006 Modeling Impression in Probabilistic Transliteration into Chinese, In Proc. of EMNLP 2006, Sydney, pp. 242-249.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "content": "<table><tr><td>for Infocomm Research</td><td>\u2020Chung-Hwa Telecom Laboratories</td></tr><tr><td>Singapore 119613</td><td>Taiwan</td></tr><tr><td>{hli,kcsim,mhdong}@i2r.a-star.edu.sg</td><td>jskuo@cht.com.tw</td></tr></table>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "text": "Chinese character usage in 3 corpora. The numbers in brackets indicate the percentage of characters that are shared by at least 2 corpora.", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "text": "Number of unique entries in training and test sets, categorized by semantic attributes", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF10": { |
| "content": "<table><tr><td>: The effect of language detection</td></tr><tr><td>schemes on MRR using bigram language models</td></tr><tr><td>and unknown gender information (hereafter,</td></tr><tr><td>=unknown, =known, =hard decision, =soft</td></tr><tr><td>decision).</td></tr></table>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |