{
"paper_id": "Y12-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:45:50.286404Z"
},
"title": "A Machine Translation Approach for Chinese Whole-Sentence Pinyin-to-Character Conversion *",
"authors": [
{
"first": "Shaohua",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Road",
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Road",
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "zhaohai@cs.sjtu.edu.cn"
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems",
"institution": "Shanghai Jiao Tong University",
"location": {
"addrLine": "800 Dongchuan Road",
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper introduces a new approach to the Chinese Pinyin-to-character (PTC) conversion problem. The conversion from Chinese Pinyin to Chinese characters can be regarded as a transformation between two different languages (from the Latin writing system of Chinese Pinyin to the character form of Chinese, Hanzi), which can be naturally solved within a machine translation framework. The PTC problem is usually regarded as a sequence labeling problem; however, it is more difficult than general sequence labeling problems, since it requires a label set comprising all Chinese characters. The essential difficulty of the task lies in the high degree of ambiguity of the Chinese characters corresponding to Pinyins. Our approach is novel in that it effectively combines features of the continuous source sequence and the target sequence. The experimental results show that the proposed approach is much faster and achieves better accuracy, outperforming existing sequence labeling approaches.",
"pdf_parse": {
"paper_id": "Y12-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper introduces a new approach to the Chinese Pinyin-to-character (PTC) conversion problem. The conversion from Chinese Pinyin to Chinese characters can be regarded as a transformation between two different languages (from the Latin writing system of Chinese Pinyin to the character form of Chinese, Hanzi), which can be naturally solved within a machine translation framework. The PTC problem is usually regarded as a sequence labeling problem; however, it is more difficult than general sequence labeling problems, since it requires a label set comprising all Chinese characters. The essential difficulty of the task lies in the high degree of ambiguity of the Chinese characters corresponding to Pinyins. Our approach is novel in that it effectively combines features of the continuous source sequence and the target sequence. The experimental results show that the proposed approach is much faster and achieves better accuracy, outperforming existing sequence labeling approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "There are more than twenty thousand different Chinese characters in the Chinese language, so it is difficult to type Chinese characters directly on a Latin-style keyboard. Chinese Pinyin is an encoding scheme that maps a Chinese character to a group of Latin letters, so that each character usually has a unique Pinyin representation 1 . Pinyin was originally designed as the phonetic symbol of a Chinese character. For example, the Pinyin for the Chinese character \"\u6211\"(I, me) is \"wo\". As one of the most important topics in Chinese natural language processing, the Pinyin-to-character (PTC) problem refers to the automatic transformation of a Chinese Pinyin sequence into a Chinese character sequence. It plays an important or even key role in areas such as speech recognition and Chinese keyboard input methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are five different tones in Chinese pronunciation. In the Chinese Pinyin system, a tone is represented as an accent symbol over the Latin letters, which is not convenient to input and is thus usually ignored in most Chinese keyboard input methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Chinese PTC problem can be very challenging for the following reasons: there are only about 410 Pinyins (without considering the five tones), but tens of thousands of Chinese characters, of which even the most popular subset accounts for about 5,000. So it is quite common for different Chinese characters to share the same Pinyin. On average, about ten or more Chinese characters correspond to one Pinyin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When a longer Pinyin sequence is given, the number of corresponding legal character sequences is heavily reduced. Thus, to alleviate the ambiguity Table 1 : One Pinyin can be mapped to multiple Chinese characters (the bolded characters are the correct choices corresponding to the Pinyin sequence \"zi ran yu yan chu li\").",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "and speed up the process, in a typical Chinese (Latin) keyboard input method, users always try to type as long a Pinyin sequence as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we consider the typical PTC task in which a whole sentence of Pinyin is given and we attempt to recover its original character sequence. In detail, the objective of PTC is to find the correct character sequence C = c_1, c_2, ..., c_n given a Pinyin sequence S = s_1, s_2, ..., s_n, where s_i refers to a Pinyin and c_i refers to a Chinese character. For example, Table 1 illustrates the Pinyin sequence \"zi ran yu yan chu li\" (\u81ea\u7136\u8bed\u8a00\u5904\u7406, natural language processing) and its corresponding Chinese character sequence. From this table we can observe that one Pinyin can be aligned to many Chinese characters, though only the underlined bolded Chinese characters form the sequence that we actually intend to get. For example, the Pinyin \"zi\" can be mapped to Chinese characters including \"\u5b57\" and \"\u5b50\". Even in this simple example, 5^6 possible Chinese character sequences can be generated. It is easy to show that the number of possible sequences is exponential in the length of the source or target sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formulated as a sequence labeling task, PTC requires a much larger label set than traditional sequence labeling tasks such as named entity recognition (NER) or part-of-speech (POS) tagging. In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. Sequence labeling can be treated as a set of independent classification tasks, one per element of the sequence. Typically, tasks such as NER or POS tagging have dozens of labels, while PTC has thousands. Such a large label set makes sequence labeling inefficient and low-performing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a new approach that formulates the PTC problem as a machine translation task. Given the obvious constraint that the target Chinese character sequence keeps the same order as the source Pinyin sequence, there is no reordering step in the translation procedure. This greatly reduces the difficulty of training such a machine translation system. In this sense, the approach is a monotone SMT, meaning that we can decode the source sentence from left to right without any reordering. At the same time, we can make full use of the phrase-based features of the machine translation framework and of effective parameter estimation methods. The motivation for our work lies in the observation that whole-sentence Pinyin input methods are widely used, yet even typical input methods still produce many conversion errors that users must correct manually, which heavily reduces the efficiency of their work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: Section 2 describes previous work relevant to the PTC problem. Section 3 introduces the proposed approach. Experimental results are given in Section 4, followed by a discussion of the results in Section 5. We reach our conclusion in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similar to the PTC task, the grapheme-to-phoneme and phoneme-to-grapheme conversion problems have also seen many different approaches. For example, (Chen, 2003) introduces several models for grapheme-to-phoneme conversion, including a joint conditional maximum entropy model, a joint maximum n-gram model and a joint maximum n-gram model with syllabification.",
"cite_spans": [
{
"start": 154,
"end": 166,
"text": "(Chen, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To effectively solve the PTC problem, many natural language processing techniques have been applied. By and large, these methods fall into two main categories: rule-based methods and statistical methods. Rule-based methods can make use of concrete linguistic information to understand language, while statistical methods can effectively integrate plentiful features with automatic learning and prediction. Wang et al. (2004) put forward a rough set approach to extract a number of rules from the corpus. (Zhang et al., 2006) presented an error correction post-processing approach based on grammatical and semantic rules. However, natural languages are so sophisticated that rule-based methods cannot effectively handle all situations. Recently, most work has turned to statistical learning methods.",
"cite_spans": [
{
"start": 430,
"end": 448,
"text": "Wang et al. (2004)",
"ref_id": "BIBREF20"
},
{
"start": 528,
"end": 548,
"text": "(Zhang et al., 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One of the earliest attempts to address this problem makes use of language models, and many Chinese Pinyin input methods are still based on this model. (Chen and Lee, 2000) successfully applied language models to the Chinese Pinyin input method. (Lee, 2003) extended language models further to disambiguate Chinese homophones. (Liu and Wang, 2002) built a machine learning approach to Chinese Pinyin-to-character conversion for small-memory applications. Their approach relied on iterative new word identification and word frequency updating, resulting in gradually more accurate segmentation of Chinese characters. Their work can be applied to many small-memory platforms such as Personal Digital Assistants (PDAs). (Zhao. and Sun, 1998) presented a word-self-made Chinese Phonetic-to-Character Conversion (CPCC) algorithm based on Chinese character bigrams, which combined the advantages of CPCC based on Chinese character N-grams with those of CPCC based on Chinese word N-grams.",
"cite_spans": [
{
"start": 168,
"end": 178,
"text": "Lee, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 252,
"end": 263,
"text": "(Lee, 2003)",
"ref_id": "BIBREF13"
},
{
"start": 336,
"end": 355,
"text": "(Liu and Wang, 2002",
"ref_id": "BIBREF17"
},
{
"start": 726,
"end": 747,
"text": "(Zhao. and Sun, 1998)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The paper (Zhang, 2007) presented a way to transform Chinese Pinyins into Chinese characters based on a hybrid word lattice, and studied the related problems with a hybrid language model and algorithms for solving the word lattice.",
"cite_spans": [
{
"start": 10,
"end": 23,
"text": "(Zhang, 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the work of (Zhou et al., 2007) , a segment-based hidden Markov model was utilized for Pinyin-to-Chinese conversion and compared with the character-based hidden Markov model. (Lin and Zhang, 2008) presented a novel Chinese language model and studied its application in Chinese Pinyin-to-character conversion. Their model associates a word with a supporting context, including the frequent sets of the word's nearby phrases and the distances of those phrases to the word.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "(Zhou et al., 2007)",
"ref_id": "BIBREF26"
},
{
"start": 174,
"end": 195,
"text": "(Lin and Zhang, 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The support vector machine (SVM) can also be used to deal with the PTC problem, as PTC can be regarded as classifying a Pinyin into one of the Chinese characters. SVM replaces the minimization of empirical risk in traditional machine learning methods with the principle of minimizing structural risk, and shows satisfactory performance. (Jiang et al., 2007) put forward a PTC framework based on the SVM model. It effectively overcomes the drawback that language models cannot conveniently integrate rich features, and achieves a state-of-the-art accuracy of 92.94%. As one of the most frequently used tools for classification and sequence labeling problems, the Maximum Entropy (ME) model was also adopted for the PTC issue, as in (Wang et al., 2006) , where a class-based MEMM model is proposed to address the PTC conversion problem through exploitation of the Pinyin constraints. (Li et al., 2009) applied the conditional random field (CRF) model to the PTC problem in order to alleviate the label bias problem that usually occurs in the ME model (Andrew et al., 2001) . (Li et al., 2009 ) made use of the constraint that one Pinyin can only map to a limited number of Chinese characters, thus greatly reducing the computation cost. However, their results show that the CRF model does not outperform the ME model (Li et al., 2009) , and the CRF training costs approximately 200 days.",
"cite_spans": [
{
"start": 321,
"end": 341,
"text": "(Jiang et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 711,
"end": 730,
"text": "(Wang et al., 2006)",
"ref_id": "BIBREF21"
},
{
"start": 856,
"end": 873,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF15"
},
{
"start": 1022,
"end": 1043,
"text": "(Andrew et al., 2001)",
"ref_id": "BIBREF0"
},
{
"start": 1046,
"end": 1062,
"text": "(Li et al., 2009",
"ref_id": "BIBREF15"
},
{
"start": 1277,
"end": 1294,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An Artificial Immune Network based model was proposed to deal with the task of PTC conversion (Jiang and Pang, 2009) . They propose an online learning approach to the problems of sparse data and the independent identical distribution assumption.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Jiang and Pang, 2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The PTC problem can also be seen as a kind of machine transliteration, which aims to generate a string in a target language given a character string in a source language. (Li et al., 2004) proposed a joint source-channel model to allow direct orthographical mapping between two different languages. (Hatori and Suzuki, 2011) applied the phrase-based SMT model to predict Japanese pronunciation; however, our work differs from theirs in a key aspect. Both Japanese and Chinese adopt Chinese characters in their writing systems; the work of (Hatori and Suzuki, 2011) was approximately a task of predicting the pronunciation of a Chinese character, whereas ours is to predict a Chinese character sequence from a Pinyin (pronunciation) sequence. The task defined in this paper, as discussed above, is a much more difficult disambiguation task than the one in (Hatori and Suzuki, 2011) . That is, a Chinese character seldom has multiple pronunciations, but the same pronunciation may refer to a great many Chinese characters, usually dozens.",
"cite_spans": [
{
"start": 168,
"end": 185,
"text": "(Li et al., 2004)",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 321,
"text": "(Hatori and Suzuki, 2011)",
"ref_id": "BIBREF9"
},
{
"start": 870,
"end": 895,
"text": "(Hatori and Suzuki, 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we apply a monotone phrasal SMT-based approach to solve the PTC problem. The whole framework is illustrated in Figure 1 . First, we prepare a sentence-aligned corpus and then perform the word alignment process. After this, we extract a translation table from the aligned corpus. Then we use all of the features to train a translation model. The last step is decoding the source sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "PTC Conversion Model",
"sec_num": "3"
},
{
"text": "Our SMT model is based on a discriminative learning framework which contains different real-valued features. In this model, F is a given foreign sentence F = f_1, f_2, ..., f_J that needs to be translated into another sentence E = e_1, e_2, ..., e_I. The real-valued features are defined over F and E as h_i(E, F). The score is given by a log-linear formulation (Och and Ney, 2004) with respect to a series of weight parameters \u03bb_1, ..., \u03bb_n. For a given source language sentence f, we can obtain the target language sentence e according to the following equation:",
"cite_spans": [
{
"start": 372,
"end": 391,
"text": "(Och and Ney, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e_1^I = \\arg\\max_{e_1^I} p_{\\lambda_1^M}(e_1^I | f_1^J) = \\arg\\max_{e_1^I} \\frac{\\exp[\\sum_{m=1}^{M} \\lambda_m h_m(e_1^I, f_1^J)]}{\\sum_{\\tilde{e}_1^I} \\exp[\\sum_{m=1}^{M} \\lambda_m h_m(\\tilde{e}_1^I, f_1^J)]}",
"eq_num": "(1)"
}
],
"section": "Translation Model",
"sec_num": "3.1"
},
{
"text": "where h_m is the m-th feature function and \u03bb_m is the m-th feature weight. The most common features used in modern phrase-based machine translation include the phrase translation feature, the language model feature, the reordering model feature and the word penalty feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3.1"
},
{
"text": "As usual, to train the SMT model parameters, we adopt minimum error rate training (MERT) (Och, 2003) , which tunes the model towards the highest score under the chosen evaluation metric. For sequence decoding, we use a stack decoder (Germann et al., 2001 ).",
"cite_spans": [
{
"start": 92,
"end": 103,
"text": "(Och, 2003)",
"ref_id": "BIBREF19"
},
{
"start": 264,
"end": 285,
"text": "(Germann et al., 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Model",
"sec_num": "3.1"
},
{
"text": "The following real-valued features are adopted for learning: the bidirectional phrase translation probabilities p(\u00ea|f) and p(f|\u00ea); the bidirectional lexical weightings lex(\u00ea|f) and lex(f|\u00ea); the target Chinese character n-gram probability p(\u00ea); and the phrase penalty. The estimation of these features requires a training corpus with source and target alignment at the character or word level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "The bidirectional conditional phrase translation probabilities contain much richer information than a single-direction phrase translation probability. When translating the source phrase f into the target phrase \u00ea, we take both p(\u00ea|f), the target phrase's probability given the source phrase, and p(f|\u00ea), the source phrase's probability given the target phrase. The bidirectional conditional phrase translation probabilities can be estimated by the relative frequency of the phrases extracted from the aligned corpus. Note that a phrase here is no longer a meaningful word combination; it just refers to a sequence of consecutive characters. In practice, a model using both translation directions, with proper weight settings, often outperforms a model that uses only one direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "The lexical weighting feature is a measure that can be effectively used to estimate whether a phrase pair is reliable. Empirically, the lexical weighting (Berger et al., 1994; Brown et al., 1993; Brown et al., 1990 ) is defined as follows:",
"cite_spans": [
{
"start": 170,
"end": 191,
"text": "(Berger et al., 1994;",
"ref_id": "BIBREF1"
},
{
"start": 192,
"end": 211,
"text": "Brown et al., 1993;",
"ref_id": "BIBREF4"
},
{
"start": 212,
"end": 230,
"text": "Brown et al., 1990",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "lex(e|f, a) = \\prod_{i=1}^{length(e)} \\frac{1}{|\\{j | (i, j) \\in a\\}|} \\sum_{\\forall (i,j) \\in a} w(e_i | f_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Here a is an alignment function linking each Chinese character to its corresponding Pinyin, and w refers to the lexical conditional probability. The above equation shows that for a phrase pair (f, e), the translation probability can be interpreted as the product over the aligned lexical pairs (f_j, e_i). For the PTC conversion problem, the lexical pair refers to a Pinyin-character pair. Based on the alignment, we can estimate the probability of the transformation of phrase pairs from the lexical translation aspect. The phrase penalty is used to model the preference towards a sentence segmented into more phrases or fewer phrases. In practice, a factor is introduced for each phrase translation: if the factor is less than 1, longer phrases are preferred; otherwise, shorter phrases are preferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "It is natural to formulate PTC as a sequence labeling task, which usually adopts the maximum entropy Markov model (Berger et al., 1996) as the standard tool in most of the existing literature 2 . Thus we conducted a group of experiments to evaluate the proposed SMT approach against the ME model as the baseline system. The features we use are the most frequently used ones in related work (Wang et al., 2006; Li et al., 2009) .",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
},
{
"start": 375,
"end": 394,
"text": "(Wang et al., 2006;",
"ref_id": "BIBREF21"
},
{
"start": 395,
"end": 411,
"text": "Li et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "First, we need a way to obtain a large number of Pinyin and Chinese character sentence pairs, because to the best of our knowledge no such open dataset is available. Given a Chinese character sequence, it is much easier to convert it to a Pinyin sequence, because when a Chinese character is put in context, it usually has a unique Pinyin counterpart. Based on this observation, we label the Chinese text with Pinyins through the forward maximal matching algorithm (kwong Wong and Chan, 1996) incorporating a word-Pinyin dictionary from Sogou 3 . The data from the People's Daily of 1998 is used as the training set, and the development and test data are taken from 1997. The sizes of the datasets are given in Table 2 ; the 10K and 100K datasets are extracted from the 1M data. We then checked the auto-labeled data and corrected the few mistakes.",
"cite_spans": [
{
"start": 455,
"end": 482,
"text": "(kwong Wong and Chan, 1996)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment settings",
"sec_num": "4.1"
},
{
"text": "The sample sentence \"qi \u6c14 wei \u6e29 ye \u4e5f zhou \u9aa4 jiang \u964d dao \u5230 ling \u96f6 xia \u4e0b 17 \uff11 \uff17 she \u6444 shi \u6c0f du \u5ea6 . \u3002(The temperature also dropped abruptly to seventeen degrees centigrade below zero.)\" is shown in Figure 2 , where each Pinyin and its Chinese character are separated by \" \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment settings",
"sec_num": "4.1"
},
{
"text": "The implementation of the ME model is from the OpenNLP tools 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy model",
"sec_num": "4.2"
},
{
"text": "We assume the current Pinyin sequence is p_1, ..., p_n and the corresponding Chinese character sequence is c_1, ..., c_n. The current Pinyin is p_k. As PTC is usually regarded as a sequence labeling task, we design the feature set for the ME model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the current Pinyin itself, p_k;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the suffixes of the Pinyin. For a given Pinyin s which is made of s_1, ..., s_n, a suffix of s refers to a substring s_i, ..., s_n (i \u2265 2);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the prefixes of the Pinyin. For a given Pinyin s which is made of s_1, ..., s_n, a prefix of s refers to a substring s_1, ..., s_i (i < n);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the previous Pinyin, p_{k\u22121};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the Chinese character c_{k\u22121} corresponding to the previous Pinyin p_{k\u22121} (Markov feature);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the Pinyin before the previous Pinyin, p_{k\u22122};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the Pinyin before the previous Pinyin together with the previous Pinyin, p_{k\u22122} p_{k\u22121};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the next Pinyin, p_{k+1};",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "http://code.google.com/p/hslinuxextra/downloads/list. 4 The tool can be downloaded from http://incubator.apache.org/opennlp/index.html Figure 3 :",
"cite_spans": [
{
"start": 54,
"end": 55,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "The sentences' length distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "Dataset: 10K / 100K / 1M. ME: 0.829 / 0.891 / 0.933. SMT: 0.947 / 0.952 / 0.955. Table 3 : The accuracy of the ME model and the SMT model on different datasets in terms of words.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "\u2022 the Pinyin after the next one, p_{k+2}. Figure 2 illustrates a full feature set sample: for the given sample sentence at the upper part of the figure, all related features for the Pinyin-character pair \"zhou \u9aa4(abruptly)\" are shown in the bottom table of the Figure. Finally, the converted Chinese character sequences are compared to the golden data; the accuracy results can be seen in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 259,
"end": 266,
"text": "Figure.",
"ref_id": null
},
{
"start": 387,
"end": 394,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature template",
"sec_num": "4.2.1"
},
{
"text": "In this experiment, we use Stanford's Phrasal (Cer et al., 2010) , which is an open-source phrase-based machine translation system. For the traditional phrase-based machine translation method, the processing steps are often stated as follows:",
"cite_spans": [
{
"start": 71,
"end": 89,
"text": "(Cer et al., 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Framework",
"sec_num": "4.3"
},
{
"text": "\u2022 train an alignment model from the parallel corpus (not needed for our experiments)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Framework",
"sec_num": "4.3"
},
{
"text": "\u2022 extract phrases based on the former alignment model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Framework",
"sec_num": "4.3"
},
{
"text": "\u2022 minimum error rate training",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation Framework",
"sec_num": "4.3"
},
{
"text": "As the alignment for PTC must be one-to-one, it is unnecessary to train the alignment model, and the phrases can be directly extracted based on the one-to-one alignment of characters and Pinyins. Our experiment uses a 3-gram language model, and the maximum phrase length is set to 7. The results given by the SMT approach on three different training sets are shown in Table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 decoding",
"sec_num": null
},
{
"text": "In this section, we present a detailed experimental analysis comparing the results of the SMT model with those of the ME model in terms of whole-sentence accuracy and time cost. Table 3 shows the main results of our experiments. Here, accuracy means the percentage of correct labels in the decoding results. On all three training data sets, the results of SMT are much better than those of ME.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "To illustrate this point concretely, we can see the different result produced by the two models on the sample sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
{
"text": "\u2022 Pinyin Sequence: zhe yi cheng ji zai quan guo wu da tie lu ju zhong ming lie bang shou",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
{
"text": "\u2022 Character sequence: \u8fd9 \u4e00 \u6210 \u7ee9 \u5728 \u5168 \u56fd \u4e94 \u5927 \u94c1\u8def\u5c40\u4e2d\u540d\u5217\u699c\u9996(This result ranks the best among the five biggest railway bureaus all over the country)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
{
"text": "ME model outputs \"\u8fd9\u4e00\u6210\u7ee9\u5728\u5168\u56fd\u4e94\u5927\u94c1\u8def\u5c40 \u4e2d\u540d\u5217\u5e2e\u624b\" while the result of SMT model is \"\u8fd9 \u4e00\u6210\u7ee9\u5728\u5168\u56fd\u4e94\u5927\u94c1\u8def\u5c40\u4e2d\u540d\u5217\u699c\u9996\". The ME model makes an error as it translate 'bang shou' into '\u5e2e\u624b'(helper) and the SMT model outputs are the completely equal to the golden sentence, and 'bang shou' has been correctly translated into '\u699c\u9996'(the best on the list). By comparing outputs of these two sentence we can see that the SMT model is much more representative than the ME model. As the features we defined are based on the phrase pairs, the model can deduce that the score of target sentence which is composed of phrase pair(ming lie bang shou,\u540d\u5217\u699c \u9996(who is best on the list)) is greater than the score of sentence which is compose with phrase pair(ming lie,\u540d \u5217(who is)) and phrase pair(bang shou, \u5e2e \u624b(helper)). This result also verifies the effectiveness of the SMT features to capture the local property of the source sentence and the target sentence and can combine longer dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
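The phrase-pair preference described here can be sketched as a comparison of log-linear scores. The probabilities below are made-up illustrative values, not estimates from our corpus:

```python
import math

# hypothetical phrase translation probabilities (illustrative values only)
phrase_logp = {
    ("ming lie bang shou", "名列榜首"): math.log(0.05),
    ("ming lie", "名列"): math.log(0.30),
    ("bang shou", "帮手"): math.log(0.10),
}

def score(segmentation):
    # sum of phrase-translation log-probabilities; a full decoder would
    # also add language-model and other weighted features
    return sum(phrase_logp[pair] for pair in segmentation)

one_phrase = [("ming lie bang shou", "名列榜首")]
two_phrases = [("ming lie", "名列"), ("bang shou", "帮手")]
# with these values the single long phrase pair scores higher,
# matching the SMT model's correct choice of 榜首 over 帮手
```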
{
"text": "To show how the proposed SMT approach outperforms the ME model, we give a comparison on another metric, the whole sentence accuracy which represents the ratio how many sentences are completely correctly decoded by the system. This metric could be very useful to evaluate a practical Chinese input method. As even one incorrect decoded character may ask human users to pay too many keyboard hits to correct, which user has to backspace the cursor one by one and re-choose the right character candidate one by one, the whole sentence accuracy could be more effective to evaluate user experience of a Chinese input method. Besides, the whole sentence's accuracy also reflects the model's efficiency in another view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
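The two metrics can be computed as follows (a simple sketch; the function names and toy data are ours):

```python
def char_accuracy(hyps, refs):
    # fraction of characters converted correctly (the per-label metric)
    correct = sum(h == r for hyp, ref in zip(hyps, refs)
                  for h, r in zip(hyp, ref))
    total = sum(len(ref) for ref in refs)
    return correct / total

def sentence_accuracy(hyps, refs):
    # fraction of sentences decoded with no error at all
    return sum(hyp == ref for hyp, ref in zip(hyps, refs)) / len(refs)

refs = ["名列榜首", "铁路局"]
hyps = ["名列帮手", "铁路局"]
# char_accuracy is 5/7 here, but sentence_accuracy is only 0.5:
# one wrong character already spoils a whole sentence
```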
{
"text": "The distribution of sentence length is shown in Figure 3 , from which we can see that most sentences are of length between 20 Chinese characters and 40 Chinese characters. The whole sentence's accuracy for both these two models can be shown in Table 4 . From this table we can see that the results of SMT model is much better than the ME model, which in-X X X X X X X X X X X",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 3",
"ref_id": null
},
{
"start": 244,
"end": 251,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Main Result",
"sec_num": "5.1"
},
{
"text": "Dataset 10K 100K 1M ME 0.075 0.169 0.302 SMT 0.402 0.429 0.454 dicates that a SMT decoder for PTC could bring out much better user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "For the training time of these two models, we make a comparison on the biggest training dataset, which has 1M training sentences. It took about a week or so to train a ME model while the training time of our approach cost about within one day which is much faster than the that of ME model. From the description in (Li et al., 2009) we know that the training of CRF would cost much more than ME and the result of CRF is not better than ME. Being a core component of Chinese input method, PTC is sensitive to the computational cost. Thus time cost of decoding for the two models is reported as follows.",
"cite_spans": [
{
"start": 315,
"end": 332,
"text": "(Li et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Time Cost",
"sec_num": "5.2"
},
{
"text": "To make the differences more exactly, the decoding time of the two models is compared on sentences with the same length. The results are shown in Figure 4 . We can see that the decoding time increases when the sentence length becomes larger. However, even when the sentence length is larger than 120 characters, the decoding time is still less than 0.45s. From this graph, it is apparent that the ME model decoding is slightly faster than the SMT model as the sentence is quite long. However, for most sentences with 20 to 40 characters, the SMT model does not decodes slower than the ME model.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Time Cost",
"sec_num": "5.2"
},
{
"text": "We present a novel approach to the problem of Pinyin-to-character conversion(PTC). Motivated by the similarities between machine translation and PTC, we re-formulize the latter as a simplified machine translation problem. In the new formulization, the most computational expensive part of machine translation, alignment learning, could be conveniently ignored by considering that PTC could build one-to-one mapping pairs in the whole text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Meanwhile, the SMT model for PTC maintains the merit that it integrates more effectively helpful features to outperform the baseline system, ME model, which is a standard sequence labeling tool for traditional PTC task. A group of experiments are carried out to verify the effective of the proposed MT model. The results show that MT model outperforms the previous ME model and provides satisfactory performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A few Chinese characters are pronounced in several different ways, so they may have multiple Pinyin representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Though conditional random field has shown more effective than ME model to solve sequence labeling problem, it is not a practical tool for PTC due to too many labels that PTC requires causing too high computational cost.3 The resource includes 4,083,906 Chinese word and Pinyin pairs, and it can be download from",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "Mccallum",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Pereira",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Lafferty",
"middle": [],
"last": "John",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum Andrew, Pereira Fernando, and Lafferty John. 2001. Conditional random fields: probabilistic mod- els for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Confer- ence on Machine Learning (ICML), pages 282-289, Williamstown, MA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The candide system for machine translation",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Giuet",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Printz",
"suffix": ""
},
{
"first": "Lubo\u0161",
"middle": [],
"last": "Ure\u0161",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "157--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, John R. GiUet, John D. Laf- ferty, Robert L. Mercer, Harry Printz, and Lubo\u0161 Ure\u0161. 1994. The candide system for machine translation. In Proceedings of the workshop on Human Language Technology, pages 157-162. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "Adam",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational lin- guistics, 22(1):39-71.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Cocke",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Fredrick",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Jelinek",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"S"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational linguistics",
"volume": "16",
"issue": "2",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Laf- ferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computa- tional linguistics, 16(2):79-85.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "J",
"middle": [
"Della"
],
"last": "Vincent",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estima- tion. Computational linguistics, 19(2):263-311.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Phrasal: A statistical machine translation toolkit for exploring new model features",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Demonstration Session",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Michel Galley, Daniel Jurafsky, and Christo- pher D.Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In Proceedings of the NAACL HLT 2010 Demonstra- tion Session, pages 9-12, Los Angeles, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new statistical approach to chinese pinyin input",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kai-Fu",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "241--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Chen and Kai-Fu Lee. 2000. A new statistical approach to chinese pinyin input. In Proceedings of the 38th Annual Meeting on Association for Computa- tional Linguistics, pages 241-247, Hong Kong. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Conditional and joint models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Eighth European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Eighth Euro- pean Conference on Speech Communication and Tech- nology.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fast decoding and optimal decoding for machine translation",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yamada",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "228--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Germann, Michael Jahr, Kevin Knight, Daniel marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceed- ings of the 39th Annual Meeting on Association for Computational Linguistics, pages 228-235. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Japanese pronunciation prediction as phrasal statistical machine translation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Hatori",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Hatori and Hisami Suzuki. 2011. Japanese pronun- ciation prediction as phrasal statistical machine trans- lation. In Proceedings of 5th International Joint Con- ference on Natural Language Processing, pages 120- 128. Asian Federation of Natural Language Process- ing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An artificial immune network approach for pinyin-to-character conversion",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiuli",
"middle": [],
"last": "Pang",
"suffix": ""
}
],
"year": 2009,
"venue": "Virtual Environments, Human-Computer Interfaces and Measurements Systems, 2009. VECIMS'09. IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Jiang and Xiuli Pang. 2009. An artificial immune network approach for pinyin-to-character conversion. In Virtual Environments, Human-Computer Interfaces and Measurements Systems, 2009. VECIMS'09. IEEE International Conference on, pages 27-32. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pinyin-to-character conversion model based on support vector machines",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Chinese information processing",
"volume": "21",
"issue": "2",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Jiang, Yi Guan, Xiaolong Wang, and BingQuan Liu. 2007. Pinyin-to-character conversion model based on support vector machines. Journal of Chinese informa- tion processing, 21(2):100-105.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Chinese word segmentation based on maximum matching and word binding force",
"authors": [
{
"first": "Chorkin",
"middle": [],
"last": "Pak Kwong Wong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "200--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pak kwong Wong and Chorkin Chan. 1996. Chinese word segmentation based on maximum matching and word binding force. In Proceedings of the 16th con- ference on Computational linguistics-Volume 1, pages 200-203. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Task adaptation in stochastic language model for chinese homophone disambiguation",
"authors": [
{
"first": "Yue-Shi",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "2",
"issue": "1",
"pages": "49--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue-Shi Lee. 2003. Task adaptation in stochastic lan- guage model for chinese homophone disambiguation. ACM Transactions on Asian Language Information Processing (TALIP), 2(1):49-62.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A joint source-channel model for machine transliteration",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In Proceedings of the 42nd Annual Meeting on Associ- ation for Computational Linguistics, pages 159-166. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A conditional random fields approach to chinese pinyin-to-character conversion",
"authors": [
{
"first": "Lu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yanbing",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Communication and Computer",
"volume": "6",
"issue": "4",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu Li, Xuan Wang, Xiaolong Wang, and Yanbing Yu. 2009. A conditional random fields approach to chi- nese pinyin-to-character conversion. Journal of Com- munication and Computer, 6(4):25-31.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A novel statistical chinese language model and its application in pinyin-tocharacter conversion",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceeding of the 17th ACM conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "1433--1434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Lin and Jun Zhang. 2008. A novel statistical chi- nese language model and its application in pinyin-to- character conversion. In Proceeding of the 17th ACM conference on Information and knowledge manage- ment, pages 1433-1434. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An approach to machine learning of chinese pinyin-to-character conversion for small-memory application",
"authors": [
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xaiolong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings. 2002 International Conference on",
"volume": "3",
"issue": "",
"pages": "1287--1291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bingquan Liu and Xaiolong Wang. 2002. An approach to machine learning of chinese pinyin-to-character conversion for small-memory application. In Machine Learning and Cybernetics, 2002. Proceedings. 2002 International Conference on, volume 3, pages 1287- 1291. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The alignment template approach to statistical machine translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "417--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2004. The align- ment template approach to statistical machine transla- tion. Computational Linguistics, 30(4):417-449.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1, pages 160-167. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mining pinyin-to-character conversion rules from large-scale corpus: a rough set approach. Systems, Man, and Cybernetics, Part B: Cybernetics",
"authors": [
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"So"
],
"last": "Yeung",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Transactions on",
"volume": "34",
"issue": "2",
"pages": "834--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaolong Wang, Qingcai Chen, and Daniel So Yeung. 2004. Mining pinyin-to-character conversion rules from large-scale corpus: a rough set approach. Sys- tems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 34(2):834-844.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A maximum entropy approach to chinese pin yin-tocharacter conversion",
"authors": [
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Waqas",
"middle": [],
"last": "Wanwar",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE International Conference on Systems, Man and Cybernetics",
"volume": "4",
"issue": "",
"pages": "2956--2959",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuan Wang, Lu Li, Lin Yao, and Waqas Wanwar. 2006. A maximum entropy approach to chinese pin yin-to- character conversion. In IEEE International Confer- ence on Systems, Man and Cybernetics, volume 4, pages 2956-2959. IEEE.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Rulebased post-processing of pinyin to chinese characters conversion system",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2006,
"venue": "International Symposium on Chinese Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Zhang, Bo Xu, and Chengqing Zong. 2006. Rule- based post-processing of pinyin to chinese characters conversion system. In International Symposium on Chinese Spoken Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Solving the pinyin-to-chinesecharacter conversion problem based on hybrid word lattice",
"authors": [
{
"first": "Sen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2007,
"venue": "CHINESE JOURNAL OF COMPUTERS-CHINESE EDITION",
"volume": "30",
"issue": "7",
"pages": "1145--1153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sen Zhang. 2007. Solving the pinyin-to-chinese- character conversion problem based on hybrid word lattice. CHINESE JOURNAL OF COMPUTERS- CHINESE EDITION-, 30(7):1145-1153.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A word-selfmade chinese phonetic-character conversion algorithm based on chinese character bigram",
"authors": [
{
"first": "Yibao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Shenghe",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yibao Zhao. and Shenghe Sun. 1998. A word-self- made chinese phonetic-character conversion algorithm based on chinese character bigram [j].",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A segment-based hidden markov model for real-setting pinyin-to-chinese conversion",
"authors": [
{
"first": "Xiaohua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiajiong",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the sixteenth ACM conference on Conference on information and knowledge management",
"volume": "",
"issue": "",
"pages": "1027--1030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaohua Zhou, Xiaohua Hu, Xiaodan Zhang, and Xia- jiong Shen. 2007. A segment-based hidden markov model for real-setting pinyin-to-chinese conversion. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge manage- ment, pages 1027-1030. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An overview of machine translation system which consists of several phases.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "The sample training sentence and its ME features.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "The comparison of decoding time of SMT model and ME model.",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "The size of different datasets",
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "The whole sentence accuracy on test dataset.",
"content": "<table/>",
"num": null
}
}
}
}