{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:47.392401Z"
},
"title": "Translation of New Named Entities from English to Chinese",
"authors": [
{
"first": "Zizheng",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Tosho",
"middle": [],
"last": "Hirasawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Houjing",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino Tokyo",
"country": "Japan"
}
},
"email": "houjing@komachi.live"
},
{
"first": "Masahiro",
"middle": [],
"last": "Kaneko",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino Tokyo",
"country": "Japan"
}
},
"email": "kaneko-masahiro@ed.tmu.ac.jp"
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino Tokyo",
"country": "Japan"
}
},
"email": "komachi@tmu.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "New things are being created and new words are constantly being added to languages worldwide. However, it is not practical to translate them all manually into a new foreign language. When translating from an alphabetic language such as English to Chinese, appropriate Chinese characters must be assigned, which is particularly costly compared to other language pairs. Therefore, we propose a task of generating and evaluating new translations from English to Chinese focusing on named entities. We defined three criteria for human evaluation-fluency, adequacy of pronunciation, and adequacy of meaning-and constructed evaluation data based on these definitions. In addition, we built a baseline system and analyzed the output of the system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "New things are being created and new words are constantly being added to languages worldwide. However, it is not practical to translate them all manually into a new foreign language. When translating from an alphabetic language such as English to Chinese, appropriate Chinese characters must be assigned, which is particularly costly compared to other language pairs. Therefore, we propose a task of generating and evaluating new translations from English to Chinese focusing on named entities. We defined three criteria for human evaluation-fluency, adequacy of pronunciation, and adequacy of meaning-and constructed evaluation data based on these definitions. In addition, we built a baseline system and analyzed the output of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A machine translation (MT) system is expected to generate the correct translation results for each input. However, new named entities (NEs), such as company names, character names, and product names, are constantly being created worldwide. Therefore, such words must be assigned new translations without referring to any translations in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, the translation of NEs between different alphabets, for example, from English (En) to Chinese (Zh) characters, is more difficult than that between other language pairs. It is necessary to select appropriate Chinese characters (Hanzi) in consideration of appropriate fluency, adequacy of pronunciation (hereinafter referred to as \"pronunciation\"), and adequacy of meaning (hereinafter referred to as \"meaning\"). These three dimensions should also be considered in NE translation evaluation. For example, the NE pair of (Curtiss-Wright, \u67ef\u8482\u65af-\u83b1\u7279) is evaluated high in terms of pronunciation. However, its fluency in Chinese is not good because it is not an original Chinese word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although several studies have been conducted on En-Zh MT (Wang et al., 2017; Deng et al., 2018) , NE translation (Chen and Zong, 2011) and transliteration (Wan and Verspoor, 1998; Benites et al., 2020) , no research has been conducted so far on generating and evaluating the translations of brand new NEs in terms of fluency, pronunciation, and meaning. The difficulty in considering these three dimensions makes translating a new NE a challenging task. In Chinese, there can be several Hanzi with similar pronunciations or meanings, and they all can be selected for appropriate NE translation. For instance, the NE pairs in En-Zh of (Blackstone Group, \u9ed1\u77f3\u96c6\u56e2) and (Blackstone Group, \u767e\u4ed5 \u901a), where the former represents the literal translation and the latter represents transliteration, are both correct, and it is difficult to judge which is preferable. Thus, it is first necessary to define the criteria of fluency, pronunciation, and meaning.",
"cite_spans": [
{
"start": 57,
"end": 76,
"text": "(Wang et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 77,
"end": 95,
"text": "Deng et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 113,
"end": 134,
"text": "(Chen and Zong, 2011)",
"ref_id": "BIBREF3"
},
{
"start": 155,
"end": 179,
"text": "(Wan and Verspoor, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 180,
"end": 201,
"text": "Benites et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, in this paper, we propose a novel task of generating and evaluating brand new NE translations for En-Zh. The main contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose the evaluation criteria for new En-Zh NE (company name) translations-fluency, pronunciation, and meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We create a baseline model for NE translations and analyze the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide and release a novel method of evaluation dataset 1 for En-Zh, focusing on company names, which includes both real NE translations and our system output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In terms of NE translation (Chen et al., 1998; Wan and Verspoor, 1998; Oh et al., 2009) , because the two languages use completely different symbolic representations in terms of graphemes and Score",
"cite_spans": [
{
"start": 27,
"end": 46,
"text": "(Chen et al., 1998;",
"ref_id": "BIBREF2"
},
{
"start": 47,
"end": 70,
"text": "Wan and Verspoor, 1998;",
"ref_id": "BIBREF15"
},
{
"start": 71,
"end": 87,
"text": "Oh et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "[Table 1: Criteria for human evaluation. Score 5: fluency is original AND #splitting = 0; pronunciation is similar AND #syllables are close; meaning is translated AND short. Score 4: fluency is original AND #splitting = 1. Score 3: fluency is original AND #splitting = 2; pronunciation is similar; meaning is translated. Score 2: fluency is original AND #splitting \u2265 3. Score 1: others in all three dimensions.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Original AND #splitting\u2265 3 --1 Others Others Others phonemes, the English graphemes, phonetically associated English letters, must be converted into Hanzi, which represent ideas and meanings. In terms of literal translation, because Hanzi usually express certain connotations, choosing the appropriate Hanzi should also be considered. In addition, owing to the lack of apparent semantic content on location names and people's names, these words cannot be expressed in Chinese through words equivalent in meaning. Further, it is very likely that the standard translation of these words cannot be found in existing lexical resources, which increases the complexity of the task. A semantic transliteration method (Li et al., 2007) is proposed for the translation of personal names from English to Chinese, which considers the language of origin, gender, and the given or surname information of the source names. The approach aims at maintaining the phonetic equivalence as well as optimizing the semantic transfer.",
"cite_spans": [
{
"start": 710,
"end": 727,
"text": "(Li et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, as highlighted in the paper, the research is a case study, and the proposed mathematical framework does not extend to the machine transliteration of NEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We construct our evaluation data based on the company list of the New York Stock Exchange 2 , in which we select the companies that have both English and Chinese Wikipedia pages as the source and target NEs, respectively; the titles of Chinese pages can be requested for by using the Langlinks API and the English titles. We chose company names because they reflect the corporations' characteristics, providing more information for evaluating fluency, pronunciation, and meaning. Because we focus on the English to Chinese NE translation, companies from Greater China are ignored. In all, 338 En-Zh NE pairs were evaluated by two annotators in terms of the three dimensions mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "Tables 1 and 2 list the criteria used for human evaluation as well as some examples of our evaluation data. As a global criterion, we ignore certain common words that do not contribute to the translation of business names, such as Inc., corporation, and group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},
{
"text": "When evaluating the performance of MT, different types of human judgment including fluency and adequacy are employed. A quantitative and qualitative investigation was conducted by (Tu et al., 2017) . They confirmed that the source and target contexts in neural MT are highly correlated with translation adequacy and fluency, respectively. For our study, this finding may indicate that the more common the translation using existing Chinese expressions, the better is its fluency. As for adequacy, the consistency between the source and target contexts should be prioritized in terms of both pronunciation and meaning.",
"cite_spans": [
{
"start": 180,
"end": 197,
"text": "(Tu et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},
{
"text": "Fluency measures whether a translation is fluent, regardless of the correct meaning, but it takes the order in the translation highly into account (Snover et al., 2009) . We use a 5-level scale to evaluate the fluency in Chinese, where two dimensions are considered. Considering that the original words or phrases in the target language provide more fluency, we make one dimension as to whether the original Chinese words or phrases are included.",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(Snover et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},
{
"text": "Moreover, less semantic splitting provides greater fluency because the more similar the modification relationship among Hanzi, the more likely they are not to be split. Another dimension we consider is the number of semantic splits. In addition, a missing is considered for the semantic orientation of subtokens (the result of semantic splitting). If there is at least one combination between subtokens consisting of (positive word, negative word) or (neutral word, negative word), the fluency score is decreased by 1 to obtain the final fluency score (which should be at least 1). For example, \"\u7f57\u6e23 \u58eb\u901a\u8baf\" is one way to translate \"Rogers Communications\", where it will be split three times to give \"\u7f57\uff5c\u6e23\uff5c\u58eb\uff5c\u901a\u8baf\" , which implies that it will obtain a fluency score of 2. As the subtoken \"\u6e23\" is a negative word, whereas others are neutral, the missing leads to a score of -1 so that the final score for \"\u7f57\u6e23\u58eb\u901a\u8baf\" is 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},
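{
"text": "The scoring procedure above can be summarized as F_final = max(1, F_base - P_neg), where F_base is the 5-level fluency score determined by the number of semantic splits as in Table 1, and P_neg = 1 if at least one subtoken pair forms a (positive, negative) or (neutral, negative) combination and 0 otherwise (this notation is introduced here only as a summary of the procedure). For \"\u7f57\u6e23\u58eb\u901a\u8baf\", F_base = 2 and P_neg = 1, so F_final = max(1, 2 - 1) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},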
{
"text": "Adequacy measures whether the translation conveys the correct meaning, even if the translation is not completely fluent (Snover et al., 2009) . In this study, we use a three-level scale to measure the translation performance with respect to both pronunciation and meaning because there is no necessity to subdivide further. For the meaning dimension, we consider it being short as a criterion because short meanings are easy to remember, which is essential for a business name. It should be noted that the names of people, places, and so on in the transliteration (pronunciation) should also be evaluated with a high score in the translation (meaning) because they also contribute to the meaning.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Snover et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.1"
},
{
"text": "We evaluated the annotations across two annotators using the kappa coefficient (Landis and Koch, 1977) . The kappa coefficients of fluency, pronunciation, and meaning are approximately 0.68, 0.62, and 0.65, respectively, which indicates that the inter-rater reliability is substantial.",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Landis and Koch, 1977)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement",
"sec_num": "3.2"
},
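{
"text": "For reference, the kappa coefficient is computed as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between the two annotators and p_e is the agreement expected by chance; values between 0.61 and 0.80 are conventionally interpreted as substantial agreement (Landis and Koch, 1977).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement",
"sec_num": "3.2"
},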
{
"text": "In the following section, we describe a baseline model for character-based NE translation. Furthermore, we propose filters to remove noisy samples from the Wikititles dataset and demonstrate that the sanitized data could improve the performance of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "4"
},
{
"text": "The attention-based encoder-decoder model is a well-known architecture for MT (Bahdanau et al., 2015; Vaswani et al., 2017) . The model tackles MT as a sequence-to-sequence problem.",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 102,
"end": 123,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "Although the model was first proposed to operate at the level of words, recent papers have proposed character-level neural MT models (Lee et al., 2017) . In the present study, we employed Bahdanau et al. (2015) as our baseline model for English-Chinese NE translation.",
"cite_spans": [
{
"start": 133,
"end": 151,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 188,
"end": 210,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4.1"
},
{
"text": "Model The encoder of our model has two layers with 256 hidden dimensions; therefore, the bidirectional GRU has a dimension of 512 and the decoder GRU state has a dimension of 256. The input word embedding and output vector sizes are 256.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "For training, we used the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 0.001, clipping gradient norm of 1.0, dropout rate of 0.5, batch size of 512, and early stopping patience of 10. In the evaluation phase, we performed a beam search with a size of 12. We trained three models with different seeds and used character-level BLEU to evaluate the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "We train and validate our models on a subset of the Wikititles dataset from which parallel entities are removed, and evaluate them on the Wikititles company dataset. In particular, we randomly split the Wikititles into two parts: 99% for training and 1% for validation. It should be noted that the training and validation sets include NEs from various domains, whereas the test set includes only company names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "Further, we trained and evaluated our models on sanitized data, that is, data from which the samples satisfying any of the following four conditions were removed: 1) English and Chinese names are identical, 2) Chinese name does not contain Hanzi,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "Examples Char (En) Char (Zh) Table 4 : Named entity translation performance of baseline models. Character-level BLEU and chrF scores are reported. \"Vanilla\"/\"Sanitized\" denotes model trained on original/sanitized training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Split",
"sec_num": null
},
{
"text": "3) English name contains Hanzi, and 4) English or Chinese name is longer than 50 or 20 characters, respectively. Table 3 presents the statistics of the resulting training and validation data. All entities in both English and Chinese are split into characters, and the space is replaced with a special token (in our case, <s>). The vocabularies are built with all words from the original/sanitized training data, yielding 1,063/353 characters in English and 9,598/8,852 in Chinese. Table 4 presents the corpus-level BLEU-4 and chrF-4 scores (Popovi\u0107, 2015) for each model on the English to Chinese translation. We trained three models with random initial states and used their average BLEU scores with error range to represent the final BLEU score on the test set. It can be observed that our baseline system achieved a reasonable performance (\u223c50 BLEU) on the validation set but failed to translate most entities in the test set (<20 BLEU).",
"cite_spans": [
{
"start": 540,
"end": 555,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 481,
"end": 488,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Split",
"sec_num": null
},
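{
"text": "For reference, chrF is a character n-gram F-score: chrF_beta = (1 + beta^2) * chrP * chrR / (beta^2 * chrP + chrR), where chrP and chrR are character n-gram precision and recall averaged over n-gram orders, and beta weights recall relative to precision (Popovi\u0107, 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Split",
"sec_num": null
},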
{
"text": "The poor performance of our model is attributed to the fact that the Wikititles dataset contains highly diverse data, whereas the test set includes only English System B F P M Zoetis \u4f50\u4f0a\u8482\u65af 0.00 1 5 1 Wipro \u7ef4\u666e\u7f57 50.81 1 5 5 Table 5 : Translation examples generated by the \"sanitized\" model. B, F, P, and M denote BLEU, fluency, pronunciation, and meaning, respectively. company names. Therefore, it is difficult to train an NE translation model and generate new company names. Another potential reason is that the Wikititles dataset contains noisy data. The results obtained for models trained on sanitized data (\"Sanitized\" in Table 4 ) support these ideas and reflect a substantial improvement (+10.30 BLEU) obtained using four simple rule-based filters.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 5",
"ref_id": null
},
{
"start": 625,
"end": 632,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "General performance",
"sec_num": null
},
{
"text": "Human evaluation Further, we manually annotated 338 outputs of the model trained with sanitized data using the criteria introduced in Sec. 3.1. Figure 1 depicts the correlation between the BLEU score and human evaluation scores, where we use the sentence-level BLEU-4 score (Papineni et al., 2002) as the BLEU score for each translation item. We set F + max(P, M ) as the human evaluation scores, where F, P, and M represent fluency, pronunciation, and meaning, respectively. Here, considering that certain NEs are translated using hybrid methods, max(P, M ) is used to balance the weights from transliteration and literal translation. The Pearson correlation coefficient is also calculated, as approximately 0.12. Both Figure 1 and the Pearson correlation coefficient indicate that there is nearly no correlation between these two scores.",
"cite_spans": [
{
"start": 274,
"end": 297,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 720,
"end": 728,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "General performance",
"sec_num": null
},
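{
"text": "The correlation is computed in the standard way: r = cov(B, H) / (sigma_B * sigma_H), where B is the sentence-level BLEU score and H = F + max(P, M) is the human evaluation score of each translation item; r of approximately 0.12 corresponds to a negligible linear relationship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General performance",
"sec_num": null
},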
{
"text": "The reason for the low correlation is that most of the translation obtains a BLEU score of 0.0 but different human evaluation scores. In the present test set, certain NE translations are not similar to the references but could be evaluated by humans as being effective. Similarity with references does not represent the quality of the NE translation. Table 5 presents translation examples generated by the model trained on the sanitized data. The reference NE of \"Zoetis\" is \"\u7855\u817e\", where our model transliterates it and makes a high P score. It obtains a low F score because there is no original Zh word. For the NE \"Wipro\", our model translated it with a homophone, where the reference NE is \"\u5a01\u666e\u7f57\"; as a transliteration of people's names, it obtained high P and M scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "General performance",
"sec_num": null
},
{
"text": "This paper describes a new method for NE translation from English to Chinese. For this purpose, we presented human evaluation criteria for business names and build a test set. Further, we found that the correlation between the BLEU score and human evaluation scores is weak. The reason is that while the human evaluation scores represent the quality of the NE translation, BLEU represents the similarity between the reference NEs and the outputs from the model. Thus, we conclude that, for evaluating NE translations, reference-less methods should be more effective than reference-based methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/toshohirasawa/ enzh-named-entity-translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wikipedia.org/wiki/ Category:Companies_listed_on_the_New_ York_Stock_Exchange",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "TRANSLIT: A large-scale name transliteration resource",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Benites",
"suffix": ""
},
{
"first": "Gilbert",
"middle": [
"Fran\u00e7ois"
],
"last": "Duivesteijn",
"suffix": ""
},
{
"first": "Pius",
"middle": [],
"last": "von D\u00e4niken",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "3265--3271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Benites, Gilbert Fran\u00e7ois Duivesteijn, Pius von D\u00e4niken, and Mark Cieliebak. 2020. TRANSLIT: A large-scale name transliteration resource. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 3265-3271, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Proper name translation in cross-language information retrieval",
"authors": [
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sheng-Jie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yung-Wei",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Shih-Chung",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "232--236",
"other_ids": {
"DOI": [
"10.3115/980845.980883"
]
},
"num": null,
"urls": [],
"raw_text": "Hsin-Hsi Chen, Sheng-Jie Huang, Yung-Wei Ding, and Shih-Chung Tsai. 1998. Proper name translation in cross-language information retrieval. In 36th An- nual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 232- 236, Montreal, Quebec, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A semanticspecific model for Chinese named entity translation",
"authors": [
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "138--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yufeng Chen and Chengqing Zong. 2011. A semantic- specific model for Chinese named entity translation. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 138-146, Chiang Mai, Thailand. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Alibaba's neural machine translation systems for WMT18",
"authors": [
{
"first": "Yongchao",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Shanbo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jingang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shenglan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Guchun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Changfeng",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Boxing",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "368--376",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6408"
]
},
"num": null,
"urls": [],
"raw_text": "Yongchao Deng, Shanbo Cheng, Jun Lu, Kai Song, Jingang Wang, Shenglan Wu, Liang Yao, Guchun Zhang, Haibo Zhang, Pei Zhang, Changfeng Zhu, and Boxing Chen. 2018. Alibaba's neural machine translation systems for WMT18. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 368-376, Belgium, Brus- sels. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Landis",
"suffix": ""
},
{
"first": "G",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "33",
"issue": "1",
"pages": "159--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Landis and G. G. Koch. 1977. The measurement of observer agreement for categorical data. Biomet- rics, 33(1):159-174.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "365--378",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00067"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine trans- lation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365-378.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semantic transliteration of personal names",
"authors": [
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Khe Chai",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Jin-Shea",
"middle": [],
"last": "Kuo",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haizhou Li, Khe Chai Sim, Jin-Shea Kuo, and Minghui Dong. 2007. Semantic transliteration of personal names. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, pages 120-127, Prague, Czech Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Can Chinese phonemes improve machine transliteration?: A comparative study of English-to-Chinese transliteration models",
"authors": [
{
"first": "Jong-Hoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "658--667",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jong-Hoon Oh, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Can Chinese phonemes improve machine transliteration?: A comparative study of English-to-Chinese transliteration models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 658-667, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ter-plus: paraphrase, semantic, and alignment enhancements to translation edit rate",
"authors": [
{
"first": "Matthew",
"middle": [
"G"
],
"last": "Snover",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Translation",
"volume": "23",
"issue": "2-3",
"pages": "117--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew G. Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009. TER-Plus: paraphrase, semantic, and alignment enhancements to translation edit rate. Machine Translation, 23(2-3):117-127.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Context gates for neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "87--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017. Context gates for neural machine translation. Transactions of the Association for Computational Linguistics, 5:87-99.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic English-Chinese name transliteration for development of multilingual resources",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [
"Maria"
],
"last": "Verspoor",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "1352--1356",
"other_ids": {
"DOI": [
"10.3115/980691.980789"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Wan and Cornelia Maria Verspoor. 1998. Automatic English-Chinese name transliteration for development of multilingual resources. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 1352-1356, Montreal, Quebec, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word, subword or character? An empirical study of granularity in Chinese-English NMT",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2017,
"venue": "China Workshop on Machine Translation",
"volume": "",
"issue": "",
"pages": "30--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yining Wang, Long Zhou, Jiajun Zhang, and Chengqing Zong. 2017. Word, subword or character? An empirical study of granularity in Chinese-English NMT. In China Workshop on Machine Translation, pages 30-42. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Correlation between BLEU score and human evaluation scores. Darkness of point denotes the level of overlap.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Criteria for human evaluation are fluency, pronunciation, and meaning. For the fluency dimension, Original indicates that the Chinese NE contains at least one original Chinese word/phrase, and #splitting refers to the number of semantic splits. For the pronunciation dimension, Similar indicates that the En-Zh pronunciations are similar, and #syllable represents the number of syllables in En-Zh. For the meaning dimension, Translated denotes that all words are translated directly, and Short denotes that the number of Hanzi is equal to or less than 4.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>English</td><td>Chinese (Pinyin)</td><td>F P M</td></tr><tr><td>Celanese</td><td>\u585e\u62c9\u5c3c\u65af (Sai La Ni Si)</td><td>1 5 1</td></tr><tr><td>Altria</td><td>\u5965\u9a70\u4e9a (Ao Chi Ya)</td><td>3 5 1</td></tr><tr><td>Apple Inc.</td><td>\u82f9\u679c\u516c\u53f8 (Pin Guo Gong Si)</td><td>5 1 5</td></tr></table>"
},
"TABREF1": {
"text": "Examples of evaluation data, in which two annotators evaluated identically. F, P, and M denote fluency, pronunciation, and meaning, respectively.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Statistics of dataset used to train MT models.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"4\">Char (En) and Char (Zh) denote the average number</td></tr><tr><td colspan=\"4\">of characters in the English and Chinese words for the</td></tr><tr><td colspan=\"2\">entities, respectively.</td><td/><td/></tr><tr><td/><td>Validation</td><td>Test</td><td/></tr><tr><td>Model</td><td>BLEU</td><td>BLEU</td><td>chrF</td></tr><tr><td>Vanilla</td><td>48.59</td><td colspan=\"2\">8.07\u00b10.36 15.84</td></tr><tr><td>Sanitized</td><td colspan=\"3\">49.03 18.37\u00b11.82 21.16</td></tr></table>"
}
}
}
}