{
"paper_id": "Y13-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:32:40.000550Z"
},
"title": "Vietnamese Text Accent Restoration With Statistical Machine Translation",
"authors": [
{
"first": "Luan-Nghia",
"middle": [],
"last": "Pham",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Haiphong University Haiphong",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Viet-Hong",
"middle": [],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Economic And Technical Industries Hanoi",
"location": {
"country": "Vietnam"
}
},
"email": ""
},
{
"first": "Vinh-Van",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University Hanoi",
"location": {
"country": "Vietnam, Vietnam"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Vietnamese accentless texts exist on parallel with official vietnamese documents and play an important role in instant message, mobile SMS and online searching. Understanding correctly these texts is not simple because of the lexical ambiguity caused by the diversity in adding diacritics to a given accentless sequence. There have been some methods for solving the vietnamese accentless texts problem known as accent prediction and they have obtained promising results. Those methods are usually based on distance matching, n-gram, dictionary of words and phrases and heuristic techniques. In this paper, we propose a new method solving the accent prediction. Our method combine the strength of previous methods (combining n-gram method and phrase dictionary in general). This method considers the accent predicting as statistical machine translation (SMT) problem with source language as accentless texts and target language as accent texts, respectively. We also improve quality of accent predicting by applying some techniques such as adding dictionary, changing order of language model and tuning. The achieved result and the ability to enhance proposed system are obviously promising.",
"pdf_parse": {
"paper_id": "Y13-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "Vietnamese accentless texts exist on parallel with official vietnamese documents and play an important role in instant message, mobile SMS and online searching. Understanding correctly these texts is not simple because of the lexical ambiguity caused by the diversity in adding diacritics to a given accentless sequence. There have been some methods for solving the vietnamese accentless texts problem known as accent prediction and they have obtained promising results. Those methods are usually based on distance matching, n-gram, dictionary of words and phrases and heuristic techniques. In this paper, we propose a new method solving the accent prediction. Our method combine the strength of previous methods (combining n-gram method and phrase dictionary in general). This method considers the accent predicting as statistical machine translation (SMT) problem with source language as accentless texts and target language as accent texts, respectively. We also improve quality of accent predicting by applying some techniques such as adding dictionary, changing order of language model and tuning. The achieved result and the ability to enhance proposed system are obviously promising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Accent predicting problem refers to the situation where accents are removed (e.g. by some email preprocessing systems), cannot be entered (e.g. by standard English keyboards), or not explicitly represented in the text (e.g. in Arabic). We resolve the languages using Roman characters in writing together with additional accent and diacritical marks. These languages include European lan-guages such as Spanish and French and Asian languages such as Chinese Pinyin and Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Vietnamese accentless texts coexist with official Vietnamese texts and it is relatively common texts on the internet. Official Vietnamese language is a complex language with many accent (including acute, grave, hook, tilder, and dot-below) and Latinh alphabets. These are two inseparable components in Vietnamese. However, many Vietnamese choose to use accentless Vietnamese because it is easier and quickly to type. For example, a official Vietnamese sentence: ch\u00fang t\u00f4i s\u1ebd bay t\u1edbi H\u00e0 N\u1ed9i v\u00e0o ch\u1ee7 nh\u1eadt ('We will fly to Hanoi on Sunday') will be written as an Vietnamese accentless sentence as chung toi se bay toi Ha Noi vao chu nhat. Decoding such a sentence could be quite hard for both human and machine because of lexical ambiguity. For instance, the accentless term \"toi\" can easily lead to confusion between the original Vietnamese \"t\u00f4i\" ('we') and the plausible alternative \"t\u1edbi\" ('to').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nowadays, the application of information technology to exchanging information is more and more popular. We daily receive many of emails, SMS but the majority of them are without accents which may cause troubles for interpreting the meaning. Therefore, automatic accent restoration of accentless Vietnamese texts have many of applications such as automatically inserting accent to emails, instant message, SMS are written without diacritics Vietnamese, or assistant for website administration in which accent Vietnamese is required. Therefore, it is essential to develop supporting tools which can automatically insert accent to Vietnamese texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Accent predicting problem is the particular problem of lexical disambiguation. The recent approach to lexical disambiguation is corpus-based such as n-gram, dictionary of phrases, ... In this paper, we propose the method for automatic accent restoration using Phrase-based SMT. Vietnamese accentless sentence and Vietnamese accent sentence (office Vietnamese sentence) will be source and target sentence in Phrase based SMT, respectively. We also improve quality of accent predicting by applying some techniques such as adding dictionary, changing size of n-gram of language model. The experiment results with Vietnamese corpus showed that our approach achieves promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organised as follows. Related works are mentioned in Section 2. The methods for accent restoration using SMT are proposed in Section 3. In Section 4, we describe the experiments and results for evaluating the proposed methods. Finally, Section 5 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the recent years, several different methods were proposed to automatically restore accent for Vietnamese texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The VietPad (Quan, 2002) used a dictionary file and this one is stored all of words in Vietnamese. The idea of VietPad is based on the use of dictionary file and each non-diacritic word is mapped 1-1 into diacritic word. However, the dictionary file also is stored many words which are rarely used so these words is incorrectly restored accent. Therefore, VietPad can only restore Vietnamese accent texts with accuracy about 60-85% and this accuracy is depended on content of text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The AMPad (Tam, 2008) is an efficiency Vietnamese accent restoration tool and texts can be restored online. The idea of AMPad is based on the statistical frequency of diacritics words which correspond with non-diacritics word. The program used selection algorithm and the most appropriate word is chosen. AMPad can restore Vietnamese accent texts with accuracy about 80% or higher for political commentary and popular science domain. However, It also restore Vietnamese accent with accuracy about 50% for specialized documents or literature and poetry documents which have complex sentence structure and multiple meaning.",
"cite_spans": [
{
"start": 10,
"end": 21,
"text": "(Tam, 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The VietEditor (Lan, 2005) is based on the idea of VietPad but it is improved. This program used a dictionary file and this file is stored all of phrase which are often used in Vietnamese texts. This file is called phrase dictionary. After each non-diacritic word is mapped 1-1 into diacritic word, the program check the phrase dictionary to find the most appropriate word. VietEditor restore Vietnamese accent texts more flexibly and accurately than VietPad.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "VietEditor (Lan, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The viAccent (Truyen et al., 2008) allows you to restore Vietnamese accent texts online and with several different speed. Generally, the slower speed is the better result is. The idea of program used n-gram language model and it is reported at the conference PRICAI 2008 (The Pacific Rim International Conference on Artificial Intelligence).",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "(Truyen et al., 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The VnMark (Toan, 2008) used n-gram language model to create a dictionary file. It is VN-MarkDic.txt file. This file shows occurrence probability of phrase in Vietnamese texts and it will build the case restoration for word or phrase. This combination will create sentences which are restored Vietnamese accents. When the weight of each sentence is identified, the best way will be selected accent restoration for Vietnamese text. However, Accuracy depends on the sentences of the dictionary file. Because the number of sentence is few so the result is still limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "As mentioned in the section 2, the studies about accent restoration for Vietnamese text are based on experience. These studies used the dictionary file (such as VietPad) or n-gram language model with phrase (such as VnMark, AmPad, viAccent ...) but They are not yet generalized because this combination has some limitation as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our method",
"sec_num": "3"
},
{
"text": "\u2022 The number of phrases are few (each phrase is only 2 words, 3 words) and there are no priority when each phrase is chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our method",
"sec_num": "3"
},
{
"text": "\u2022 The combination of language model and phrase dictionary is simple and It is mainly based on experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our method",
"sec_num": "3"
},
{
"text": "To overcome the above limitation, we present a general approach to restore accent for Vietnamese text. This method is viewed as machine translation from non-diacritical Vietnamese language (source language) into diacritical Vietnamese language (target language). This method has solved two above limitation by the use of phrase-table with the priority levels (the length of the phrase is arbitrary and the corresponding translation probability value) and It is combined language model with phrase-table by log-linear model (adding and turning of parameters for combination).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our method",
"sec_num": "3"
},
{
"text": "In this section, we will present the basics of Phrase-based SMT toolkit (Koehn et al., 2003) .",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "The goal of the model is translating a text from source language to target language. As described by (1), we have sentences in source language (English) e I 1 = e 1 , . . . e j , . . . , e I which are translated into target language (Vietnamese)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "v J 1 = v 1 , . . . , v j , . . . v J .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "Each sentence can be found in the target text then the we will choose sentence so that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V J 1 = argmax v I 1 Pr(v J 1 /e J 1 )",
"eq_num": "(1)"
}
],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "The figure below illustrates the process of phrasebased translation: In phrase translation model, sequence of consecutive words (phrases) are translated into the target language. The length of source phrase can be different from target phrase. We divide source sentence into several phrases and each phrase is translated into a target language, then it reorder the phrases so that the target sentence to satisfy the formula (1) and then they are grafted together. Finally, we get a translation in the target language. Figure 2 shows the phrase-based statistical translation model. There are many translation knowledge which can be used as language models, translation models, etc. The combination of component models (language model, translation model, word sense disambiguation, reordering model...) is based on log-linear model (Koehn et al., 2003) .",
"cite_spans": [
{
"start": 830,
"end": 850,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 518,
"end": 526,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Overview of Phrase-table Statistical Machine Translation",
"sec_num": "3.1"
},
{
"text": "Vietnamese texts with accent are collected from newspapers, books, the internet, etc., and they are reprocessed to remove the excess characters. Then Finally, the system is implemented by the decoder process. For each translation choice will have a hypothesis. Suppose first selection word is the neil_Young, this word is translated into neil_Young (unchanged) because it do not find corresponding word. For simplicity, we translate from left to right of sentence. Next, da word is translated, for example, it can be \u0111\u00e1 or \u0111\u00e3. The probability of each hypothesis is calculated and updated for the each new hypothesis. Next, bieu_dien is translated, this phrase is found in the phrase table, language model and a phrase is chosen that is bi\u1ec3u_di\u1ec5n phrase. Combining hypotheses can happen, da bieu_dien phrase is restored to \u0111\u00e3 bi\u1ec3u_di\u1ec5n. Continue until all the words of sentence are translated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accent prediction based on Phrase",
"sec_num": "3.2"
},
{
"text": "For evaluation, we used an accentless Vietnamese corpus with 206000 pairs, including 200000 pairs for training, 1000 pairs for tunning and 5000 pairs for development test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics and Experiments",
"sec_num": "4.1"
},
{
"text": "The corpus for experiments was collected from newspapers, books,. . . on the internet with topics such as social, sports, science ). The Table 1 shows the summary statistical of our data sets. Several experiments are processed on the basis of Phrase-based Statistical Machine Translation model with MOSES open-source decoder (Koehn et al., 2007) . For training data and turning parameters, we used standard settings in the Moses toolkit (GIZA++ alignment, growdiagfinal-and, lexical reordering models, MERT turning). To build the language model, we used SRILM toolkit (Stolcke, 2002) with 3 and 4-gram. In this experiments, we evaluated the quality of the translation results by BLEU score (Papineni et al., 2002) and accuracy sentences.",
"cite_spans": [
{
"start": 325,
"end": 345,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 568,
"end": 583,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 690,
"end": 713,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Statistics and Experiments",
"sec_num": "4.1"
},
{
"text": "We performed experiments on MT_VR system and MT_VR+Dict system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics and Experiments",
"sec_num": "4.1"
},
{
"text": "\u2022 MT_VR is a baseline Vietnamese restoration system. This system uses phrase-based statistical machine translation with standard settings in the Moses toolkit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics and Experiments",
"sec_num": "4.1"
},
{
"text": "\u2022 MT_VR+Dict is a baseline Vietnamese restoration combine with dictionary information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics and Experiments",
"sec_num": "4.1"
},
{
"text": "\u2022 We experimented on several different corpus and we evaluated translation quality. \u2022 Translate model with the corpus include 50.000, 100.000, 150.000 and 200.000 sentence pairs. After successful training, we tested with 5.000 pairs of sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "To improve the quality of the system we need to build a corpus with better quality as well as greater coverage and we need to process accurately data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We have improved on some of the approach",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The training from the raw corpus may have some limitations due to the size of the corpus. If the corpus is too small, the possibility of useful phrases are not learned when building phrase table. However, if corpus is too larger could in excess. In addition, we used the automatic segment of phrase tool so that it can be some errors in the analysis. We added Vietnamese dictionary of compound and syllable word into the phrase translation table and we assigned weight 1 into the each word, we solved this problem. Results as following: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improved models using dictionary",
"sec_num": "4.2.1"
},
{
"text": "Changing n-gram level, the increased level of language model improved translation results. However, with level 4 or higher then the results has not been almost changed. Because in the Vietnamese language, the phrases including 3 or 4 words are more related than each other. The result with the weight assigned to 1 and level of language model 4: BLEU score=0.9850; accuracy when using MT_VR combine dictionary information: 92%. Table 3 shows experimental results with different training corpus. Experimental results show that using MT_VR combine dictionary infomation with 3-gram have the best result. Table 4 and Figure 5 show that training with 200.000 pairs of sentence will have the best accuracy accents prediction. ",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 614,
"end": 622,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Improved model using n-gram level changes",
"sec_num": "4.2.2"
},
{
"text": "We also compared our method with viAccent system (Truyen et al., 2008) because viAccent is the newest and efficient method for Vietnamese accent prediction. We conducted the experiment with the same test corpus (5000 sentences) for vi-Accent. Bleu scores of both MT_VR+Dict and vi-Accent system were showed on Table 6 . ",
"cite_spans": [
{
"start": 49,
"end": 70,
"text": "(Truyen et al., 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 310,
"end": 317,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Comparison with other methods",
"sec_num": "4.2.3"
},
{
"text": "The experimental results showed that our approach achieves significant improvements over viAccent system. Performance of accent prediction with our method achieves better accuracy than that and some examples in test corpus was showed on Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In this paper, we introduced the issues of accents prediction for accentless Vietnamese texts and proposed a novel method to resolve this problem. The our idea is based on Phrase-based Statistical Machine Translation to develop a Vietnamese text accent restoration system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We combined the advantage of previous approach such as n-gram languages model and phrase dictionary. In general, experimental results showed that our approach achieves promised performance. The quality of accents prediction can be improved if we have a better corpus or assigned appropriate weight to dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work is funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2011.08",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Hybrid Approach to Word Segmentation of Vietnamese Texts",
"authors": [
{
"first": "Phuong-Le",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Huyen-Nguyen Thi",
"middle": [],
"last": "Minh",
"suffix": ""
},
{
"first": "Azim",
"middle": [],
"last": "Roussanaly",
"suffix": ""
},
{
"first": "Vinh-Ho",
"middle": [],
"last": "Tuong",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2nd International Conference on Language and Automata Theory and Applications",
"volume": "5196",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phuong-Le Hong, Huyen-Nguyen Thi Minh, Azim Roussanaly, and Vinh-Ho Tuong. 2008. A Hy- brid Approach to Word Segmentation of Vietnamese Texts. In Proceedings of the 2nd International Con- ference on Language and Automata Theory and Ap- plications, Springer, LNCS 5196.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Pro- ceedings of HLT-NAACL 2003, pages 127-133. Ed- monton, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexan- dra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine transla- tion. Proceedings of ACL, Demonstration Session.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Approach to add accents for Vietnamese text without accent. Informatics Bachelor's thesis",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Phan Quoc",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phan Quoc Lan. 2005. Approach to add accents for Vietnamese text without accent. Informatics Bache- lor's thesis, VietNam National University of Ho Chi Minh City.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A tree-to-string phrase-based model for statistical machine translation",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Thai Phuong Nguyen",
"suffix": ""
},
{
"first": "Tu",
"middle": [
"Bao"
],
"last": "Shimazu",
"suffix": ""
},
{
"first": "Minh",
"middle": [
"Le"
],
"last": "Ho",
"suffix": ""
},
{
"first": "Vinh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Nguyen",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thai Phuong Nguyen, Akira Shimazu, Tu Bao Ho, Minh Le Nguyen, and Vinh Van Nguyen. 2008. A tree-to-string phrase-based model for statistical ma- chine translation. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CoNLL 2008), pages 143-150, Manch- ester, England, August. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of International Conference on Spoken Lan-guage Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. \"Srilm -an extensible language modeling toolkit,\" in Proceedings of International Conference on Spoken Lan-guage Processing.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Constrained Sequence Classification for Lexical Disambiguation",
"authors": [
{
"first": "Dinh",
"middle": [
"Q"
],
"last": "Tran The Truyen",
"suffix": ""
},
{
"first": "Svetha",
"middle": [],
"last": "Phung",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Venkatesh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of PRICAI 2008",
"volume": "",
"issue": "",
"pages": "430--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tran The Truyen, Dinh Q. Phung, and Svetha Venkatesh. 2008. Constrained Sequence Classifi- cation for Lexical Disambiguation. In Proceedings of PRICAI 2008, pages 430-441.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Phrase-based translation model",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Diagram Phrase-based SMT translation based on log-linear model vnTokenizer tool(Hong et al., 2008) is used to segment words in Vietnamese language. After that, the text file with accent and a corresponding accentless text fileis are created. Two text files are the same number of line and each line of accent text file corresponds with an accentless line in another text file.Figure 3shows some sentences in corpus.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Some sentences in corpus",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": ": Phrase-based Statistical Translation model has three important components. They include translation model, language model and decoder. Translation results are calculated with parameters in the PACLIC-27",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Accents restoration base on Phrase-based SMT phrase",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Compare BLUE score with experiment systems",
"uris": null
},
"TABREF1": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table><tr><td>language model. Example, ac-</td></tr><tr><td>centless sentence restoration in Vietnamese:</td></tr><tr><td>neil Young da bieu dien nhieu the loai nhac rock</td></tr><tr><td>va blue.</td></tr><tr><td>After the phrases is segmented. This sentence is:</td></tr><tr><td>neil_Young da bieu_dien nhieu the loai nhac rock</td></tr><tr><td>va blue.</td></tr><tr><td>Results after the translation:</td></tr><tr><td>neil_Young \u0111\u00e3 bi\u1ec3u_di\u1ec5n nhi\u1ec1u th\u1ec3 lo\u1ea1i nh\u1ea1c rock</td></tr><tr><td>v\u00e0 blue .</td></tr><tr><td>First, the source sentence will be divided into</td></tr><tr><td>phrases neil_Young, neil_Young da, neil_Young</td></tr><tr><td>da bieu_dien, da bieu_dien, bieu_dien, bieu_dien</td></tr><tr><td>nhieu,. . . .</td></tr><tr><td>After that, the system find the probability of</td></tr><tr><td>each phrase in the phrase table and language</td></tr><tr><td>model then the weight of sentence is computed .</td></tr><tr><td>Example:</td></tr><tr><td>We found weight of above phrases in the phrase</td></tr><tr><td>table and language model:</td></tr><tr><td>da bieu_dien ||| \u0111\u00e3 bi\u1ec3u_di\u1ec5n ||| 1 1 1 0.857179</td></tr><tr><td>2.718 ||| 1 1</td></tr><tr><td>the loai ||| th\u1ec3 lo\u1ea1i ||| 1 1 1 0.0362146 2.718 ||| 2 2</td></tr><tr><td>-3.436823 \u0111\u00e1 -0.3055001</td></tr><tr><td>-2.309609 \u0111\u00e3 -0.5276677</td></tr><tr><td>-4.109961 bi\u1ec3u_di\u1ec5n -0.2860174</td></tr><tr><td>-4.168163 \u0111\u00e3 bi\u1ec3u_di\u1ec5n</td></tr><tr><td>-2.227281 bi\u1ec3u_di\u1ec5n nhi\u1ec1u</td></tr><tr><td>-1.628649 th\u1ec3 lo\u1ea1i</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "The accuracy of experiment systems",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"text": "The accuracy of experiment systems",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"num": null,
"text": "Results with different n-grams levels",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"text": "Accent prediction of some sentences",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"text": "Compared our method with viAccent system",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}
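The decoding walkthrough in Section 3.2 (split the accentless sentence into phrases, look each phrase up in a phrase table, score hypotheses with a language model, and keep the best-scoring combination) can be sketched as a small monotone decoder. This is a minimal illustrative sketch, not the paper's implementation: the phrase table entries, the bigram language-model scores, and all probabilities below are invented for the example, and a real system (e.g., Moses) learns these models from the aligned corpus and uses beam search.

```python
import math

# Toy phrase table: accentless phrase -> [(accented phrase, probability)].
# All entries and probabilities are invented for illustration.
PHRASE_TABLE = {
    "da": [("đã", 0.6), ("đá", 0.4)],
    "bieu_dien": [("biểu_diễn", 1.0)],
    "da bieu_dien": [("đã biểu_diễn", 0.9)],
    "nhieu": [("nhiều", 1.0)],
}

# Toy bigram language model (log-probabilities, also invented).
LM = {
    ("<s>", "đã"): -0.5, ("<s>", "đá"): -1.5,
    ("đã", "biểu_diễn"): -0.3, ("đá", "biểu_diễn"): -2.0,
    ("biểu_diễn", "nhiều"): -0.4,
}

def lm_score(prev, word):
    return LM.get((prev, word), -5.0)  # flat back-off penalty for unseen bigrams

def restore(tokens, max_phrase_len=2):
    # State = (length of covered prefix, last emitted word). Keeping the last
    # word in the state lets the bigram LM recombine hypotheses correctly,
    # mirroring hypothesis recombination in phrase-based decoders. Decoding is
    # monotone (left to right): accent restoration never reorders words.
    states = {(0, "<s>"): (0.0, [])}
    for i in range(len(tokens)):
        for (pos, prev), (base, seq) in list(states.items()):
            if pos != i:
                continue
            for j in range(i + 1, min(i + max_phrase_len, len(tokens)) + 1):
                src = " ".join(tokens[i:j])
                # Unknown phrases pass through unchanged (like "neil_Young").
                for tgt, p in PHRASE_TABLE.get(src, [(src, 1.0)]):
                    words = tgt.split()
                    score, last = base + math.log(p), prev
                    for w in words:
                        score += lm_score(last, w)
                        last = w
                    key = (j, last)
                    if key not in states or score > states[key][0]:
                        states[key] = (score, seq + words)
    n = len(tokens)
    best = max((v for (pos, _), v in states.items() if pos == n),
               key=lambda v: v[0])
    return " ".join(best[1])

print(restore(["neil_Young", "da", "bieu_dien"]))  # neil_Young đã biểu_diễn
```

Because restoration is monotone, the search reduces to dynamic programming over sentence positions, which is one reason accent prediction fits phrase-based SMT so naturally.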