{
"paper_id": "Y18-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:35:42.907600Z"
},
"title": "Improving the neural network-based machine transliteration for low-resourced language pair",
"authors": [
{
"first": "Ngoc",
"middle": [
"Tan"
],
"last": "Le",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universite du Quebec a Montreal",
"location": {
"addrLine": "201, avenue du President-Kennedy",
"settlement": "Montreal",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Fatiha",
"middle": [],
"last": "Sadat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universite du Quebec a Montreal",
"location": {
"addrLine": "201, avenue du President-Kennedy",
"settlement": "Montreal",
"country": "Canada"
}
},
"email": "sadat.fatiha@uqam.ca"
}
],
"year": "2018",
"venue": "32nd Pacific Asia Conference on Language, Information and Computation (PACLIC 32)",
"identifiers": {},
"abstract": "Grapheme-to-phoneme models are key components in automatic speech recognition and text-to-speech systems. With low-resourced language pairs that do not have available and well-developed pronunciation lexicons, grapheme-to-phoneme models are particularly useful. The current work presents an approach that applies an alignment representation for input sequences and pre-trained source and target embeddings to overcome the transliteration challenge for a low-resourced language pair. The proposed method is tested on the French-Vietnamese low-resourced language pair. The results showed promising improvement compared to the state-of-the-art approaches, with a large increase of +7.30 BLEU and reductions in translation error rate (TER) of \u22128.16 and phoneme error rate (PER) of \u221214.17.",
"pdf_parse": {
"paper_id": "Y18-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "Grapheme-to-phoneme models are key components in automatic speech recognition and text-to-speech systems. With low-resourced language pairs that do not have available and well-developed pronunciation lexicons, grapheme-to-phoneme models are particularly useful. The current work presents an approach that applies an alignment representation for input sequences and pre-trained source and target embeddings to overcome the transliteration challenge for a low-resourced language pair. The proposed method is tested on the French-Vietnamese low-resourced language pair. The results showed promising improvement compared to the state-of-the-art approaches, with a large increase of +7.30 BLEU and reductions in translation error rate (TER) of \u22128.16 and phoneme error rate (PER) of \u221214.17.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In many domains, including machine translation, end-to-end deep learning models have become a valuable alternative to more traditional statistical approaches (Wu et al., 2016; Koehn, 2017) . This is our motivation for applying a similar approach to build a machine transliteration system.",
"cite_spans": [
{
"start": 158,
"end": 175,
"text": "(Wu et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 176,
"end": 188,
"text": "Koehn, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the state of the art, the grapheme-to-phoneme methods were based on the use of grapheme-phoneme mappings (Oh et al., 2006; Duan et al., 2016). However, recurrent neural network approaches do not require any alignment information. In this study, we propose a method to build a low-resourced machine transliteration system, using RNN-based models and alignment information for input sequences. We are interested in resolving out-of-vocabulary words for machine translation systems, such as proper nouns or technical terms, for a low-resourced language pair.",
"cite_spans": [
{
"start": 108,
"end": 125,
"text": "(Oh et al., 2006;",
"ref_id": null
},
{
"start": 126,
"end": 144,
"text": "Duan et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The structure of the article is as follows: Section 2 presents the state of the art on machine transliteration. In Section 3, we describe some background and our proposed approach. Then, in Section 4, we present several experiments, the evaluations and an error analysis. Finally, Section 5 concludes with some perspectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transliteration can be considered as a subtask of machine translation, when we need to translate source graphemes into target phonemes. In other words, an alignment model needs to be constructed first, and the translation model is built on the basis of the alignments. Transliterating a word from the language of its origin to a foreign language is called Forward Transliteration, while transliterating a loan-word written in a foreign language back to the language of its origin is called Backward Transliteration (Karimi et al., 2011) .",
"cite_spans": [
{
"start": 515,
"end": 536,
"text": "(Karimi et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Statistical techniques based on large parallel transliteration corpora work well for rich-resource languages, but low-resource languages do not have the luxury of such resources. For such languages, rule-based transliteration is the only viable option.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "From 2009 to 2016, various transliteration systems were proposed during the Named Entities Workshop evaluation campaigns 1 (Duan et al., 2016) . These campaigns consist of transliterating from English into languages with a wide variety of writing systems, including Hindi, Tamil, Russian, Kannada, Chinese, Korean, Thai and Japanese. We can see that the romanization of non-Latin writing systems remains a complex computational task that depends crucially on which language is involved. Through this workshop, much progress has been made in methodologies for resolving the transliteration of proper nouns. We see the emergence of different approaches, such as grapheme-to-phoneme conversion (Finch and Sumita, 2010; Ngo et al., 2015) , based on statistics like machine translation (Laurent et al., 2009; Nicolai et al., 2015) and neural networks (Finch et al., 2016; Shao and Nivre, 2016; Thu et al., 2016) . Other works used attentionless sequence-to-sequence models for the transliteration task (Yao and Zweig, 2015; Rosca and Breuel, 2016) . One study used a bi-directional Long Short-Term Memory (LSTM) model together with input delays for grapheme-to-phoneme conversion (Rao et al., 2015) .",
"cite_spans": [
{
"start": 121,
"end": 142,
"text": "1 (Duan et al., 2016)",
"ref_id": null
},
{
"start": 691,
"end": 715,
"text": "(Finch and Sumita, 2010;",
"ref_id": null
},
{
"start": 716,
"end": 733,
"text": "Ngo et al., 2015)",
"ref_id": null
},
{
"start": 781,
"end": 803,
"text": "(Laurent et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 804,
"end": 825,
"text": "Nicolai et al., 2015)",
"ref_id": null
},
{
"start": 846,
"end": 866,
"text": "(Finch et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 867,
"end": 888,
"text": "Shao and Nivre, 2016;",
"ref_id": null
},
{
"start": 889,
"end": 906,
"text": "Thu et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 997,
"end": 1018,
"text": "(Yao and Zweig, 2015;",
"ref_id": "BIBREF15"
},
{
"start": 1019,
"end": 1042,
"text": "Rosca and Breuel, 2016)",
"ref_id": null
},
{
"start": 1175,
"end": 1193,
"text": "(Rao et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Another important challenge with the extraction of named entities and automatic transliteration is related to the vast variety of writing systems. All these difficulties are aggravated by the lack of bilingual pronunciation dictionaries for proper nouns, ambiguous transcriptions and orthographic variation in a given language. In addition to transliteration generation systems, there are also transliteration mining systems that try to obtain parallel transliteration pairs from comparable corpora (Kumaran et al., 2010; Tran et al., 2016; Sajjad et al., 2017) .",
"cite_spans": [
{
"start": 499,
"end": 521,
"text": "(Kumaran et al., 2010;",
"ref_id": "BIBREF9"
},
{
"start": 522,
"end": 540,
"text": "Tran et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 541,
"end": 561,
"text": "Sajjad et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In our literature review, we found a few cases in which Vietnamese had been studied for the transliteration task. Cao et al. (2010) applied a statistical machine translation approach to the transliteration task for the English-Vietnamese low-resource language pair, with a performance of 63 BLEU points. Ngo et al. (2015) proposed a statistical model for English and Vietnamese, with a phonological constraint on syllables. Their system performed better than the rule-based baseline system, with a 70% reduction in error rates. Le and Sadat (2017) explored RNNs, particularly LSTMs, for the transliteration task for French and Vietnamese. Their results showed that the RNN-based system performed better than the baseline system, which was based on a statistical approach. In this work, we propose a new approach that uses an alignment representation for input sequences and pre-trained source/target embeddings in the input layer in order to build a neural network-based transliteration system and mitigate the data sparsity inherent to a low-resource language pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "French syllable structure is very rich, with a variety of structures such as CV, CVC, CCVCC, etc., where C is a consonant and V is a vowel. On the other hand, the structure of syllables in Vietnamese is very simple. One of the linguistic features of Vietnamese is that a word consists of one or more syllables (Phe, 1997). A syllable in Vietnamese has the following structure:",
"cite_spans": [
{
"start": 310,
"end": 321,
"text": "(Phe, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonology of Vietnamese",
"sec_num": "3.1"
},
{
"text": "Syllable = Onset + Vowel + Coda",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonology of Vietnamese",
"sec_num": "3.1"
},
{
"text": "The boundary of a syllable depends on consonant groups (onset and coda) and vowels. Vietnamese uses a Latin alphabet with 29 letters (Figure 1 ). There are 12 vowels and 17 consonant unigrams, 9 consonant bigrams and 1 trigram. The vowels consist of V = {\"a\", \"\u0103\", \"\u00e2\", \"e\", \"\u00ea\", \"i\", \"o\", \"\u00f4\", \"\u01a1\", \"u\", \"\u01b0\", \"y\"}. Possible consonants in the Onset = {\"b\", \"ch\", \"c\", \"d\", \"\u0111\", \"gi\", \"gh\", \"g\", \"h\", \"kh\", \"k\", \"l\", \"m\", \"ngh\", \"ng\", \"nh\", \"n\", \"ph\", \"q\", \"r\", \"s\", \"th\", \"tr\", \"t\", \"v\", \"x\", \"p\"}. Of these consonants, eight can appear in Coda = {\"c\", \"ch\", \"m\", \"n\", \"ng\", \"nh\", \"p\", \"t\"}. Phe (1997) found about 10,000 syllables in Vietnamese. In this work, we focus on the graphemes and phonemes of all the words in the bilingual pronunciation dictionary.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 142,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Phonology of Vietnamese",
"sec_num": "3.1"
},
{
"text": "Our proposed approach for efficient transliteration consists of three main steps: (1) First, we obtain a bilingual pronunciation dictionary for a low-resource language pair, in this case French and Vietnamese. Then, this learning data is pre-processed by normalizing to lowercase, removing the hyphens separating syllables and segmenting all syllables at the character level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3.2"
},
{
"text": "(2) We extract the alignment output from the bilingual pronunciation dictionary and modify the input sequences based on the alignment results ( Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3.2"
},
{
"text": "(3) Then we train an RNN-based machine transliteration model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Approach",
"sec_num": "3.2"
},
{
"text": "Given the grapheme sequence of a word and a phoneme sequence corresponding to its pronunciation, an alignment strategy consists of associating each grapheme with its corresponding phoneme, often of length 1. In other words, the grapheme and phoneme sequences are aligned to form joint grapheme-phoneme units, which are called graphones. Yao and Zweig (2015) have reported that, in these alignments, a grapheme may correspond to a null phoneme, a single phoneme or a compound phoneme, with two phonemes. Some illustrations are given in Table 1.",
"cite_spans": [
{
"start": 337,
"end": 357,
"text": "Yao and Zweig (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 535,
"end": 542,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "Given a grapheme sequence G = {g 1 , g 2 , ..., g N }, a corresponding phoneme sequence P = {p 1 , p 2 , ..., p N }, and an alignment A, the posterior probability p(P |G, A) is estimated as follows: Table 1 : Examples of grapheme-to-phoneme alignments, in this case grapheme for French and phoneme for Vietnamese. A grapheme G aligns with a single phoneme (1-to-1), a compound phoneme (1-to-2), two graphemes with one phoneme (2-to-1), or many graphemes with many phonemes (m-to-m). The letter 's' is aligned with a null phoneme '_' that is not pronounced. Figure 3 : Our RNN-based model architecture with encoder-decoder bi-directional LSTM and alignment representation on input sequences. We use <s> and </s>, <os> and </eos> markers to pad the grapheme/phoneme sequences to a fixed length.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 1",
"ref_id": null
},
{
"start": 557,
"end": 565,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "Alignments 1-to-1 1-to-2 2-to-1 m-to-m Graphemes p | a | r | i | s | n | i | c | e | j | a | c | q:u | e:s | t:r | u | f:f | a:u | t | Phonemes p | a | r | i | _ n | \u00ed:t | x | \u1edd | gi | \u1eaf | c | c | \u01a1 | t:r | u:y | p:h | \u00f4 | _ |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(P|G, A) \\approx \\prod_{n=1}^{N} p(p_n \\mid p_{n-k}^{n-1}, g_{n-k}^{n+k})",
"eq_num": "(1)"
}
],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "where k is the context window size, and n is the position index in the alignment. A graphone alignment strategy can be automatically learnt from a pronunciation dictionary using maximum entropy derived from equation 1 (Barros and Weiss, 2006) or expectation-maximization (Jiampojamarn et al., 2007) . Bisani and Ney (2008) used an alignment strategy in a multi-joint sequence model for grapheme-to-phoneme conversion. Their system achieved better accuracy in terms of phoneme error rate.",
"cite_spans": [
{
"start": 218,
"end": 242,
"text": "(Barros and Weiss, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 271,
"end": 298,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 301,
"end": 322,
"text": "Bisani and Ney (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "In contrast, in RNN-based models, this alignment information is not available during the decoding phase, in which the given grapheme sequence does not indicate any clusters aligned with a null, single or compound phoneme, or vice-versa. We observe that combining the 1-to-2 and 2-to-1 alignment strategies, called bigram-align, can address this challenge when dealing with small data. Using graphone bigrams can sufficiently cover the output alphabet in the training data. Alignments using graphone bigrams are similar to parallel corpora alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "In this work, we use alignment information to alter input sequences because, in low-resource settings, representation is crucial for neural network models. On the other hand, the source and target embeddings will be pre-trained on the training data. For the transliteration task, these embeddings are considered as linear vector mappings between the source and the target. They then become one of the features in the input layer. We expect that exploiting this kind of alignment representation for RNN-based machine transliteration will enhance the system's performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grapheme-to-Phoneme Alignments",
"sec_num": "3.3"
},
{
"text": "To evaluate the efficiency of our proposed transliteration system in low resource settings, we used a bilingual pronunciation dictionary that has been collected from the news websites, as presented by Cao et al. (2010) . The learning data comprise 4,259 bilingual French-Vietnamese named-entity pairs, with a vocabulary of 31 graphemes on the French source side and 71 phonemes on the Vietnamese target side. This bilingual dictionary was filtered out from a 146M-word corpus collected from French-Vietnamese newspaper texts available on the Internet between April 2008 and October 2009. We found that most of the named entities were persons, locations and organizations. To overcome the problem of data sparsity in the learning data, we performed a pre-processing step that segments all syllables at the character level and lowercases the whole dataset. The bilingual pronunciation dictionary was divided into training, development and test sets at a ratio of 90%, 5% and 5%, respectively.",
"cite_spans": [
{
"start": 201,
"end": 218,
"text": "Cao et al. (2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Configuration",
"sec_num": "4"
},
{
"text": "To deal with the alignment representation, we used the m-2-m aligner 2 toolkit (Jiampojamarn et al., 2007) to align the training data at the character level. We chose m = 2 (bigram-align) for all experiments; this means that a maximum of two graphemes on the source side will be aligned with a maximum of two phonemes on the target side. For the pre-trained source and target embeddings, we applied the word2vec 3 toolkit (Mikolov et al., 2013) with a dimension of 64, a continuous space window size of 5 and the skip-gram option.",
"cite_spans": [
{
"start": 79,
"end": 106,
"text": "(Jiampojamarn et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 422,
"end": 444,
"text": "(Mikolov et al., 2013)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Configuration",
"sec_num": "4"
},
{
"text": "We applied the nmt-keras 4 toolkit (Peris, 2017) to train our transliteration model for the French-Vietnamese language pair. In the transliteration system configuration, we used two-layer encoder-decoder bi-directional RNN, with a 64-dimension projection layer to encode the input sequences and 128 nodes in each hidden layer. We tested two mechanisms, LSTM and GRU (Gated Recurrent Unit). The attention mechanism focuses on the input sequence of annotations (Bahdanau et al., 2014) . These 64-dimension character embeddings are shared across encoder and decoder LSTM layers. We used the Adam optimizer to learn the weights of the network with a default learning rate of 0.001. For decoding, the beam size was set to 6. All the RNN hyper-parameters were determined by tuning on the development set.",
"cite_spans": [
{
"start": 459,
"end": 482,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Configuration",
"sec_num": "4"
},
{
"text": "We use different evaluation metrics such as BiLingual Evaluation Understudy (BLEU) (Papineni et al., 2002) , Translation Error Rate (TER) (Snover et al., 2009) , and Phoneme Error Rate (PER), which is similar to Word Error Rate. 2 https://github.com/letter-to-phoneme/m2m-aligner/ 3 https://code.google.com/archive/p/word2vec/ 4 https://github.com/lvapeab/nmt-keras/ These metrics were automatically evaluated with a tool, MultEval version 0.5.1 5 (Clark et al., 2011) . To evaluate our proposed approach, we implemented five systems (Table 2): (1) Baseline system A: phrase-based statistical machine translation (pbSMT).",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Papineni et al., 2002)",
"ref_id": null
},
{
"start": 138,
"end": 159,
"text": "(Snover et al., 2009)",
"ref_id": null
},
{
"start": 446,
"end": 468,
"text": "5 (Clark et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 534,
"end": 544,
"text": "(Table 2):",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "We implemented a pbSMT system with Moses 6 (Koehn et al., 2007) . We used mGIZA (Gao and Vogel, 2008) to align the corpus at the character level, and SRILM (Stolcke and others, 2002) to create a character-based 5-gram language model for the target language.",
"cite_spans": [
{
"start": 43,
"end": 63,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 80,
"end": 101,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 156,
"end": 182,
"text": "(Stolcke and others, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "(2) Baseline system B: multi-joint sequence model for grapheme-to-phoneme conversion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "We applied the Sequitur-G2P 7 toolkit to train a transliteration model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "(3) System 1: encoder-decoder bidirectional LSTM + attention mechanism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "(4) System 2: encoder-decoder bidirectional GRU + attention mechanism + alignment representation for input sequences. The difference between the two baseline systems' performance is minor. Baseline system B seems slightly more efficient than baseline system A, with a gain of +4.40 BLEU, as well as reductions of -3.58 in translation error rate (TER) and -6.20 in phoneme error rate (PER) ( Table 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "In addition, by comparing the two baseline systems and the three systems 1, 2 and 3 (proposed approach), we find significant improvements, with scores up to 68.60 BLEU and reductions in translation error rate (TER) and phoneme error rate (PER) of up to 15.92 and 30.03, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "In principle, the GRU cell does not need to use a memory unit to control the flow of information as the LSTM cell does. It is possible that all hidden states are used directly without any control. GRU-based architectures have fewer parameters and can therefore train a little faster or need less data to generalize. This is one of the reasons that system 3 achieved the best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "Moreover, by comparing system 3 with the other two systems A and B, we observe significant gains of +7.30 and +2.90 BLEU as well as reductions of -8.16 and -4.58 TER, -14.17 and -7.97 PER, respectively ( Table 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "All the experimental results showed that using the alignment representation, combined with the pre-trained source and target embeddings, resulted in significant advances over the other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.2"
},
{
"text": "We performed an error analysis of the five evaluation systems to better understand what kinds of errors in predicted phonemes occurred between the French source and the Vietnamese target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "We compared the transliteration prediction results for the named entities in the five evaluation systems with some proper nouns that had not been seen during the learning phase (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 186,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "The results showed that baseline systems A (pbSMT) and B (multi-joint sequence model) made some transliteration prediction errors in proper nouns such as Paris, Tours, Truffaut and Zurich, while systems 1, 2 and 3 (our proposed approach) provided better results. The baseline systems encountered difficulties in optimally predicting all the transliteration possibilities due to the varied origins of proper nouns (e.g. French, English, Italian, Spanish, Portuguese, Russian, etc.) and the pronunciation of different tail syllables (Table 3) :",
"cite_spans": [
{
"start": 419,
"end": 480,
"text": "French, English, Italian, Spanish, Portuguese, Russian, etc.)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 531,
"end": 540,
"text": "(Table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "\u2022 \"-s\" or \"-ce\" (x\u01a1 or \u03c6) : Nice \u2192 n\u00edt-x\u01a1 or Paris \u2192 pa-ri",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "\u2022 \"-x\" or \"-ch\" (\u00edch or \u00edt or \u03c6) : Zurich \u2192 giuyr\u00edch or giuy-r\u00edt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "\u2022 and other graphemes such as \"-u-\" (/y/ = uy or u) : Truffaut \u2192 tru-ph\u00f4 or truy-ph\u00f4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "In another analysis, we observed that the baseline systems, especially the statistical-based system A, made some errors such as 'x' to 's' (for Vincente) or 'd uy' to 'iu' (for Yukon) or 'im' to 'anh' (for Zimbabwe), whereas our proposed RNN-based system (system 3) successfully transliterated these cases. The baseline systems removed the last phoneme in the hypotheses for Nice and Jacques, whereas our proposed RNN-based system produced these predictions successfully. But it proposed another transliteration hypothesis in 'b \u01a1' instead of 'b u \u00ea' (for Zimbabwe) (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 566,
"end": 575,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.3"
},
{
"text": "In this paper, we presented an approach for machine transliteration in low-resource settings, based on the alignment representation for input sequences and pre-trained source and target embeddings. The method can be trained using a small amount of training data. Moreover, this approach could be extended to any low-resource language pair for which a bilingual pronunciation dictionary is available. Therefore, this method is extremely useful for under-resourced languages for which training data is difficult to find. 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Copyright 2018 by the authors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://workshop.colips.org/news2016/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cs.cmu.edu/~jhclark/downloads/multeval-0.5.1.tgz 6 http://www.statmt.org/moses/ 7 https://www-i6.informatik.rwth-aachen.de/web/Software/g2p.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Baseline system B System 1 System 2 System 3 p a r i p a r \u00ed t p a r \u00ed t p a r i p a r \u00ed:t x:\u01a1 p a r i _ t i g \u1edd r a n n\u01a1 t i g \u1edd r a n n\u01a1 t i g \u1edd r a n n\u01a1 t i g \u1edd r a n n\u01a1 t i g:\u1edd r a n n:\u01a1 t i g:\u1edd r a n n:\u01a1 t u l u gi \u01a1 t u l u x \u01a1 t u l u x \u01a1 t u l u gi \u01a1 t u l uy s:\u1edd t u l uy s:\u01a1 t u a t u \u1ed1 c t x \u01a1 t u \u1ed1 c t x \u01a1 t u\u00fd t x\u01a1 t:\u01a1 uy _ x:\u01a1 t u a _ x:\u01a1 tr uy ph \u00f4 tr uy ph \u1ed1 t tr uy ph \u1ed1 t tr \u00fa p ph \u00f4 tr uy ph \u00f4 tr u ph \u00f4 gi uy r \u00ed ch gi uy r i gi uy r i gi uy r \u00ed ch gi uy r \u00ed:ch gi uy r \u00ed:t Finch and Eiichiro Sumita. 2010. Transliter- ation using a phrase-based statistical machine translation system to re-score the output of a joint multi-",
"cite_spans": [
{
"start": 500,
"end": 544,
"text": "Finch and Eiichiro Sumita. 2010. Transliter-",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1,
"end": 49,
"text": "Baseline system B System 1 System 2 System 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline system A",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximum entropy motivated grapheme-to-phoneme, stress and syllable boundary prediction for portuguese textto-speech",
"authors": [
{
"first": "Maria Jo\u00e3o",
"middle": [],
"last": "Barros",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2006,
"venue": "IV Jornadas en Tecnolog\u00edas del Habla",
"volume": "",
"issue": "",
"pages": "177--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Jo\u00e3o Barros and Christian Weiss. 2006. Maxi- mum entropy motivated grapheme-to-phoneme, stress and syllable boundary prediction for portuguese text- to-speech. IV Jornadas en Tecnolog\u00edas del Habla. Zaragoza, Spain, pages 177-182.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Jointsequence models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2010 Named Entities Workshop",
"volume": "50",
"issue": "",
"pages": "48--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech communication, 50(5):434-451. gram model. In Proceedings of the 2010 Named Enti- ties Workshop, pages 48-52. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Target-bidirectional neural models for machine transliteration",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaolin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "78--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Finch, Lemao Liu, Xiaolin Wang, and Eiichiro Sumita. 2016. Target-bidirectional neural models for machine transliteration. ACL 2016, pages 78-82.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parallel implementations of word alignment tool",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel implemen- tations of word alignment tool. In Software Engineer- ing, Testing, and Quality Assurance for Natural Lan- guage Processing, pages 49-57. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion",
"authors": [
{
"first": "Sittichai",
"middle": [],
"last": "Jiampojamarn",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
},
{
"first": "Tarek",
"middle": [],
"last": "Sherif",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT-NAACL",
"volume": "7",
"issue": "",
"pages": "372--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme con- version. In HLT-NAACL, volume 7, pages 372-379.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Machine transliteration survey. ACM Computing Surveys (CSUR)",
"authors": [
{
"first": "Sarvnaz",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Falk",
"middle": [],
"last": "Scholer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Turpin",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "43",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarvnaz Karimi, Falk Scholer, and Andrew Turpin. 2011. Machine transliteration survey. ACM Computing Sur- veys (CSUR), 43(3):17.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Report of news 2010 transliteration mining shared task",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kumaran",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Named Entities Workshop",
"volume": "",
"issue": "",
"pages": "21--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Kumaran, Mitesh M Khapra, and Haizhou Li. 2010. Report of news 2010 transliteration mining shared task. In Proceedings of the 2010 Named Entities Workshop, pages 21-28. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Grapheme to phoneme conversion using an smt system",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Laurent",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Del\u00e9glise",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Meignier",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of INTERSPEECH, ISCA",
"volume": "",
"issue": "",
"pages": "708--711",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Laurent, Paul Del\u00e9glise, Sylvain Meignier, and France Sp\u00e9cinov-Tr\u00e9laz\u00e9. 2009. Grapheme to phoneme conversion using an smt system. In Proceed- ings of INTERSPEECH, ISCA, pages 708-711.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A neural network transliteration model in low resource settings",
"authors": [
{
"first": "Ngoc",
"middle": [
"Tan"
],
"last": "Le",
"suffix": ""
},
{
"first": "Fatiha",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc Tan Le and Fatiha Sadat. 2017. A neural net- work transliteration model in low resource settings. In",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Srilm-an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke et al. 2002. Srilm-an extensible lan- guage modeling toolkit. In Interspeech, volume 2002, page 2002.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Comparison of grapheme-tophoneme conversion methods on a myanmar pronunciation dictionary",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Kyaw Thu",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Yoshinori",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Iwahashi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing 2016",
"volume": "",
"issue": "",
"pages": "11--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Kyaw Thu, Win Pa Pa, Yoshinori Sagisaka, and Naoto Iwahashi. 2016. Comparison of grapheme-to- phoneme conversion methods on a myanmar pronun- ciation dictionary. Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Pro- cessing 2016, pages 11-22.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A character level based and word level based approach for chinese-vietnamese machine translation",
"authors": [
{
"first": "Phuoc",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Dien",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Hien",
"middle": [
"T"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Phuoc Tran, Dien Dinh, and Hien T Nguyen. 2016. A character level based and word level based approach for chinese-vietnamese machine translation. Compu- tational intelligence and neuroscience, 2016. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine trans- lation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sequence-tosequence neural net models for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.00196"
]
},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequence-to- sequence neural net models for grapheme-to-phoneme conversion. arXiv preprint arXiv:1506.00196.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "(1) pre-processing, (2) modification of the input sequences based on alignment representation and (3) creation of an RNN-based machine transliteration. The whole process is illustrated in Figure 2. 32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors a b c d e f g h i j k l m n o p q r s t u v w Illustration of Vietnamese alphabet."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Architecture of machine transliteration for a low-resource language pair dealing with bilingual named entities"
},
"TABREF1": {
"content": "<table/>",
"text": "Evaluation of scoring for all systems: BLEU, TER and PER.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}