{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:02:30.218416Z"
},
"title": "Iterative Multilingual Neural Machine Translation for Less-Common and Zero-Resource Language Pairs",
"authors": [
{
"first": "Minh-Thuan",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Engineering and Technology, VNU Hanoi",
"location": {}
},
"email": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Engineering and Technology, VNU Hanoi",
"location": {}
},
"email": ""
},
{
"first": "Van-Vinh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Engineering and Technology, VNU Hanoi",
"location": {}
},
"email": ""
},
{
"first": "Minh-Cong",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Engineering and Technology, VNU Hanoi",
"location": {}
},
"email": "minhcongnguyen1508@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Research on providing machine translation systems for unseen language pairs has gained increasing attention in recent years. However, translation quality remains poor for most language pairs, especially for less-common pairs such as Khmer-Vietnamese. In this paper, we present a simple iterative training-generating-filtering-training process that utilizes all available pivot parallel data to generate synthetic data for unseen directions. In addition, we propose a filtering method based on word alignments and the longest parallel phrase to filter out noisy sentence pairs in the synthetic data. Experimental results on zero-shot Khmer\u2192Vietnamese and Indonesian\u2192Vietnamese directions show that our proposed model outperforms several strong baselines and achieves promising results under the zero-resource condition on the ALT benchmark. The results also indicate that our model can easily improve its quality with a small amount of real parallel data.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Research on providing machine translation systems for unseen language pairs has gained increasing attention in recent years. However, translation quality remains poor for most language pairs, especially for less-common pairs such as Khmer-Vietnamese. In this paper, we present a simple iterative training-generating-filtering-training process that utilizes all available pivot parallel data to generate synthetic data for unseen directions. In addition, we propose a filtering method based on word alignments and the longest parallel phrase to filter out noisy sentence pairs in the synthetic data. Experimental results on zero-shot Khmer\u2192Vietnamese and Indonesian\u2192Vietnamese directions show that our proposed model outperforms several strong baselines and achieves promising results under the zero-resource condition on the ALT benchmark. The results also indicate that our model can easily improve its quality with a small amount of real parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) has recently achieved impressive performance on high-resource language pairs which have large amounts of parallel training data (Wu et al., 2016) (Vaswani et al., 2017) . However, these systems still perform poorly when parallel data is scarce or unavailable. Research on zero-resource language pairs has gained much attention in recent years, with pivot-language, zero-shot NMT, and zero-resource NMT approaches proposed to handle translation of unseen language pairs.",
"cite_spans": [
{
"start": 161,
"end": 178,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 179,
"end": 201,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In pivot language approaches, sentences are first translated from the source language into the pivot language through a source-pivot system, and then from the pivot language into the target language by using a pivot-target system. Although this simple process has shown strong translation performance (Johnson et al., 2017) , it has a few limitations. The pivot translation process at least doubles decoding time during inference, and more than one pivot language may be required to bridge the source and target languages. Additionally, translation errors compound along the pipeline.",
"cite_spans": [
{
"start": 301,
"end": 323,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
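The two-step pivot pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual systems: `translate_km_en` and `translate_en_vi` are hypothetical toy stand-ins for trained source-pivot and pivot-target models, used only to show how decoding happens twice and how a first-pass error propagates to the output.

```python
# Sketch of a pivot translation pipeline (toy "systems" stand in for
# trained NMT models; they are illustrative, not the paper's models).

def translate_km_en(sentence):
    # Toy Khmer->English system: per-token dictionary lookup.
    km_en = {"khmer_hello": "hello", "khmer_world": "world"}
    return " ".join(km_en.get(tok, "<unk>") for tok in sentence.split())

def translate_en_vi(sentence):
    # Toy English->Vietnamese system.
    en_vi = {"hello": "xin_chao", "world": "the_gioi", "<unk>": "<unk>"}
    return " ".join(en_vi.get(tok, "<unk>") for tok in sentence.split())

def pivot_translate(sentence):
    # Two decoding passes: source->pivot, then pivot->target.
    # An error in the first pass (an <unk>) propagates to the second.
    return translate_en_vi(translate_km_en(sentence))

print(pivot_translate("khmer_hello khmer_world"))  # xin_chao the_gioi
print(pivot_translate("khmer_hello khmer_oov"))    # error compounds: xin_chao <unk>
```

Note how the pipeline must run both decoders for every input, which is the "at least doubles decoding time" cost mentioned above.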
{
"text": "Zero-shot NMT approaches are inspired by multilingual NMT (multi-NMT) systems that use only one encoder and one decoder to represent multiple languages in the same vector space; hence it should be possible to take advantage of data from high-resource language pairs to improve the translation of low-resource language pairs. (Ha et al., 2016; Johnson et al., 2017) showed that zero-shot systems are able to generate reasonable output in the target language by adding the desired output language's tag at the beginning of the source sentence. Note that there is no direct parallel data between the source and target languages during training. However, the performance of these approaches is still poor when the source and target languages are unrelated or the observed language pairs are not sufficient to capture the relation of unseen language pairs. Similar to the above approaches, zero-resource NMT approaches do not use any direct source-target parallel corpus; instead, they focus on generating a pseudo-parallel corpus by using back-translation to translate sentences in the pivot language of the pivot-target parallel corpus to the source language (Lakew et al., 2017; Gu et al., 2019) . One of the main limitations of these approaches is that the source side differs between training and testing scenarios, since the source in training is synthetic. However, these approaches still outperform pivot language and zero-shot NMT approaches because they can potentially utilize all available parallel and monolingual corpora (Currey and Heafield, 2019) .",
"cite_spans": [
{
"start": 329,
"end": 346,
"text": "(Ha et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 347,
"end": 368,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 1171,
"end": 1191,
"text": "(Lakew et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 1192,
"end": 1208,
"text": "Gu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1542,
"end": 1569,
"text": "(Currey and Heafield, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
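The target-language tagging scheme described above can be sketched in a few lines. This is a minimal sketch of the idea from Johnson et al.; the `<2xx>` tag format and the toy corpus are illustrative assumptions, not the paper's actual preprocessing.

```python
# Minimal sketch of target-language tagging for a single multilingual
# model; the "<2xx>" tag format is an illustrative convention.

def add_target_tag(src_sentence, tgt_lang):
    # Prepend an artificial token naming the desired output language.
    return f"<2{tgt_lang}> {src_sentence}"

def make_multilingual_corpus(pairs):
    # pairs: list of (src_sentence, tgt_sentence, tgt_lang) triples.
    # A single model trained on such tagged data can be asked for an
    # unseen direction (zero-shot) simply by choosing the tag.
    return [(add_target_tag(s, lang), t) for s, t, lang in pairs]

corpus = make_multilingual_corpus([
    ("khmer sentence", "english sentence", "en"),       # km->en (seen)
    ("english sentence", "vietnamese sentence", "vi"),  # en->vi (seen)
])
print(corpus[0][0])  # <2en> khmer sentence
# Zero-shot request at inference time, km->vi, never seen in training:
print(add_target_tag("khmer sentence", "vi"))  # <2vi> khmer sentence
```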
{
"text": "In this work, our main contributions are (1) improving the quality of zero-resource NMT by introducing a simple iterative training-generating-filtering-training process and (2) proposing a noise filtering method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we evaluate our approach on less-common and low-resource language pairs such as Khmer-Vietnamese. In this scenario, the source-pivot (Khmer-English) and pivot-target (English-Vietnamese) pairs are also low-resource (the pivot is often English). Our approach starts from a multilingual NMT system that is trained on source-pivot and pivot-target pairs; the system then generates a source-target synthetic corpus by back-translating the pivot side of the pivot-target corpus to the source language. Next, we filter out poor translations in the generated data by applying our proposed filtering method based on word alignments and the longest parallel phrase. After that, the multilingual NMT system is further trained on both the filtered synthetic data and the original training data, and we repeat this training-generating-filtering-training cycle for a few iterations. As a result, our experiments showed that by adding the filtered synthetic corpus, our model outperformed the pivot, zero-shot, and zero-resource baselines on zero-shot Khmer\u2192Vietnamese and Indonesian\u2192Vietnamese directions on the Asian Language Treebank (ALT) Parallel Corpus (Riza et al., 2016) . Moreover, the experiment results indicate that our model can easily improve its quality with a small amount of real parallel data.",
"cite_spans": [
{
"start": 1162,
"end": 1181,
"text": "(Riza et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. We first review relevant works on translation for zero-resource language pairs in Section 2, then introduce some background and related formulas in Section 3. Next, we present our approach in Section 4. After that, we describe our experiments and results in Section 5. Finally, our conclusion is presented in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Training a machine translation system for translating unseen language pairs has received much interest from researchers in recent years. This section discusses relevant works on zero-shot and zero-resource NMT, which are related to our approach. Zero-shot NMT (Ha et al., 2016; Johnson et al., 2017) showed that a single NMT system can learn to translate between language pairs it has never seen during training (zero-shot translation). Their solution does not require any changes to the traditional NMT model architecture. Instead, they add an artificial token at the beginning of the source sentence to specify the required target language. Although this approach showed promising results for some untrained language pairs, such as from Portuguese to Spanish, its performance is often not good enough to be useful and lags behind pivoting. In our work, we use this system as an initial multi-NMT system. (Arivazhagan et al., 2019) pointed out that the success of zero-shot translation depends on the ability of the model to capture language-invariant features for cross-lingual transfer. Therefore, they proposed two classes of auxiliary losses to align the source and pivot vector spaces. The first minimizes the discrepancy between the feature distributions through a domain adversarial loss (Gani et al., 2015) that trains a discriminator to distinguish between different encoder languages using representations from an adversarial encoder. The second takes advantage of available parallel data to enforce alignment between the source and the pivot language at the instance level. However, this approach does not work for less-common language pairs such as Khmer-Vietnamese, since the multilingual training data, including source-pivot and pivot-target pairs, is too small to capture the language-invariant features. Zero-resource NMT (Lakew et al., 2017) used a multilingual NMT system to generate zero-shot translations on some portion of the training data, then re-started the training process on both the multilingual data and the generated translations. By adding the synthetic corpus, the model can alleviate the spurious correlation problem. This work is similar to ours, but they did not filter out noisy sentence pairs in the synthetic corpus. (Currey and Heafield, 2019) augmented zero-resource NMT with monolingual data from the pivot language. The authors pointed out that the pivot language is often a high-resource language with higher-quality data than the monolingual source or target language (the pivot language is often English), so leveraging monolingual pivot-language data is worthwhile to enhance the quality of zero-resource NMT systems.",
"cite_spans": [
{
"start": 259,
"end": 276,
"text": "(Ha et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 277,
"end": 298,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 908,
"end": 934,
"text": "(Arivazhagan et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1304,
"end": 1323,
"text": "(Gani et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 2282,
"end": 2309,
"text": "(Currey and Heafield, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The standard NMT architecture contains an encoder, a decoder and an attention mechanism, which are trained with maximum likelihood in an end-to-end system. Assume the source sentence and its translation are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "x = {x_1, ..., x_{T_x}} and y = {y_1, ..., y_{T_y}}, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Encoder is a bidirectional Recurrent Neural Network (RNN) (Schuster and Paliwal, 1997) that encodes the source sentence into a sequence of hidden state vectors; the hidden state vector of word",
"cite_spans": [
{
"start": 58,
"end": 86,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "x_i is h_i = [\u2192h_i ; \u2190h_i],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "where \u2212 \u2192 h i and \u2190 \u2212 h i are forward and backward hidden state respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\overrightarrow{h}_i = f(e_{x_i}, \\overrightarrow{h}_{i-1}) \\quad (1) \\qquad \\overleftarrow{h}_i = f(e_{x_i}, \\overleftarrow{h}_{i+1})",
"eq_num": "(2)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Note that e_{x_i} is the embedding vector of word x_i, and f is a nonlinear function such as Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) or a Gated Recurrent Unit.",
"cite_spans": [
{
"start": 101,
"end": 135,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Attention is a mechanism used to compute a context vector by searching through the source sentence at each decoding step. At the j-th step, the score between the target word y_j and the i-th source word is computed and normalized as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e_{ij} = v_a^\\top \\tanh(W_a s_{j-1} + U_a h_i)",
"eq_num": "(3)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\alpha_{ij} = \\frac{\\exp(e_{ij})}{\\sum_{i'=1}^{T_x} \\exp(e_{i'j})}",
"eq_num": "(4)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "The context vector c_j is computed as a weighted sum of all source hidden states:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "c_j = \\sum_{i=1}^{T_x} \\alpha_{ij} h_i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
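Equations (3)-(5) can be traced numerically with a tiny example. The sketch below is illustrative only: the matrices, vectors, and dimensions are invented toy values, not the paper's settings, and the additive-attention score, softmax normalization, and context vector follow the three equations step by step.

```python
import math

# Toy numeric sketch of Eqs. (3)-(5): additive attention scores, their
# softmax normalization, and the context vector. All values are
# illustrative; dimensions are tiny for readability.

def attention(h, s_prev, W_a, U_a, v_a):
    # h: list of source hidden states h_i (each a list of floats)
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]
    Ws = matvec(W_a, s_prev)
    # e_ij = v_a^T tanh(W_a s_{j-1} + U_a h_i)          -- Eq. (3)
    scores = []
    for h_i in h:
        Uh = matvec(U_a, h_i)
        t = [math.tanh(a + b) for a, b in zip(Ws, Uh)]
        scores.append(sum(vi * ti for vi, ti in zip(v_a, t)))
    # alpha_ij = softmax over source positions           -- Eq. (4)
    m = max(scores)
    exps = [math.exp(e - m) for e in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # c_j = sum_i alpha_ij * h_i                         -- Eq. (5)
    dim = len(h[0])
    c = [sum(a * h_i[d] for a, h_i in zip(alphas, h)) for d in range(dim)]
    return alphas, c

alphas, c = attention(
    h=[[1.0, 0.0], [0.0, 1.0]],
    s_prev=[0.5, 0.5],
    W_a=[[1.0, 0.0], [0.0, 1.0]],
    U_a=[[1.0, 0.0], [0.0, 1.0]],
    v_a=[1.0, -1.0],
)
print([round(a, 3) for a in alphas])
```

The attention weights always sum to one, so the context vector is a convex combination of the source hidden states.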
{
"text": "Decoder is a unidirectional RNN which uses the representation of the encoder and the context vector to predict words in the target language. At the j-th step, the target hidden state s_j is computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s_j = f(e_{y_{j-1}}, s_{j-1}, c_j)",
"eq_num": "(6)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Given the previously predicted words y_{<j} = {y_1, ..., y_{j-1}}, the context vector c_j and the target hidden state s_j, the decoder is trained to predict the next word y_j as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y_j | y_{<j}, s_j, c_j) = \\mathrm{softmax}(W_o t_j)",
"eq_num": "(7)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t_j = g(e_{y_{j-1}}, c_j, s_j)",
"eq_num": "(8)"
}
],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "where g is a nonlinear function and W_o is an output matrix that projects t_j to a vocabulary-sized vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "3.2 Multilingual NMT (Ha et al., 2016; Johnson et al., 2017) introduced a simple approach that uses a standard NMT system to translate between multiple languages. This system leverages knowledge from translation between multiple languages and is referred to as a multilingual NMT system. In order to incorporate multilingual data containing multiple language pairs into the standard NMT system, the authors proposed one simple modification to the input data: adding an artificial token at the beginning of the input sentence to indicate the desired target language. After adding the token to the input data, over-sampling or under-sampling techniques are applied to balance the ratio of language pairs in the multilingual data, and the model is trained on all the multilingual data at once. In addition, a shared wordpiece model (Sennrich et al., 2015) across all the source and target data is used to address the translation of unknown words and to limit the vocabulary size for computational efficiency, usually to 32,000 word pieces.",
"cite_spans": [
{
"start": 21,
"end": 38,
"text": "(Ha et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 39,
"end": 60,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
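The over-sampling step mentioned above can be sketched as follows. This is a minimal sketch under assumed inputs: the corpus names, sizes, and the up-sample-with-replacement strategy are illustrative choices, not the paper's exact balancing procedure.

```python
import random

# Sketch of over-sampling: smaller language pairs are up-sampled (with
# replacement) so each pair contributes roughly equally per epoch.
# Corpus names and sizes below are illustrative.

def oversample(corpora, seed=0):
    # corpora: dict mapping pair name -> list of tagged sentence pairs.
    rng = random.Random(seed)
    target = max(len(c) for c in corpora.values())
    balanced = {}
    for name, data in corpora.items():
        extra = [rng.choice(data) for _ in range(target - len(data))]
        balanced[name] = data + extra
    return balanced

corpora = {
    "en-vi": [("<2vi> s%d" % i, "t%d" % i) for i in range(8)],
    "km-en": [("<2en> s%d" % i, "t%d" % i) for i in range(2)],
}
balanced = oversample(corpora)
print({k: len(v) for k, v in balanced.items()})  # {'en-vi': 8, 'km-en': 8}
```

Under-sampling is the mirror image: trimming larger pairs down to the smallest pair's size instead of duplicating the smaller ones.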
{
"text": "This paper concentrates on improving the quality of zero-resource NMT between two languages X and Y given a pivot language Z. We assume that we have X \u2194 Z and Z \u2194 Y parallel data, but no direct X \u2194 Y data. Algorithm 1 presents our proposed training process. Notably, our experiments focus on less-common and low-resource language pairs such as Khmer-Vietnamese and Indonesian-Vietnamese, so the amount of X \u2194 Z and Z \u2194 Y parallel data is quite small. Therefore, in order to build a good initial multi-NMT model, the first step of our work is to augment the multilingual training data, as described in Section 4.1. As shown in Algorithm 1, given initial training data D consisting of X \u2194 Z and Z \u2194 Y parallel data, our training process contains four main steps, which are iterated multiple times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Algorithm 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Iterative Multi-NMT with Data Filtering Procedure 1: D = (X \u2194 Z, Z \u2194 Y) 2: repeat 3: Multi-NMT \u2190 training using dataset D 4: for each Z in (Z \u2194 Y) do 5: X* \u2190 Multi-NMT(Z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": ", generating 6: end for 7: S \u2190 (X* \u2194 Y), synthetic data 8: F \u2190 Filter(S), filtering synthetic data 9: D \u2190 D \u222a F 10: until Multi-NMT converges Figure 1 : Algorithm of the proposed approach using iterative multi-NMT with data filtering.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 150,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
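The control flow of Algorithm 1 can be sketched as a short loop. This is a skeleton only: `train`, `back_translate`, and `passes_filter` are hypothetical stand-ins for the multi-NMT training step, the Z→X generation step, and the word-alignment filter of Section 4.2, and the toy instantiation exists purely to exercise the loop.

```python
# Skeleton of the Algorithm 1 loop (training-generating-filtering-training).
# `train`, `back_translate`, and `passes_filter` are hypothetical
# placeholders for the real multi-NMT components.

def iterative_zero_resource(xz_pairs, zy_pairs, train, back_translate,
                            passes_filter, iterations=3):
    data = list(xz_pairs) + list(zy_pairs)          # D = (X<->Z, Z<->Y)
    model = None
    for _ in range(iterations):                     # repeat ...
        model = train(data)                         # step 1: train multi-NMT on D
        synthetic = [(back_translate(model, z), y)  # step 2: X* <- Multi-NMT(Z)
                     for z, y in zy_pairs]
        filtered = [p for p in synthetic            # step 3: F <- Filter(S)
                    if passes_filter(p)]
        data = data + filtered                      # step 4: D <- D u F
    return model, data

# Toy instantiation just to show the control flow:
model, data = iterative_zero_resource(
    xz_pairs=[("x1", "z1")],
    zy_pairs=[("z1", "y1"), ("z2", "y2")],
    train=lambda d: {"size": len(d)},
    back_translate=lambda m, z: "x*" + z,
    passes_filter=lambda p: True,
    iterations=2,
)
print(model["size"])  # last trained on 3 original pairs + 2 synthetic = 5
```

In the real system the loop terminates on convergence of the multi-NMT model rather than after a fixed iteration count.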
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Step 1 (line 3): Train a multilingual NMT by using the training dataset D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Step 2 (lines 4, 5, 6): Generate (X* \u2192 Y) synthetic parallel data by using the trained multi-NMT model to translate sentences from the pivot language Z in (Z \u2194 Y) to language X. We can obtain more synthetic data (X \u2194 Y) by translating sentences from the pivot language Z in (X \u2194 Z) to language Y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Step 3 (line 8): Filter the synthetic data to eliminate bad parallel sentence pairs by using data selection techniques (See Section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Step 4 (line 9): Expand the multilingual training data by adding the filtered synthetic data F to the original training data D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "In our training-generating-filtering-training cycle, new synthetic X \u2194 Y data is generated at each iteration. We expect that by adding this synthetic data, the multi-NMT model not only improves the translation of zero-shot directions between X and Y but also boosts other directions, such as between X and Z or between Y and Z. Therefore, round after round, we can build a better multi-NMT system with the synthetic data: we use this better system to generate new synthetic data, then use this data together with the original training data to build an even better system. This cycle continues until the model converges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 3.1 Neural Machine Translation",
"sec_num": "3"
},
{
"text": "As mentioned above, if the amount of multilingual training data is too small, the multi-NMT system is unable to learn to translate between zero-shot directions. Hence, in our work, to augment the parallel data for (X \u2194 Z) and (Z \u2194 Y), we leverage monolingual data on both the source and target sides by using back-translation (Sennrich et al., 2016) and self-training (Zhang and Zong, 2016). Given parallel data (X \u2194 Z) and monolingual data M_X, M_Z in languages X and Z respectively, we denote by f the forward (from X to Z) and by g the backward (from Z to X) NMT system. Back-translation is a popular data augmentation method utilizing target-side monolingual data. To perform back-translation, given the parallel data (X \u2194 Z), a base backward NMT system g is trained and used to translate M_Z to language X, denoted by g(M_Z). The original parallel data (X \u2194 Z) is then concatenated with the back-translated data (g(M_Z) \u2194 M_Z) to obtain a new training dataset. Self-training augments the original training data by first training a base forward NMT system f on the (X \u2194 Z) data, then using this trained model to translate M_X to language Z, denoted by",
"cite_spans": [
{
"start": 322,
"end": 345,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
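The combination of back-translation and self-training described in this section can be sketched as follows. The `forward`/`backward` "systems" here are hypothetical toy placeholders for the trained NMT models f and g; only the way the three data sources are combined mirrors the text.

```python
# Sketch of the Section 4.1 augmentation: given (X<->Z) parallel data and
# monolingual M_X, M_Z, combine back-translated and self-trained synthetic
# pairs with the original data. The toy translate functions are invented.

def augment(xz_pairs, mono_x, mono_z, forward, backward):
    # Back-translation: (g(M_Z) <-> M_Z), from target-side monolingual data.
    bt = [(backward(z), z) for z in mono_z]
    # Self-training: (M_X <-> f(M_X)), from source-side monolingual data.
    st = [(x, forward(x)) for x in mono_x]
    # Augmented data = (X<->Z) u (g(M_Z)<->M_Z) u (M_X<->f(M_X))
    return list(xz_pairs) + bt + st

augmented = augment(
    xz_pairs=[("x1", "z1")],
    mono_x=["x2", "x3"],
    mono_z=["z2"],
    forward=lambda x: x.replace("x", "z~"),   # toy X->Z translation
    backward=lambda z: z.replace("z", "x~"),  # toy Z->X translation
)
print(len(augmented))  # 1 original + 1 back-translated + 2 self-trained = 4
```

The two methods are complementary exactly as the text notes: back-translation exploits target-side monolingual data, self-training source-side.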
{
"text": "f(M_X). The new synthetic data (M_X \u2194 f(M_X))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "is also combined with the original training data to obtain a new training dataset. In our work, we augment parallel data by using both of these methods because they are complementary to each other. The original training data is combined with the back-translated and self-trained data to obtain the augmented parallel data,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "(X \u2194 Z) \u222a (g(M_Z) \u2194 M_Z) \u222a (M_X \u2194 f(M_X)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4.1"
},
{
"text": "Combining synthetic data with the multilingual training data is a simple and effective way to boost the quality of zero-shot directions in zero-shot NMT and zero-resource NMT systems (Lakew et al., 2017; Currey and Heafield, 2019) . However, the synthetic data potentially contains many noisy translations, since it is often generated by back-translation or self-training. Therefore, in this section, we present our proposed method for filtering noisy sentence pairs from synthetic data based on sentence semantic similarity. As described in Section 4, a synthetic sentence pair (x_i, y_i) is generated by translating z_i in the (Z \u2194 Y) data to x_i. We consider (x_i, y_i) to be a good synthetic sentence pair if x_i is semantically similar to both y_i and z_i. A semantic score for each synthetic sentence x_i is computed as below:",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Lakew et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 203,
"end": 229,
"text": "Currey and Heafield, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Filtering",
"sec_num": "4.2"
},
{
"text": "score(x_i) = (sim(x_i, y_i) + sim(x_i, z_i)) / 2 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Filtering",
"sec_num": "4.2"
},
{
"text": "where sim(x_i, y_i) and sim(x_i, z_i) are the semantic similarities of the (x_i, y_i) and (x_i, z_i) sentence pairs, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Filtering",
"sec_num": "4.2"
},
{
"text": "To compute the semantic similarity of two sentences in different languages, (Xu et al., 2019) rely on cosine similarities of sentence embedding vectors in a common vector space such as bilingual word embeddings (Luong et al., 2015b) . Our method also first embeds words in different languages into a common vector space as in (Conneau et al., 2017) , then calculates the sentence similarity based on word alignment scores and the longest parallel phrase of the candidate sentence pair. To acquire word alignments of a sentence pair (x, y), we iterate over sentence x from left to right and greedily align each word in x to the most similar word in y that has not already been aligned. To measure the similarity of words, we use the cosine similarity of their word embeddings. Afterward, given a set of word alignments A, we can easily extract parallel phrases of (x, y) by using the phrase extraction algorithm from statistical machine translation (Koehn et al., 2003) . Finally, the semantic similarity score of the sentence pair (x, y) is computed by averaging the word alignment scores and weighting this average by the ratio of the length of the longest parallel phrase p to the length of sentence x, as follows:",
"cite_spans": [
{
"start": 76,
"end": 93,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 212,
"end": 233,
"text": "(Luong et al., 2015b)",
"ref_id": "BIBREF15"
},
{
"start": 332,
"end": 354,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 955,
"end": 975,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Filtering",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim(x, y) = \\frac{|p|}{|x|} \\times \\frac{\\sum_{a \\in A} score(a)}{|A|}",
"eq_num": "(10)"
}
],
"section": "Data Filtering",
"sec_num": "4.2"
},
{
"text": "where |p| and |x| are the lengths of the longest parallel phrase and of sentence x, respectively, |A| is the number of word alignments, a is a word alignment, and score(a) is the word alignment score, computed as the cosine similarity of the two words in alignment a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Filtering",
"sec_num": "4.2"
},
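The greedy alignment and the scoring of Eq. (10) can be sketched numerically. This is a simplified illustration under loud assumptions: the tiny cross-lingual embedding table is invented, and the longest parallel phrase is approximated here as the longest run of consecutively aligned source words rather than the paper's full SMT phrase extraction.

```python
import math

# Simplified sketch of the Section 4.2 filter: greedy word alignment by
# cosine similarity over (toy) cross-lingual word vectors, then Eq. (10)
# with the longest parallel phrase approximated as the longest run of
# consecutively aligned source words. The embedding table is invented.

EMB = {  # hypothetical shared cross-lingual embedding space
    "cat": [1.0, 0.0], "meo": [0.9, 0.1],
    "black": [0.0, 1.0], "den": [0.1, 0.9],
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def greedy_align(x_words, y_words):
    # Align each x word (left to right) to its most similar unused y word.
    used, alignments = set(), []
    for i, xw in enumerate(x_words):
        best = max((j for j in range(len(y_words)) if j not in used),
                   key=lambda j: cos(EMB.get(xw, [0, 0]),
                                     EMB.get(y_words[j], [0, 0])),
                   default=None)
        if best is not None:
            used.add(best)
            alignments.append((i, best, cos(EMB.get(xw, [0, 0]),
                                            EMB.get(y_words[best], [0, 0]))))
    return alignments

def sim(x_words, y_words):
    A = greedy_align(x_words, y_words)
    if not A:
        return 0.0
    # Longest run of consecutively aligned source positions (simplification
    # of the longest parallel phrase |p|).
    aligned_i = sorted(i for i, _, _ in A)
    longest = run = 1
    for a, b in zip(aligned_i, aligned_i[1:]):
        run = run + 1 if b == a + 1 else 1
        longest = max(longest, run)
    avg = sum(s for _, _, s in A) / len(A)   # mean alignment score
    return (longest / len(x_words)) * avg    # Eq. (10), simplified

score = sim(["cat", "black"], ["meo", "den"])
print(round(score, 3))
```

A pair whose words align well and form a long contiguous phrase scores near 1; pairs with scattered or weak alignments score low and can be filtered by a threshold.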
{
"text": "In this work, we evaluate our approach on zero-resource Khmer-Vietnamese (km-vi) and Indonesian-Vietnamese (id-vi) language pairs with English as the pivot language. The parallel datasets for Khmer-English (km-en) and Indonesian-English (id-en) are from the Asian Language Treebank (ALT) Parallel Corpus (Riza et al., 2016) and the English-Vietnamese data are from the UET dataset (Vu Huy et al., 2013) (see Table 1 for details). All test sets are from the ALT corpus, with 1,018 sentences each. In addition, we used monolingual data released on Wikipedia 1 for Vietnamese, English and Indonesian, and data from WMT2020 2 for Khmer. After de-duplication and removal of too-short (<5 tokens) or too-long (>100 tokens) sentences, we obtained approximately 11 million, 5 million, 2 million and 3 million unique sentences for English, Vietnamese, Khmer, and Indonesian respectively. Moreover, as mentioned in Section 4.1, before training the models, we augmented the multilingual training data by using back-translation and self-training. In order to choose the right ratio between real and synthetic parallel data, we experimented with different real-to-synthetic ratios. We found that a 1:4 real-to-synthetic ratio is the best for both the Khmer-English and Indonesian-English pairs, as shown in Table 2 . [Table 1 contents: Direction / Training (real) / Training (real+BT+ST): Khmer-English 18,088 / 162,792; English-Vietnamese 233,000 / -; Indonesian-English 18,088 / 162,792.] Table 2 : Experiment results on BLEU score to choose the right real:synthetic ratios for Khmer\u2192English (km\u2192en) and Indonesian\u2192English (id\u2192en) using back-translation (BT) and self-training (ST). Finally, we acquired the",
"cite_spans": [
{
"start": 304,
"end": 323,
"text": "(Riza et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 402,
"end": 409,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1286,
"end": 1293,
"text": "Table 2",
"ref_id": null
},
{
"start": 1321,
"end": 1447,
"text": "Direction Training real real+BT+ST Khmer-English 18,088 162,792 English-Vietnamese 233,000 -Indonesian-English 18,088",
"ref_id": "TABREF0"
},
{
"start": 1456,
"end": 1463,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "final augmented data by combining the original data with the back-translated and self-trained data, as shown in Table 1 . Note that, to prevent imbalance between language pairs in the multilingual training data, we did not augment the English-Vietnamese pair, since this pair is much larger than the other pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "To learn a shared vocabulary for training the multi-NMT model, we used SentencePiece (Kudo and Richardson, 2018) with a vocabulary size of 32,000 over the combined English, Vietnamese, Khmer, and Indonesian monolingual data. In addition, we added target-language tags at both the beginning and the end of the source sentences in the multilingual training data.",
"cite_spans": [
{
"start": 75,
"end": 102,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.2"
},
{
"text": "The multilingual word embedding model used in our filtering method was obtained with the unsupervised method in the MUSE library 3 . The word embeddings for English, Vietnamese, Khmer, and Indonesian were trained with the fastText toolkit 4 on the corresponding monolingual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.2"
},
{
"text": "All translation results reported in this work are BLEU scores (Papineni et al., 2002) measured with the multi-bleu.perl script 5 .",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "5.2"
},
{
"text": "All models in our experiments are based on the encoder-decoder architecture with attention (Luong et al., 2015a). We used OpenNMT-py 6 to run all experiments with the following configuration: the gradient descent optimizer with a learning rate of 1.0 that decays exponentially over the last 80% of training, a batch size of 64, a maximum sentence length of 100, a beam width of 10, label smoothing of 0.2, and dropout of 0.3 applied on top of various layers; all model variables are initialized uniformly in the range (-0.1, 0.1).",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "(Luong et al., 2015a)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "In this paper, we evaluate our proposed method on two direct (zero-shot) translation directions, Khmer \u2192 Vietnamese (km \u2192 vi) and Indonesian \u2192 Vietnamese (id \u2192 vi). Notably, the experimental settings for these two directions are identical, so in the following we only describe the experiments for the Khmer-Vietnamese language pair. First, we compare our models to three baselines as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 zero-shot NMT: This model is trained on the Khmer \u2194 English and English \u2194 Vietnamese parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 zero-resource NMT: This model is trained on synthetic Khmer \u2194 Vietnamese data, created by using the above zero-shot NMT model to translate the English sentences in the English \u2194 Vietnamese corpus into Khmer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 pivot language: This approach uses the above zero-shot NMT to translate Khmer sentences into English and then from English into Vietnamese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "Our proposed models are described below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 Iterative multi-NMT: This model is trained by iterating the training-generating-training scheme for several rounds. We use the above zero-shot NMT as the initial multi-NMT model for this training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 Iterative multi-NMT + Xu's data filtering: This model is trained by iterating the training-generating-filtering-training scheme for several rounds, as shown in Section 4. We also use the above zero-shot NMT as the initial multi-NMT model and use the method of (Xu et al., 2019) in the data filtering step.",
"cite_spans": [
{
"start": 255,
"end": 272,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "\u2022 Iterative multi-NMT + our data filtering: This model is trained with the training-generating-filtering-training process and our proposed data filtering method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "Note that, in the last two models, we use a similarity threshold of 0.4, which achieved the best result (see Table 4 for details), to filter out poor synthetic sentence pairs. Table 3 shows our results for the km \u2192 vi and id \u2192 vi zero-resource translation experiments. Experiments (1), (2), and (3) indicate the performance of the three baseline models. It can be seen that zero-shot NMT performs worst, while the other two models show promising results. The explanation for this result is that the amount of multilingual training data is insufficient to enable zero-shot translation in the multi-NMT system. Experiment (4) outperforms all three baseline models, since it benefits from both the zero-shot and zero-resource NMT systems. In addition, Experiments (5) and (6) show the effect of our training-generating-filtering-training process. By eliminating poor synthetic sentence pairs before re-training, the systems achieve better results. In particular, the results of Experiments (5) and (6) indicate that our proposed filtering method is more effective than the method of (Xu et al., 2019) at filtering noise in synthetic data. Table 4 shows the effect of different filtering thresholds on translation performance. All models are trained in the same way as the Iterative multi-NMT + our data filtering model; the only difference is the filtering threshold used to eliminate poor sentence pairs. Notably, a threshold of 0.0 means that all synthetic data is kept for re-training in the next iteration. The results show that a threshold of 0.4 achieves the best result, outperforming the baseline (threshold 0.0) by +1.64 and +1.69 BLEU for the km \u2192 vi and id \u2192 vi directions respectively. 
On the other hand, Table 5 shows that if we finetune our proposed model Iterative multi-NMT + our data filtering on a small amount of real parallel data, the model achieves a significant improvement of +9.26 and +4.76 BLEU over the baselines (models trained only on real parallel data). The real datasets for km \u2192 vi and id \u2192 vi are from the ALT corpus, with 18,088 sentence pairs each. These results show that our proposed model works well on both zero-resource and low-resource language pairs.",
"cite_spans": [
{
"start": 1085,
"end": 1102,
"text": "(Xu et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 170,
"end": 177,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1143,
"end": 1150,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1709,
"end": 1716,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "5.3"
},
{
"text": "In this paper, we have presented a training-generating-filtering-training cycle to build a model for translating zero-resource language pairs. In addition, we proposed a simple filtering method based on word alignments and the longest parallel phrase to filter out poor-quality sentence pairs from the synthetic data. Experiment results show that our proposed methods outperformed some strong baselines and achieved a promising result under zero-resource conditions for the Khmer\u2192Vietnamese and Indonesian\u2192Vietnamese directions. In particular, our proposed model can easily be improved with a small amount of real parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://linguatools.org/tools/corpora/wikipedia-monolingual-corpora/ 2 http://www.statmt.org/wmt20/parallel-corpus-filtering.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/MUSE 4 https://fasttext.cc/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/moses-smt/mosesdecoder 6 https://github.com/OpenNMT/OpenNMT-py",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The missing ingredient in zero-shot neural machine translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019. The missing ingredient in zero-shot neural machine translation, 03.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Y. Bengio. 2014. Neural machine translation by jointly learning to align and translate. ArXiv, 1409, 09.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the Properties of Neural Machine Translation: Encoder-Decoder Approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word Translation Without Parallel Data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word Translation Without Parallel Data. CoRR, abs/1710.04087.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Zero-resource neural machine translation with monolingual pivot data",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Currey and Kenneth Heafield. 2019. Zero-resource neural machine translation with monolingual pivot data. pages 99-107, 01.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Domainadversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Gani",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Gani, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. 2015. Domain-adversarial training of neural networks. 05.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improved zero-shot neural machine translation via ignoring spurious correlations",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. pages 1258-1268, 01.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Toward multilingual neural machine translation with universal encoder and decoder",
"authors": [
{
"first": "Thanh",
"middle": [
"Le"
],
"last": "Ha",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. 11.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput., 9(8):1735- 1780, November.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical Phrase-based Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL '03, pages 48-54, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving zero-shot translation of low-resource languages",
"authors": [
{
"first": "Quintino",
"middle": [],
"last": "Surafel Melaku Lakew",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Lotito",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surafel Melaku Lakew, Quintino Lotito, Matteo Negri, Marco Turchi, and Marcello Federico. 2017. Improving zero-shot translation of low-resource languages. 12.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Effective Approaches to",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015a. Effective Approaches to",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention-based Neural Machine Translation. CoRR",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attention-based Neural Machine Translation. CoRR, abs/1508.04025.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bilingual Word Representations with Monolingual Quality in Mind",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Bilingual Word Representations with Monolingual Quality in Mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159, Denver, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction of the asian language treebank",
"authors": [
{
"first": "H",
"middle": [],
"last": "Riza",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Purwoadi",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gunarso",
"suffix": ""
},
{
"first": "A",
"middle": [
"A"
],
"last": "Uliniansyah",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Ti",
"suffix": ""
},
{
"first": "L",
"middle": [
"C"
],
"last": "Aljunied",
"suffix": ""
},
{
"first": "V",
"middle": [
"T"
],
"last": "Mai",
"suffix": ""
},
{
"first": "N",
"middle": [
"P"
],
"last": "Thang",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Thai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Chea",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sam",
"suffix": ""
},
{
"first": "K",
"middle": [
"M"
],
"last": "Seng",
"suffix": ""
},
{
"first": "K",
"middle": [
"T"
],
"last": "Soe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nwet",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Riza, M. Purwoadi, Gunarso, T. Uliniansyah, A. A. Ti, S. M. Aljunied, L. C. Mai, V. T. Thang, N. P. Thai, V. Chea, R. Sun, S. Sam, S. Seng, K. M. Soe, K. T. Nwet, M. Utiyama, and C. Ding. 2016. Introduction of the asian language treebank. In 2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA), pages 1-6.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on",
"volume": "45",
"issue": "",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45:2673 -2681, 12.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. 08.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving Neural Machine Translation Models with Monolingual Data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bootstrapping Phrase-based Statistical Machine Translation via WSD Integration",
"authors": [
{
"first": "Hien",
"middle": [],
"last": "Vu Huy",
"suffix": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tung-Lam",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1042--1046",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hien Vu Huy, Phuong-Thai Nguyen, Tung-Lam Nguyen, and M.L Nguyen. 2013. Bootstrapping Phrase-based Statistical Machine Translation via WSD Integration. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1042-1046, Nagoya, Japan, October. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. 09.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving Neural Machine Translation by Filtering Synthetic Parallel Data",
"authors": [
{
"first": "Guanghao",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Youngjoong",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Jungyun",
"middle": [],
"last": "Seo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "21",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guanghao Xu, Youngjoong Ko, and Jungyun Seo. 2019. Improving Neural Machine Translation by Filtering Synthetic Parallel Data. Entropy, 21(12):1213, Dec. Jiajun Zhang and Chengqing Zong. 2016. Exploiting Source-side Monolingual Data in Neural Machine Translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>real:synthetic</td><td colspan=\"2\">km \u2192 en</td><td colspan=\"2\">id \u2192 en</td></tr><tr><td>ratio</td><td>BT</td><td>ST</td><td>BT</td><td>ST</td></tr><tr><td>1:0</td><td colspan=\"4\">14.19 13.7 21.57 20.52</td></tr><tr><td>1:1</td><td colspan=\"4\">15.32 15.72 22.26 21.18</td></tr><tr><td>1:2</td><td colspan=\"4\">16.87 17.58 24.06 22.10</td></tr><tr><td>1:3</td><td colspan=\"4\">17.25 18.21 24.37 21.99</td></tr><tr><td>1:4</td><td colspan=\"4\">18.3 18.62 24.79 22.64</td></tr><tr><td>1:5</td><td colspan=\"4\">18.1 17.93 24.02 21.60</td></tr><tr><td>1:6</td><td colspan=\"4\">17.54 17.01 23.70 21.42</td></tr></table>",
"html": null,
"num": null,
"text": "Number of sentences used for training. The real column shows the size of the original data, and the real+BT+ST column shows the size of the augmented data."
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>Threshold</td><td>km \u2192 vi</td><td>id \u2192 vi</td></tr><tr><td>0.0</td><td>15.23</td><td>17.24</td></tr><tr><td>0.1</td><td>15.81</td><td>17.75</td></tr><tr><td>0.2</td><td>16.02</td><td>17.96</td></tr><tr><td>0.3</td><td>16.25</td><td>18.29</td></tr><tr><td>0.4</td><td colspan=\"2\">16.87 (+1.64) 18.93 (+1.69)</td></tr><tr><td>0.5</td><td>16.62</td><td>18.58</td></tr><tr><td>0.6</td><td>16.37</td><td>18.34</td></tr></table>",
"html": null,
"num": null,
"text": "BLEU scores for our proposed models compared with strong baselines."
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "The effect of different filtering thresholds on the quality of the filtered synthetic data, in terms of BLEU score."
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Translation performance (BLEU) when finetuning our proposed model on a small amount of real parallel data."
}
}
}
}