{
"paper_id": "R19-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:03:23.435170Z"
},
"title": "Bilingual Low-Resource Neural Machine Translation with Round-Tripping: The Case of Persian-Spanish",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tulane University",
"location": {
"settlement": "New Orleans",
"region": "LA",
"country": "USA"
}
},
"email": "ahmadnia@tulane.edu"
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": "",
"affiliation": {},
"email": "bdorr@ihmc.us"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The quality of Neural Machine Translation (NMT), as a data-driven approach, massively depends on quantity, quality and relevance of the training dataset. Such approaches have achieved promising results for bilingually high-resource scenarios but are inadequate for low-resource conditions. This paper describes a roundtrip training approach to bilingual lowresource NMT that takes advantage of monolingual datasets to address training data scarcity, thus augmenting translation quality. We conduct detailed experiments on Persian-Spanish as a bilingually low-resource scenario. Experimental results demonstrate that this competitive approach outperforms the baselines.",
"pdf_parse": {
"paper_id": "R19-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "The quality of Neural Machine Translation (NMT), as a data-driven approach, massively depends on quantity, quality and relevance of the training dataset. Such approaches have achieved promising results for bilingually high-resource scenarios but are inadequate for low-resource conditions. This paper describes a roundtrip training approach to bilingual lowresource NMT that takes advantage of monolingual datasets to address training data scarcity, thus augmenting translation quality. We conduct detailed experiments on Persian-Spanish as a bilingually low-resource scenario. Experimental results demonstrate that this competitive approach outperforms the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) has made considerable progress in recent years. However, to achieve acceptable translation output, large sets of aligned parallel sentences are required for the training phase. Thus, as a data-driven paradigm, the quality of NMT output strongly depends on the quality as well as quantity of the provided training data (Bahdanau et al., 2015) . Unfortunately, in practice, collecting such parallel text by human labeling is very costly and time consuming (Ahmadnia and Serrano, 2017).",
"cite_spans": [
{
"start": 351,
"end": 374,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Low-resource languages are those that have fewer technologies and datasets relative to some measure of their international importance. The biggest issue with low-resource languages is the extreme difficulty of obtaining sufficient resources. Natural Language Processing (NLP) methods that have been created for analysis of low-resource languages are likely to encounter similar issues to those faced by documentary and descriptive linguists whose primary endeavor is the study of minority languages. Lessons learned from such studies are highly informative to NLP researchers who seek to overcome analogous challenges in the computational processing of these types of languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Assuming that large monolingual texts are available, an obvious next step is to leverage these texts to augment the NMT systems' performance. Various approaches have been developed for this purpose. In some approaches, target monolingual texts are employed to train a Language Model (LM) that is then integrated with MT models trained from parallel texts to enhance translation quality (Brants et al., 2007; G\u00fcl\u00e7ehre et al., 2015) . Although these approaches utilize monolingual text to train a LM, they do not address the shortage of bilingual training datasets.",
"cite_spans": [
{
"start": 386,
"end": 407,
"text": "(Brants et al., 2007;",
"ref_id": "BIBREF4"
},
{
"start": 408,
"end": 430,
"text": "G\u00fcl\u00e7ehre et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In other approaches, bilingual datasets are automatically generated from monolingual texts by utilizing the Translation Model (TM) trained on aligned bilingual text; the resulting sentence pairs are used to enlarge the initial training dataset for subsequent learning iterations (Ueffing et al., 2008; Sennrich et al., 2016) . Although these approaches enlarge the bilingual training dataset, there is no quality control and, thus, the accuracy of the generated bilingual dataset cannot be guaranteed (Ahmadnia et al., 2018) .",
"cite_spans": [
{
"start": 279,
"end": 301,
"text": "(Ueffing et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 302,
"end": 324,
"text": "Sennrich et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 501,
"end": 524,
"text": "(Ahmadnia et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To tackle the issues described above, we apply a new round-tripping approach that incorporates dual learning (He et al., 2016) for automatic learning from unlabeled data, but transcends this prior work through effective leveraging of monolingual text. Specifically, the round-tripping approach takes advantage of the bootstrapping methods including self-training and co-training. These methods start their mission from a small set of labelled examples, while also considering one or two weak translation models, and makes improvement through the incorporation of unlabeled data into the training dataset.",
"cite_spans": [
{
"start": 109,
"end": 126,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the round-tripping approach, the two translation tasks (forward and backward) together make a closed loop, i.e., one direction produces informative feedback for training the TM for the other direction, and vice versa. The feedback signalswhich consist of the language model likelihood of the output model and the reconstruction error of the original sentence-drive the process of iterative updates of the forward and backward TMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the purpose of evaluation, we apply this approach to a bilingually low-resource language pair (Persian-Spanish) to leverage monolingual data in a more effective way. By utilizing the roundtripping approach, the monolingual data play a similar role to the bilingual data, effectively reducing the requirement for parallel data. In particular, each model provides guidance to the other throughout the learning process. Our results show that round-tripping for NMT works well in the Persian-Spanish low-resource scenario. By learning from monolingual data, this approach achieves comparable accuracy to a NMT approach trained from the full bilingual data for the two translation tasks (forward and backward).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows; Section 2 presents the previous related work. In Section 3, we briefly review the relevant mathematical background of NMT paradigm. Section 4 describes the round-trip training approach. The experiments and results are presented in Section 5. Conclusions and future work are discussed in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The integration of monolingual data for NMT models was first proposed by (G\u00fcl\u00e7ehre et al., 2015), who train monolingual LMs independently, and then integrate them during decoding through rescoring of the beam (adding the recurrent hidden state of the LM to the decoder state of the encoder-decoder network). In this approach, an additional controller mechanism controls the magnitude of the LM signal. The controller parameters and output parameters are tuned on further parallel training data, but the LM parameters are fixed during the fine tuning stage. Jean et al. (2015) also report on experiments with reranking of NMT output with a 5-gram LM, but improvements are small. The production of synthetic parallel texts bears resemblance to data augmentation techniques, where datasets are often augmented with rotated, scaled, or otherwise distorted variants of the (limited) training set (Rowley et al., 1998) .",
"cite_spans": [
{
"start": 557,
"end": 575,
"text": "Jean et al. (2015)",
"ref_id": "BIBREF10"
},
{
"start": 891,
"end": 912,
"text": "(Rowley et al., 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A similar avenue of research is self-training (McClosky et al., 2006) . The self-training approach as a bootstrapping method typically refers to the scenario where the training dataset is enhanced with training instances with artificially produced output labels (whereas we start with neural network based output, i.e., the translation, and artificially produce an input). We expect that this is more robust towards noise in MT.",
"cite_spans": [
{
"start": 46,
"end": 69,
"text": "(McClosky et al., 2006)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hoang et al. 2018showed that the quality of back translation matters and proposed an iterative back translation, in which back translated data are used to build better translation systems in forward and backward directions. These, in turn, are used to reback translate monolingual data. This process is iterated several times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Improving NMT with monolingual source data, following similar work on phrase-based SMT (Schwenk, 2008) , remains possible future work. Domain adaptation of neural networks via continued training has been shown to be effective for neural language models by (Ter-Sarkisov et al., 2015) .",
"cite_spans": [
{
"start": 87,
"end": 102,
"text": "(Schwenk, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 256,
"end": 283,
"text": "(Ter-Sarkisov et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Round-tripping has already been utilized in SMT by (Ahmadnia et al., 2019) . In this work, forward and backward models produce informative feedback to iteratively update the TMs during the training of the system.",
"cite_spans": [
{
"start": 51,
"end": 74,
"text": "(Ahmadnia et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "NMT consists of an encoder and a decoder. Following (Bahdanau et al., 2015) , we adopt an attention-based encoder-decoder model, i.e., one that selectively focuses on sub-parts of the sentence during translation. Consider a source sentence X = {x 1 , x 2 , ..., x J } and a target sentence Y = {y 1 , y 2 , ..., y I }. The problem of translation from the source language to the target is solved by finding the best target language sentence\u0177 that maximizes the conditional probability:",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = arg max y P (y|x)",
"eq_num": "(1)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "The conditional word probabilities given the source language sentence and preceding target language words compose the conditional probability 20 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "P (y|x) = I i=1 P (y i |y <i , x) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
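Equation (2) factorizes a sentence probability into per-word conditionals, so the log-probability of a translation is a sum of log conditional word probabilities. A minimal sketch of that scoring, with a hypothetical `cond_prob` standing in for the decoder's softmax (not the paper's actual model):

```python
import math

def sequence_log_prob(cond_prob, target, source):
    """Score a target sentence as sum_i log P(y_i | y_<i, x),
    mirroring the factorization of Equation (2)."""
    total = 0.0
    for i, word in enumerate(target):
        prefix = target[:i]  # y_<i, the already-emitted words
        total += math.log(cond_prob(word, prefix, source))
    return total

# Toy conditional model: uniform over a tiny vocabulary, an illustrative
# stand-in for a trained decoder.
VOCAB = ["hola", "mundo", "</s>"]

def uniform_cond_prob(word, prefix, source):
    return 1.0 / len(VOCAB)

lp = sequence_log_prob(uniform_cond_prob, ["hola", "mundo", "</s>"],
                       ["salam", "donya"])
```

Decoding then searches for the target sequence maximizing this score, as in Equation (1).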
{
"text": "where y i is the target word emitted by the decoder at step i and y < i = (y 1 , y 2 , ..., y i\u22121 ). To compose the model, both the encoder and decoder are implemented employing Recurrent neural Networks (RNNs) (Rumelhart et al., 1986) , i.e., the encoder converts source words into a sequence of vectors, and the decoder generates target words one-by-one based on the conditional probability shown in the Equation (2). More specifically, the encoder takes a sequence of source words as inputs and returns forward hidden vectors",
"cite_spans": [
{
"start": 211,
"end": 235,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h j (1 \u2264 j \u2264 J) of the forward-RNN: \u2212 \u2192 h j = f ( \u2212\u2212\u2192 h j\u22121 , x)",
"eq_num": "(3)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Similarly, we obtain backward hidden vectors \u2190 \u2212 h j (1 \u2264 j \u2264 J) of the backward-RNN, in the reverse order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2190 \u2212 h j = f ( \u2190\u2212\u2212 h j\u22121 , x)",
"eq_num": "(4)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "These forward and backward vectors are concatenated to make source vectors h j (1 \u2264 j \u2264 J) based on Equation 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h j = \u2212 \u2192 h j ; \u2190 \u2212 h j",
"eq_num": "(5)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "The decoder takes source vectors as input and returns target words. It starts with the initial hidden vector h J (concatenated source vector at the end), and generates target words in a recurrent manner using its hidden state and an output context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "The conditional output probability of a target language word y i is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "P (y i |y <i , x) = softmax (f (d i , y i\u22121 , c i )) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "where f is a non-linear function and d i is the hidden state of the decoder at step i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d i = g(d i\u22121 , y i\u22121 , c i )",
"eq_num": "(7)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "where g is a non-linear function taking its previous state vector with the previous output word as inputs to update its state vector. c i is a context vector to retrieve source inputs in the form of a weighted sum of the source vectors h j , first taking as input the hidden state d i at the top layer of a stacking LSTM (Hochreiter and Schmidhuber, 1997) . The goal is to derive a context vector c i that captures relevant source information to help predict the current target word y i . While these models differ in how the context vector c i is derived, they share the same subsequent steps. c i is calculated as follows:",
"cite_spans": [
{
"start": 321,
"end": 355,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "c i = J j=1 \u03b1 t,j h j (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "where h j is the annotation of source word x j and \u03b1 t,j is a weight for the j th source vector at time step t to generate y i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 t,j = exp (score (d i , h j )) J j =1 exp (score (d i , h j ))",
"eq_num": "(9)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
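Equations (8)-(9) amount to a softmax over alignment scores followed by a weighted sum of the source annotations. A small illustrative sketch using plain Python lists; the `scores` values are assumed given rather than computed from a real decoder state:

```python
import math

def attention_weights(scores):
    """Softmax over alignment scores score(d_i, h_j), as in Equation (9)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    z = sum(exps)
    return [e / z for e in exps]

def context_vector(weights, source_vectors):
    """Weighted sum of source annotations h_j, as in Equation (8)."""
    dim = len(source_vectors[0])
    return [sum(w * h[d] for w, h in zip(weights, source_vectors))
            for d in range(dim)]

scores = [2.0, 1.0, 0.1]                 # hypothetical score(d_i, h_j), J = 3
alphas = attention_weights(scores)       # alpha_{t,j}, sums to 1
c = context_vector(alphas, [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # c_i
```

Higher-scoring source positions receive proportionally more of the context mass, which is what lets the decoder "focus" on sub-parts of the source sentence.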
{
"text": "The score function above may be defined in a variety of ways as discussed by Luong et al. (2015) .",
"cite_spans": [
{
"start": 77,
"end": 96,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "In this paper, we denote all the parameters to be optimized in the neural network as \u0398 and denote C as the dataset that contains source-target sentence pairs for the training phase. Hence, the learning objective is to seek the optimal parameters \u0398 * :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0398 * = arg max \u0398 (x,y)\u2208C I (i=1) log P (y t |y <t , x; \u0398)",
"eq_num": "(10)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "4 Method Description",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Round-tripping involves two related translation tasks: the outbound-trip (source-target direction) and the inbound-trip (target-source direction). The defining traits of these forward and backward tasks are that they form a closed loop and both produce informative feedback that enables simultaneous training of the TMs. We assume availability of: (1) monolingual datasets (C X and C Y ) for the source and target languages; and (2) two weak TMs (emergent from training on initial small bilingual corpora) that bidirectionally translate sentences from source and target languages. The goal of the round-tripping approach is to augment the accuracy of the two TMs by employing the two monolingual datasets instead of a bilingual text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "We start by translating a sample sentence in one of the monolingual datasets, as the outbound-trip (forward) translation to the target language. This step generates more bilingual sentence pairs between the source and target languages. We then translate the resulting sentence pairs backward through the inbound-trip translation to the original language. This step finds high-quality sentences throughout the entirety of the generated sentence pairs. Evaluating the results of this round-tripping approach will provide an indication of the quality of the two TMs, and will enable their enhancement, accordingly. This process is iterated for several rounds until both TMs converge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "We define K X as the number of sentences in C X and K Y as the number of sentences in C Y . We take P (.|S; \u0398 XY ) and P (.|S; \u0398 Y X ) to be two neural TMs in which \u0398 XY and \u0398 Y X are supposed as their parameters. We also assume the existence of two LMs for languages X and Y , trained in advance either by using other resources or using the monolingual data (C X and C Y ). Each LM takes a sentence as input and produces a real number, based on target-language fluency (LM correctness) together translation accuracy (TM correctness). This number represents the confidence of the translation quality of the sentence in its own language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "We start with a sentence in C X and denote S sample as a translation output sample. This step has a score as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R 1 = LM Y (S sample )",
"eq_num": "(11)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "The R 1 score indicates the well-formedness of the output sentence in language Y . Given the translation output S sample , we employ the log probability value of s recovered from the S sample as the score of the construction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R 2 = log P (S|S sample ; \u0398 Y X )",
"eq_num": "(12)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "We then adopt the LM score and construction score as the total reward score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R total = \u03b1R 1 + (1 \u2212 \u03b1)R 2",
"eq_num": "(13)"
}
],
"section": "Neural Machine Translation",
"sec_num": "3"
},
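Equation (13) is a simple convex combination of the two feedback signals. A one-function sketch, assuming both scores are log-domain values (so typically negative):

```python
def total_reward(lm_score, reconstruction_score, alpha):
    """R_total = alpha * R1 + (1 - alpha) * R2, as in Equation (13).

    lm_score: R1, language-model log-likelihood of the sampled translation.
    reconstruction_score: R2, log P(S | S_sample) under the backward model.
    alpha: trade-off hyper-parameter in [0, 1].
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * lm_score + (1.0 - alpha) * reconstruction_score
```

With alpha near 1 the reward favors fluent target-side output; with alpha near 0 it favors translations the backward model can reconstruct faithfully.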
{
"text": "where \u03b1 is an input hyper-parameter. The total reward score is considered a function of S, S sample , \u0398 XY and \u0398 Y X . To maximize this score, we optimize the parameters in the TMs utilizing Stochastic Gradient Descent (SGD) (Sutton et al., 2000) . According to the forward TM, we sample the s sample and then compute the gradient of the expected score (E[R total ]), where E is taken from S sample :",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "(Sutton et al., 2000)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "\u2207 \u0398 XY E[R total ] = E[R total \u2207 \u0398 XY log P (S sample |S; \u0398 XY )] (14) \u2207 \u0398 Y X E[R total ] = E[(1 \u2212 \u03b1)\u2207 \u0398 Y X log P (S|S sample ; \u0398 Y X )] (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
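Equations (14)-(15) are REINFORCE-style estimators: in practice the expectation is replaced by an average over N sampled translations, as in lines 11-12 of Algorithm 1. A toy sketch with gradients represented as plain lists (a real system would use the framework's autograd tensors):

```python
def estimate_gradient(rewards, grad_log_probs):
    """Monte-Carlo estimate of Equation (14):
    (1/N) * sum_n R_n * grad log P(S_sample,n | S; Theta_XY).

    rewards: list of N scalar rewards R_n.
    grad_log_probs: list of N gradient vectors of equal length.
    """
    n = len(rewards)
    dim = len(grad_log_probs[0])
    return [sum(r * g[d] for r, g in zip(rewards, grad_log_probs)) / n
            for d in range(dim)]

# Two samples with rewards 1.0 and 3.0 and one-hot "gradients".
grad = estimate_gradient([1.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])
```

The backward-model estimate of Equation (15) has the same shape, with `rewards` replaced by the constant weight (1 - alpha).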
{
"text": "Algorithm 1 shows the round-tripping procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Algorithm 1 Round-trip training for NMT Input: Monolingual dataset in source and target languages (C X and C Y ), initial translation models in outbound and inbound trips (\u0398 XY and \u0398 Y X ), language models in source and target languages (LM X and LM Y ), trade-off parameter between 0 and 1 (\u03b1), beam search size (N ), learning rates (\u03b3 1,t and \u03b3 2,t ). 1: repeat: 2: t = t + 1. 3: Sample sentences S X and S Y from C X and C Y respectively. 4: // Update model starting from language X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Set S = S X . 5: // Generate top-N translations using \u0398 XY .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Generate sentences S sample,1 , ..., S sample,N . 6: for n = 1, ..., N do 7: // Set LM score for n th sampled sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "R 1,n = LM Y (S sample,n ). 8: // Set TM score for n th sampled sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "R 2,n = logP (S|S sample,n ; \u0398 Y X ). 9: // Set total score of n th sampled sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "R n = \u03b1R 1,n + (1 \u2212 \u03b1)R 2,n . 10: end for 11: // SDG computing for \u0398 XY .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "\u2207 \u0398 XY\u00ca [R total ] = 1 N N n=1 [R n \u2207 \u0398 XY log P (S sample,n |S; \u0398 XY )]. 12: // SDG computing for \u0398 Y X . \u2207 \u0398 Y X\u00ca [R total ] = 1 N N n=1 [(1 \u2212 \u03b1)\u2207 \u0398 Y X log P (S|S sample,n ; \u0398 Y X )]. 13: // Model update. \u0398 XY \u2190 \u0398 XY + \u03b3 1,t \u2207 \u0398 XY\u00ca [R total ]. 14: // Model update. \u0398 Y X \u2190 \u0398 Y X + \u03b3 2,t \u2207 \u0398 Y X\u00ca [R total ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": ". 15: // Update model starting from language Y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "Set S = S Y . 16: Go through lines 5 \u2212 14 symmetrically. 17: until convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "To achieve reasonable translations we use beam search. We generate N-best sample translations and use the averaged value on the beam search results to estimate the true gradient value. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
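Under hypothetical model interfaces (none of these functions come from the paper's code), the sampling-and-scoring half of Algorithm 1, starting from language X, can be sketched as:

```python
def round_trip_half_step(sentence, translate_nbest, lm_score, recon_log_prob,
                         alpha=0.5, n=4):
    """One half-iteration of Algorithm 1 (lines 4-10), starting from language X.

    Assumed interfaces (illustrative stand-ins, not the paper's API):
      translate_nbest(s, n) -> N candidate translations (beam search, Theta_XY)
      lm_score(t)           -> LM_Y(t), target-side LM score (Eq. 11)
      recon_log_prob(s, t)  -> log P(s | t; Theta_YX), reconstruction score (Eq. 12)

    Returns the sampled translations and their total rewards (Eq. 13); a real
    implementation would use these to form the SGD updates of lines 11-14.
    """
    samples = translate_nbest(sentence, n)
    rewards = [alpha * lm_score(t) + (1.0 - alpha) * recon_log_prob(sentence, t)
               for t in samples]
    return samples, rewards

# Dummy stand-ins to exercise the control flow.
samples, rewards = round_trip_half_step(
    "in jomle ast",
    translate_nbest=lambda s, n: ["cand-%d" % i for i in range(n)],
    lm_score=lambda t: -1.0,
    recon_log_prob=lambda s, t: -2.0,
)
```

The symmetric half-step starting from language Y swaps the roles of the two models, and the outer loop repeats both until convergence.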
{
"text": "We apply the round-trip training approach to bilingual Persian-Spanish translation, and evaluate the results. We used the Persian-Spanish small bilingual corpora from the Tanzil corpus (Tiedemann, 2012) , 2 which contains about 50K parallel sentence pairs. We also used 5K and 10K parallel sentences extracted from the OpenSubtitles2018 collection (Tiedemann, 2012) , 3 as the validation and test datasets, respectively. Finally, we utilized 70K parallel sentences from the KDE4 corpus (Tiedemann, 2012), 4 as the monolingual data.",
"cite_spans": [
{
"start": 185,
"end": 202,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 348,
"end": 365,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 368,
"end": 369,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "We implemented the DyNet-based model architecture (Mi et al., 2016) on top of Mantis (Cohn et al., 2016) which is an implementation of the attention sequence-to-sequence (Seq-to-Seq) NMT. For each language, we constructed a vocabulary with the most common 50K words in the parallel corpora, and OOV words were replace with a special token <UNK>. For monolingual corpora, sentences containing at least one OOV word were removed. Additionally, sentences with more than 80 words were removed from the training set. 5 The encoders and decoders make use of Long Short-Term Memory (LSTM) with 500 embedding dimensions, 500 hidden dimensions. For training, we used the SGD algorithm as the optimizer. The batch size was set as 64 with 20 batches pre-fetched and sorted by sentence lengths.",
"cite_spans": [
{
"start": 50,
"end": 67,
"text": "(Mi et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 85,
"end": 104,
"text": "(Cohn et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 512,
"end": 513,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
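The vocabulary construction and filtering steps described above can be sketched as follows (function names are our own, not from the paper's implementation):

```python
from collections import Counter

UNK = "<UNK>"

def build_vocab(corpus, size=50000):
    """Keep the `size` most frequent word types (the paper uses 50K)."""
    counts = Counter(word for sentence in corpus for word in sentence)
    return {word for word, _ in counts.most_common(size)}

def preprocess(corpus, vocab, max_len=80):
    """Drop sentences longer than max_len words; map OOV words to <UNK>."""
    kept = []
    for sentence in corpus:
        if len(sentence) > max_len:
            continue  # removed from the training set, as in the paper
        kept.append([w if w in vocab else UNK for w in sentence])
    return kept

# Tiny illustrative corpus: tokenized sentences as lists of words.
corpus = [["a", "b", "a"], ["a", "c"], ["x"] * 81]
vocab = build_vocab(corpus, size=2)
clean = preprocess(corpus, vocab)
```

Note the paper removes monolingual sentences containing any OOV word outright; the `<UNK>` replacement above applies to the parallel training data.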
{
"text": "Below we compare the system based on the optimized round-trip training (round-tripping) through two translation systems; the first one is the standard NMT system (baseline), and the second one is the system that generates pseudo bilingual sentence pairs from monolingual corpora to assist the training step (self-training). For the pseudo-NMT we used the trained NMT model to generate pseudo bilingual sentence pairs from monolingual text, removed sentences with more than 80 words (as above), merged the generated data with the original parallel training data, and then trained the model for testing. Each of the translation systems was trained on a single GPU until their performances stopped improving on the validation set. This approach required an LM for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "We trained an RNN-based LM (Mikolov et al., 2010) for each language using its corresponding monolingual corpus. The LM was then fixed and the log-likelihood of a received message was utilized for scoring the TM.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "To start the round-trip training approach, the systems are initialized using warm-start TMs trained from initial small bilingual data. The goal is to see whether the round-tripping augments the baseline accuracy. We retrain the baseline systems by enlarging the initial small bilingual corpus: we add the optimized generated bilingual sentences to the initial parallel text. The new enlarged translation system contains both the initial and optimized generated bilingual text. For each translation task, we train the round-trip training approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "We employ Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2001 ) (using multibleu.perl script from Moses) as the evaluation metric. BLEU is calculated for individual translated segments by comparing them with a data set of reference translations. The scores of each segment, ranging between 0 and 100, are averaged over the entire evaluation dataset to yield an estimate of the overall translation quality (higher is better).",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
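The metric can be illustrated with a simplified sentence-level variant (clipped n-gram precisions with uniform weights plus a brevity penalty); this is a sketch only, and differs from the corpus-level, unsmoothed multi-bleu.perl score actually used for evaluation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU in [0, 100]: geometric mean of clipped
    n-gram precisions times a brevity penalty. Illustrative only."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
        precisions.append(overlap / max(sum(hyp.values()), 1))
    if min(precisions) == 0:
        return 0.0  # real BLEU applies smoothing instead
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return 100.0 * bp * math.exp(log_avg)
```

The brevity penalty discounts hypotheses shorter than the reference, so a system cannot score well by emitting only a few high-confidence words.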
{
"text": "The baseline systems for Persian-Spanish are first trained, while our round-trip method conducts joint training. We summarize the overall performances in Table 1 : BLEU scores for Persian-Spanish translation task and vice-versa.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "As seen in Table 1 , the round-tripping systems outperform the others in both translation directions. In Persian to Spanish translation, the roundtripping system outperforms the baseline by about 3.87 BLEU points and also outperforms the selftraining system by about 6.07 BLEU points. In the back translation from Spanish to Persian, the improvement of the round-tripping outperforms both the baseline and self-training by about 3.79 and 5.62 BLEU points, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "These results demonstrate the effectiveness of the round-trip training approach. The baseline systems outperform the self-training ones in all cases because of the noise in the generated bilingual sentences used by self-training. Upon further examination, this result might have been expected given that the aim of round-trip training is to optimize the generated bilingual sentences by selecting the high-quality sentences to get better performance over self-training systems. When the size of bilingual corpus is small, the round-tripping makes a larger improvement. This outcome is an indication that round-trip training approach makes effective use of monolingual data. Table 2 shows the performance of the baseline alongside of the enlarged translation systems, where the latter leverages the training text of the baseline and the round-tripping systems as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 674,
"end": 681,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "NMT systems Pe-Es Es-Pe baseline 31.12 29.56 enlarged 34.21 33.03 Table 2 : BLEU scores comparing the baseline and enlarged NMT systems for Persian-Spanish translation task and vice-versa.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "As seen in Table 2 , the BLEU scores of the enlarged NMT systems are better than the baseline ones in both translation directions. The enlarged system in the Persian-Spanish direction outperforms the baseline by about 3.47 BLEU points, and outperforms the baseline by about 3.09 BLEU points in the back translation. The improvements show that the optimized round-trip training system is promising for tackling the training data scarcity and it also helps to enhance translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "In this paper, we applied a round-tripping approach based on a retraining scenario to tackle training data scarcity in NMT systems. An exciting finding of this work is that it is possible to learn translations directly from monolingual data of the two languages. We employed a low-resource language pair and verified the hypothesis that, regardless of the amount of training resources, this approach outperforms the baseline. The results demonstrate that round-trip training is promising and better utilizes the monolingual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Many Artificial Intelligence (AI) tasks are naturally in dual form. Some examples are: (1) speech recognition paired with text-to-speech; (2) image captioning paired with image generation; and (3) question answering paired with question gener-ation. Thus, a possible future direction would be to design and test the round-tripping approach for more tasks beyond MT. We note that roundtripping is not restricted to two tasks only. Indeed, the key idea is to form a closed loop so feedback signals are extracted by comparing the original input data with the final output data. Therefore, if more than two associated tasks form a closed loop, this approach can applied in each task for improvement of the overall model, even in the face of unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
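The closed-loop idea above can be sketched generically. The helper below is our own illustrative sketch, not the paper's exact training procedure: `forward`, `backward`, and `lm_score` are hypothetical callables supplied by the caller, and the token-overlap reward stands in for the paper's reconstruction-based feedback signal.

```python
def round_trip_filter(mono_sentences, forward, backward, lm_score, threshold=0.5):
    """Closed-loop selection sketch: translate each monolingual sentence
    forward and back, and keep the generated (source, hypothesis) pair only
    when the reconstruction agrees well enough with the original input."""
    selected = []
    for src in mono_sentences:
        hyp = forward(src)      # source -> target
        back = backward(hyp)    # target -> source (reconstruction)
        # crude feedback signal: token overlap between input and reconstruction
        overlap = len(set(src) & set(back)) / max(len(set(src)), 1)
        if overlap >= threshold and lm_score(hyp) > float("-inf"):
            selected.append((src, hyp))
    return selected
```

With perfect (identity) models every monolingual sentence survives the loop; noisy models yield lower overlap and their outputs are filtered out, which is how the loop favors high-quality generated pairs.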
{
"text": "We used beam sizes 500 and 1000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opus.nlpl.eu/Tanzil.php 3 http://opus.nlpl.eu/OpenSubtitles-v2018.php 4 http://opus.nlpl.eu/KDE4-v2.php 5 The average sentence length is 47; an upper bound of 80 ensured exclusion of non-sentential and other spurious material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to express our sincere gratitude to Dr. Michael W. Mislove (Tulane University, USA), Dr. Javier Serrano (Universitat Aut\u00f2noma de Barcelona, Spain) and Dr. Gholamreza Haffari (Monash University, Australia) for all their support. We also would like to acknowledge the financial support received from the School of Science and Engineering, Tulane University (USA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical machine translation for bilingually low-resource scenarios: A roundtripping approach",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Serrano",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 3rd IEEE International Conference on Machine Learning and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "261--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia, Gholamreza Haffari, and Javier Serrano. 2018. Statistical machine translation for bilingually low-resource scenarios: A round- tripping approach. In Proceedings of the 3rd IEEE International Conference on Machine Learning and Natural Language Processing. pages 261-265.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Round-trip training approach for bilingually low-resource statistical machine translation systems",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Serrano",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Artificial Intelligence",
"volume": "17",
"issue": "1",
"pages": "167--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia, Gholamreza Haffari, and Javier Serrano. 2019. Round-trip training approach for bilingually low-resource statistical machine transla- tion systems. International Journal of Artificial In- telligence 17(1):167-185.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Employing pivot language technique through statistical and neural machine translation frameworks: The case of under-resourced persian-spanish language pair",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Serrano",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal on Natural Language Computing",
"volume": "6",
"issue": "5",
"pages": "37--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia and Javier Serrano. 2017. Em- ploying pivot language technique through statisti- cal and neural machine translation frameworks: The case of under-resourced persian-spanish language pair. International Journal on Natural Language Computing 6(5):37-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"C"
],
"last": "Popat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "858--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language mod- els in machine translation. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. pages 858-867.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incorporating structural alignment biases into an attentional neural translation model",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Cong Duy Vu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vymolova",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Cong Duy Vu Huang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment bi- ases into an attentional neural translation model. In Proceedings of the Conference of the North Amer- ican Chapter of the Association for Computational Linguistics Human Language Technologies. pages 876-885.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On using monolingual corpora in neural machine translation",
"authors": [
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Huei-Chi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 aglar G\u00fcl\u00e7ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo\u00efc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. CoRR abs/1503.03535.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Proceedings of the 30th Conference on Neural Information Processing Systems.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Vu Cong Duy",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation. pages 18-24.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On using very large target vocabulary for neural machine translation",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.2007"
]
},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007 .",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Addressing the rare word problem in neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing. pages 11-19.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective self-training for parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "152--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark John- son. 2006. Effective self-training for parsing. In Proceedings of the Main Conference on Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics. pages 152-159.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Supervised attentions for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2283--2288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of the International Conference on Empirical Methods in Natural Language Process- ing. pages 2283-2288.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafiat",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernocky",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of INTERSPEECH",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceed- ings of INTERSPEECH. pages 1045-1048.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics. pages 311-318.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural network-based face detection",
"authors": [
{
"first": "Henry",
"middle": [
"A"
],
"last": "Rowley",
"suffix": ""
},
{
"first": "Shumeet",
"middle": [],
"last": "Baluja",
"suffix": ""
},
{
"first": "Takeo",
"middle": [],
"last": "Kanade",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "20",
"issue": "1",
"pages": "23--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry A. Rowley, Shumeet Baluja, and Takeo Kanade. 1998. Neural network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 20(1):23-38.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning representations by back- propagating errors. Nature 323:533-536.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Investigations on large-scale lightly-supervised training for statistical machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IWSLT",
"volume": "",
"issue": "",
"pages": "182--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk. 2008. Investigations on large-scale lightly-supervised training for statistical machine translation. In Proceedings of IWSLT. pages 182- 189.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Policy gradient methods for reinforcement learning with function approximation",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Mcallester",
"suffix": ""
},
{
"first": "Satinder",
"middle": [
"P"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Yishay",
"middle": [],
"last": "Mansour",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Neural Information Processing Systems",
"volume": "12",
"issue": "",
"pages": "1057--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S. Sutton, David A. Mcallester, Satinder P. Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems. volume 12, pages 1057-1063.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Incremental adaptation strategies for neural network language models",
"authors": [
{
"first": "Aram",
"middle": [],
"last": "Ter-Sarkisov",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "48--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aram Ter-Sarkisov, Holger Schwenk, Lo\u00efc Barrault, and Fethi Bougares. 2015. Incremental adaptation strategies for neural network language models. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. pages 48-56.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f8rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f8rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the 8th Interna- tional Conference on Language Resources and Eval- uation.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On using monolingual corpora in statistical machine translation",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Ueffing",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Ueffing, Gholamreza Haffari, and Anoop Sarkar. 2008. On using monolingual corpora in sta- tistical machine translation. Journal of Machine Translation .",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">NMT systems Pe-Es Es-Pe</td></tr><tr><td>baseline</td><td>31.12 29.56</td></tr><tr><td>self-train</td><td>29.29 27.36</td></tr><tr><td>round-trip</td><td>34.91 33.43</td></tr></table>",
"html": null,
"text": ""
}
}
}
}