{
"paper_id": "N19-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:59:43.514405Z"
},
"title": "Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "xingniu@cs.umd.edu"
},
{
"first": "Weijia",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "weijia@cs.umd.edu"
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": "marine@cs.umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). This loss compares original inputs to reconstructed inputs, obtained by backtranslating translation hypotheses into the input language. We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.",
"pdf_parse": {
"paper_id": "N19-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). This loss compares original inputs to reconstructed inputs, obtained by backtranslating translation hypotheses into the input language. We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) performance degrades sharply when parallel training data is limited (Koehn and Knowles, 2017) . Past work has addressed this problem by leveraging monolingual data (Sennrich et al., 2016a; Ramachandran et al., 2017) or multilingual parallel data (Zoph et al., 2016; Johnson et al., 2017; Gu et al., 2018a) . We hypothesize that the traditional training can be complemented by better leveraging limited training data. To this end, we propose a new training objective for this model by augmenting the standard translation cross-entropy loss with a differentiable input reconstruction loss to further exploit the source side of parallel samples.",
"cite_spans": [
{
"start": 101,
"end": 126,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF19"
},
{
"start": 197,
"end": 221,
"text": "(Sennrich et al., 2016a;",
"ref_id": "BIBREF30"
},
{
"start": 222,
"end": 248,
"text": "Ramachandran et al., 2017)",
"ref_id": "BIBREF29"
},
{
"start": 279,
"end": 298,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 299,
"end": 320,
"text": "Johnson et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 321,
"end": 338,
"text": "Gu et al., 2018a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Input reconstruction is motivated by the idea of round-trip translation. Suppose sentence f is translated forward to e using model \u03b8 f e and then translated back tof using model \u03b8 ef , then e is more likely to be a good translation if the distance betweenf and f is small (Brislin, 1970) . Prior work applied round-trip translation to monolingual examples and sampled the intermediate translation e from a K-best list generated by model \u03b8 f e using beam search (Cheng et al., 2016; . However, beam search is not differentiable which prevents back-propagating reconstruction errors to \u03b8 f e . As a result, reinforcement learning algorithms, or independent updates to \u03b8 f e and \u03b8 ef were required.",
"cite_spans": [
{
"start": 272,
"end": 287,
"text": "(Brislin, 1970)",
"ref_id": "BIBREF5"
},
{
"start": 461,
"end": 481,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the problem of making input reconstruction differentiable to simplify training. In past work, Tu et al. (2017) addressed this issue by reconstructing source sentences from the decoder's hidden states. However, this reconstruction task can be artificially easy if hidden states over-memorize the input. This approach also requires a separate auxiliary reconstructor, which introduces additional parameters.",
"cite_spans": [
{
"start": 121,
"end": 137,
"text": "Tu et al. (2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose instead to combine benefits from differentiable sampling and bi-directional NMT to obtain a compact model that can be trained endto-end with back-propagation. Specifically,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Translations are sampled using the Straight-Through Gumbel Softmax (STGS) estimator (Jang et al., 2017; Bengio et al., 2013) , which allows back-propagating reconstruction errors.",
"cite_spans": [
{
"start": 86,
"end": 105,
"text": "(Jang et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 106,
"end": 126,
"text": "Bengio et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our approach builds on the bi-directional NMT model (Niu et al., 2018; Johnson et al., 2017) , which improves low-resource translation by jointly modeling translation in both directions (e.g., Swahili \u2194 English). A single bi-directional model is used as a translator and a reconstructor (i.e. \u03b8 ef = \u03b8 f e ) without introducing more parameters.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Niu et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 73,
"end": 94,
"text": "Johnson et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments show that our approach outperforms reconstruction from hidden states. It achieves consistent improvements across various low-resource language pairs and directions, showing its effectiveness in making better use of limited parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using round-trip translations (f \u2192 e \u2192f ) as a training signal for NMT usually requires auxiliary models to perform back-translation and cannot be trained end-to-end without reinforcement learning. For instance, Cheng et al. (2016) added a reconstruction loss for monolingual examples to the training objective. evaluated the quality of e by a language model andf by a reconstruction likelihood. Both approaches have symmetric forward and backward translation models which are updated alternatively. This require policy gradient algorithms for training, which are not always stable.",
"cite_spans": [
{
"start": 212,
"end": 231,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Back-translation (Sennrich et al., 2016a) performs half of the reconstruction process, by generating a synthetic source side for monolingual target language examples: e \u2192f . It uses an auxiliary backward model to generate the synthetic data but only updates the parameters of the primary forward model. Iteratively updating forward and backward models (Zhang et al., 2018; Niu et al., 2018) is an expensive solution as back-translations are regenerated at each iteration.",
"cite_spans": [
{
"start": 17,
"end": 41,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF30"
},
{
"start": 352,
"end": 372,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 373,
"end": 390,
"text": "Niu et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Prior work has sought to simplify the optimization of reconstruction losses by side-stepping beam search. Tu et al. (2017) first proposed to reconstruct NMT input from the decoder's hidden states while Wang et al. (2018a,b) suggested to use both encoder and decoder hidden states to improve translation of dropped pronouns. However, these models might achieve low reconstruction errors by learning to copy the input to hidden states. To avoid copying the input, Artetxe et al. (2018) and Lample et al. (2018) used denoising autoencoders (Vincent et al., 2008) in unsupervised NMT.",
"cite_spans": [
{
"start": 106,
"end": 122,
"text": "Tu et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 202,
"end": 223,
"text": "Wang et al. (2018a,b)",
"ref_id": null
},
{
"start": 462,
"end": 483,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 488,
"end": 508,
"text": "Lample et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 537,
"end": 559,
"text": "(Vincent et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our approach is based instead on the Gumbel Softmax (Jang et al., 2017; Maddison et al., 2017) , which facilitates differentiable sampling of sequences of discrete tokens. It has been successfully applied in many sequence generation tasks, including artificial language emergence for multiagent communication (Havrylov and Titov, 2017) , composing tree structures from text (Choi et al., 2018) , and tasks under the umbrella of generative adversarial networks (Goodfellow et al., 2014) such as generating the context-free grammar (Kusner and Hern\u00e1ndez-Lobato, 2016), machine comprehension (Wang et al., 2017) and machine translation (Gu et al., 2018b) .",
"cite_spans": [
{
"start": 52,
"end": 71,
"text": "(Jang et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 72,
"end": 94,
"text": "Maddison et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 309,
"end": 335,
"text": "(Havrylov and Titov, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 374,
"end": 393,
"text": "(Choi et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 460,
"end": 485,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 589,
"end": 608,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 633,
"end": 651,
"text": "(Gu et al., 2018b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "NMT is framed as a conditional language model, where the probability of predicting target token e t at step t is conditioned on the previously generated sequence of tokens e <t and the source sequence f given the model parameter \u03b8. Suppose each token is indexed and represented as a one-hot vector, its probability is realized as a softmax function over a linear transformation a(h t ) where h t is the decoder's hidden state at step t:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "P (e t |e <t , f ; \u03b8) = softmax(a(h t )) e t . (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "The hidden state is calculated by a neural network g given the embeddings of the previous target tokens e <t in the embedding matrix E(e <t ) and the context c t coming from the source:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "h t = g(E(e <t ), c t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "( 2)In our bi-directional model, the source sentence can be either f or e and is respectively translated to e or f . The language is marked by a tag (e.g., <en>) at the beginning of each source sentence (Johnson et al., 2017; Niu et al., 2018) . To facilitate symmetric reconstruction, we also add language tags to target sentences. The training data corpus is then built by swapping the source and target sentences of a parallel corpus and appending the swapped version to the original.",
"cite_spans": [
{
"start": 203,
"end": 225,
"text": "(Johnson et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 226,
"end": 243,
"text": "Niu et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
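{
"text": "As an illustrative sketch (ours, not from the original paper; the tag format and the choice to give source and target the same tag are assumptions), the bi-directional training corpus for SW\u2194EN can be built as:\n\ndef make_bidirectional(pairs):\n    # pairs: list of (swahili_sentence, english_sentence) tuples\n    corpus = []\n    for f, e in pairs:\n        corpus.append(('<en> ' + f, '<en> ' + e))  # forward direction\n        corpus.append(('<sw> ' + e, '<sw> ' + f))  # swapped copy for the reverse direction\n    return corpus\n\nEach parallel pair contributes one training example per direction, and the language tag tells the single shared model which direction to translate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},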
{
"text": "Our bi-directional model performs both forward translation and backward reconstruction. By contrast, uni-directional models require an auxiliary reconstruction module, which introduces additional parameters. This module can be either a decoder-based reconstructor (Tu et al., 2017; Wang et al., 2018a,b) or a reversed dual NMT model (Cheng et al., 2016; Wang et al., 2018c; Zhang et al., 2018) .",
"cite_spans": [
{
"start": 264,
"end": 281,
"text": "(Tu et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 282,
"end": 303,
"text": "Wang et al., 2018a,b)",
"ref_id": null
},
{
"start": 333,
"end": 353,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 354,
"end": 373,
"text": "Wang et al., 2018c;",
"ref_id": "BIBREF38"
},
{
"start": 374,
"end": 393,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "Here the reconstructor, which shares the same parameter with the translator T (\u2022), can also be trained end-to-end by maximizing the loglikelihood of reconstructing f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L R = f log P (f | T (f ; \u03b8); \u03b8),",
"eq_num": "(3)"
}
],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "Combining with the forward translation likelihood",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L T = (f e) log P (e | f ; \u03b8),",
"eq_num": "(4)"
}
],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "we use L = L T +L R as the final training objective for f \u2192 e. The dual e \u2192 f model is trained simultaneously by swapping the language direction in bi-directional NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
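{
"text": "Written out per training pair (our restatement, consistent with Equations 3 and 4): for a parallel pair (f, e), the fine-tuning objective is L(f, e) = log P (e | f ; \u03b8) + log P (f | T (f ; \u03b8); \u03b8). The second term back-propagates through the sampled translation T (f ; \u03b8) because the same parameters \u03b8 act as both translator and reconstructor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},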
{
"text": "Reconstruction is reliable only with a model that produces reasonable base translations. Following prior work (Tu et al., 2017; Cheng et al., 2016) , we pre-train a base model with L T and fine-tune it with L T + L R .",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "(Tu et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 128,
"end": 147,
"text": "Cheng et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-Directional Reconstruction",
"sec_num": "3.1"
},
{
"text": "We use differentiable sampling to side-step beam search and back-propagate error signals. We use the Gumbel-Max reparameterization trick (Maddison et al., 2014) to sample a translation token at each time step from the softmax distribution in Equation 1:",
"cite_spans": [
{
"start": 137,
"end": 160,
"text": "(Maddison et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
{
"text": "e t = one-hot arg max k a(h t ) k + G k (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
{
"text": "where G k is i.i.d. and drawn from Gumbel(0, 1) 1 . We use scaled Gumbel with parameter \u03b2, i.e. Gumbel(0, \u03b2), to control the randomness. The sampling becomes deterministic (which is equivalent to greedy search) as \u03b2 approaches 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
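{
"text": "The sampling step of Equation 5 with scaled Gumbel noise can be sketched as follows (our illustration, assuming a NumPy-style API; function and variable names are hypothetical):\n\nimport numpy as np\n\ndef gumbel_max_sample(logits, beta=0.5):\n    # G_k ~ Gumbel(0, beta): G_k = -beta * log(-log(u_k)), u_k ~ Uniform(0, 1)\n    u = np.random.uniform(size=logits.shape)\n    g = -beta * np.log(-np.log(u))\n    # beta -> 0 recovers deterministic greedy search (plain argmax)\n    return np.argmax(logits + g)\n\nThe returned index is then converted to a one-hot vector as in Equation 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},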
{
"text": "Since arg max is not a differentiable operation, we approximate its gradient with the Straight-Through Gumbel Softmax (STGS) (Jang et al., 2017; Bengio et al., 2013) : \u2207 \u03b8 e t \u2248 \u2207 \u03b8\u1ebdt , wher\u1ebd",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(Jang et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 145,
"end": 165,
"text": "Bengio et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e t = softmax (a(h t ) + G)/\u03c4",
"eq_num": "(6)"
}
],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
{
"text": "As \u03c4 approaches 0, softmax is closer to arg max but training might be more unstable. While the STGS estimator is biased when \u03c4 is large, it performs well in practice (Gu et al., 2018b; Choi et al., 2018) and is sometimes faster and more effective than reinforcement learning (Havrylov and Titov, 2017) .",
"cite_spans": [
{
"start": 166,
"end": 184,
"text": "(Gu et al., 2018b;",
"ref_id": "BIBREF12"
},
{
"start": 185,
"end": 203,
"text": "Choi et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 275,
"end": 301,
"text": "(Havrylov and Titov, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
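{
"text": "The straight-through estimator of Equation 6 can be sketched as follows (our illustration, assuming a PyTorch-style autograd API; names are hypothetical):\n\nimport torch\nimport torch.nn.functional as F\n\ndef stgs_sample(logits, tau=2.0, beta=0.5):\n    g = -beta * torch.log(-torch.log(torch.rand_like(logits)))\n    y_soft = F.softmax((logits + g) / tau, dim=-1)  # Equation 6\n    y_hard = F.one_hot(y_soft.argmax(dim=-1), logits.size(-1)).float()\n    # forward pass emits the one-hot sample; backward pass uses the softmax gradient\n    return (y_hard - y_soft).detach() + y_soft\n\nThe discrete sample is used in the forward computation while gradients flow through the Gumbel Softmax relaxation, i.e. \u2207 \u03b8 e t \u2248 \u2207 \u03b8 \u1ebd t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},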
{
"text": "To generate coherent intermediate translations, the decoder used for sampling only consumes its previously predicted\u00ea <t . This contrasts with the usual teacher forcing strategy (Williams and Zipser, 1989) , which always feeds in the groundtruth previous tokens e <t when predicting the current token\u00ea t . With teacher forcing, the sequence concatenation [e <t ;\u00ea t ] is probably coherent at each time step, but the actual predicted sequence [\u00ea <t ;\u00ea t ] would break the continuity. 2 1 i.e. G k = \u2212 log(\u2212 log(u k )) and u k \u223c Uniform(0, 1 ",
"cite_spans": [
{
"start": 178,
"end": 205,
"text": "(Williams and Zipser, 1989)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Sampling",
"sec_num": "3.2"
},
{
"text": "We evaluate our approach on four low-resource language pairs. Parallel data for Swahili\u2194English (SW\u2194EN), Tagalog\u2194English (TL\u2194EN) and Somali\u2194English (SO\u2194EN) contains a mixture of domains such as news and weblogs and is collected from the IARPA MATERIAL program 3 , the Global Voices parallel corpus 4 , Common Crawl (Smith et al., 2013) , and the LORELEI Somali representative language pack (LDC2018T11). The test samples are extracted from the held-out ANALYSIS set of MATERIAL. Parallel Turkish\u2194English (TR\u2194EN) data is provided by the WMT news translation task (Bojar et al., 2018) . We use pre-processed \"corpus\", \"newsdev2016\", \"newstest2017\" as training, development and test sets. 5 We apply normalization, tokenization, truecasing, joint source-target BPE with 32,000 operations (Sennrich et al., 2016b) and sentencefiltering (length 80 cutoff) to parallel data. Itemized data statistics after preprocessing can be found in Table 1 . We report case-insensitive BLEU with the WMT standard '13a' tokenization using SacreBLEU (Post, 2018) .",
"cite_spans": [
{
"start": 315,
"end": 335,
"text": "(Smith et al., 2013)",
"ref_id": "BIBREF32"
},
{
"start": 562,
"end": 582,
"text": "(Bojar et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 686,
"end": 687,
"text": "5",
"ref_id": null
},
{
"start": 785,
"end": 809,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF31"
},
{
"start": 1029,
"end": 1041,
"text": "(Post, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 930,
"end": 937,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tasks and Data",
"sec_num": "4.1"
},
{
"text": "We build NMT models upon the attentional RNN encoder-decoder architecture (Bahdanau et al., 2015) implemented in the Sockeye toolkit (Hieber et al., 2017 ). Our translation model uses a bidirectional encoder with a single LSTM layer of size 512, multilayer perceptron attention with a layer size of 512, and word representations of size 512. We apply layer normalization (Ba et al., Table 2 : BLEU scores on eight translation directions. The numbers before and after '\u00b1' are the mean and standard deviation over five randomly seeded models. Our proposed methods (\u03b2 = 0/0.5) achieve small but consistent improvements. \u2206BLEU scores are in bold if mean\u2212std is above zero while in red if the mean is below zero.",
"cite_spans": [
{
"start": 74,
"end": 97,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 133,
"end": 153,
"text": "(Hieber et al., 2017",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Configuration and Baseline",
"sec_num": "4.2"
},
{
"text": "2016) and add dropout to embeddings and RNNs (Gal and Ghahramani, 2016 ) with probability 0.2. We train using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 48 sentences and we checkpoint the model every 1000 updates. The learning rate for baseline models is initialized to 0.001 and reduced by 30% after 4 checkpoints without improvement of perplexity on the development set. Training stops after 10 checkpoints without improvement.",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "(Gal and Ghahramani, 2016",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Baseline",
"sec_num": "4.2"
},
{
"text": "The bi-directional NMT model ties source and target embeddings to yield a bilingual vector space. It also ties the output layer's weights and embeddings to achieve better performance in lowresource scenarios (Press and Wolf, 2017; Nguyen and Chiang, 2018) .",
"cite_spans": [
{
"start": 208,
"end": 230,
"text": "(Press and Wolf, 2017;",
"ref_id": "BIBREF28"
},
{
"start": 231,
"end": 255,
"text": "Nguyen and Chiang, 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Baseline",
"sec_num": "4.2"
},
{
"text": "We train five randomly seeded bi-directional baseline models by optimizing the forward translation objective L T and report the mean and standard deviation of test BLEU. We fine-tune baseline models with objective L T + L R , inheriting all settings except the learning rate which is re-initialized to 0.0001. Each randomly seeded model is fine-tuned independently, so we are able to report the standard deviation of \u2206BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configuration and Baseline",
"sec_num": "4.2"
},
{
"text": "We compare our approach with reconstruction from hidden states (HIDDEN). Following the best practice of Wang et al. (2018a) , two reconstructors are used to take hidden states from both the encoder and the decoder. The corresponding two reconstruction losses and the canonical translation loss were originally uniformly weighted (i.e. 1, 1, 1), but we found that balancing the reconstruction and translation losses yields better results (i.e. 0.5, 0.5, 1) in preliminary experiments. 6 We use the reconstructor exclusively to compute the reconstruction training loss. It has also been 6 We observed around 0.2 BLEU gains for TR\u2194EN tasks.",
"cite_spans": [
{
"start": 104,
"end": 123,
"text": "Wang et al. (2018a)",
"ref_id": "BIBREF36"
},
{
"start": 484,
"end": 485,
"text": "6",
"ref_id": null
},
{
"start": 585,
"end": 586,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contrastive Reconstruction Model",
"sec_num": "4.3"
},
{
"text": "used to re-rank translation hypotheses in prior work, but Tu et al. (2017) showed in ablation studies that the gains from re-ranking are small compared to those from training. Table 2 shows that our reconstruction approach achieves small but consistent BLEU improvements over the baseline on all eight tasks. 7 We evaluate the impact of the Gumbel Softmax hyperparameters on the development set. We select \u03c4 = 2 and \u03b2 = 0/0.5 based on training stability and BLEU. Greedy search (i.e. \u03b2 = 0) performs similarly as sampling with increased Gumbel noise (i.e. more random translation selection when \u03b2 = 0.5): increased randomness in sampling does not have a strong impact on BLEU, even though random sampling may approximate the data distribution better . We hypothesize that more random translation selection introduces lower quality samples and therefore noisier training signals. This is consistent with the observation that random sampling is less effective for back-translation in low-resource settings (Edunov et al., 2018) .",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "Tu et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 309,
"end": 310,
"text": "7",
"ref_id": null
},
{
"start": 1004,
"end": 1025,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contrastive Reconstruction Model",
"sec_num": "4.3"
},
{
"text": "Sampling-based reconstruction is effective even if there is moderate domain mismatch between the training and the test data, such as in the case that the word type out-of-vocabulary (OOV) rate of TR\u2192EN is larger than 20%. Larger improvements can be achieved when the test data is closer to training examples. For example, the OOV rate of SW\u2192EN is much smaller than the OOV rate of TR\u2192EN and the former obtains higher \u2206BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Our approach yields more consistent results than reconstructing from hidden states. The latter fails to improve BLEU in more difficult cases, such as TR\u2194EN with high OOV rates. We observe extremely low training perplexity for HID- DEN compared with our proposed approach (Figure 1a) . This suggests that HIDDEN yields representations that memorize the input rather than improve output representations. Another advantage of our approach is that all parameters were jointly pre-trained, which results in more stable training behavior. By contrast, reconstructing from hidden states requires to initialize the reconstructors independently and suffers from unstable early training behavior (Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 282,
"text": "(Figure 1a)",
"ref_id": "FIGREF0"
},
{
"start": 686,
"end": 695,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "We studied reconstructing the input of NMT from its intermediate translations to better exploit training samples in low-resource settings. We used a bi-directional NMT model and the Straight-Through Gumbel Softmax to build a fully differentiable reconstruction model that does not require any additional parameters. We empirically demonstrated that our approach is effective in low-resource scenarios. In future work, we will investigate the use of differentiable reconstruction from sampled sequences in unsupervised and semi-supervised sequence generation tasks. In particular, we will exploit monolingual corpora in addition to parallel corpora for NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://www.iarpa.gov/index.php/ research-programs/material 4 http://casmacat.eu/corpus/ global-voices.html 5 http://data.statmt.org/wmt18/ translation-task/preprocessed/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The improvements are significant with p < 0.01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their helpful comments and suggestions. We also thank the members of the Computational Linguistics and Information Processing (CLIP) lab at the University of Maryland for helpful discussions.This research is based upon work supported in part by an Amazon Web Services Machine Learning Research Award, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. In Proceedings of the 6th Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3th International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Estimating or propagating gradients through stochastic neurons for conditional computation",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "L\u00e9onard",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Nicholas L\u00e9onard, and Aaron C. Courville. 2013. Estimating or propagating gradi- ents through stochastic neurons for conditional com- putation. CoRR, abs/1308.3432.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Findings of the 2018 conference on machine translation (WMT18)",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "272--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, and Christof Monz. 2018. Find- ings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation, pages 272-303. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Back-translation for crosscultural research",
"authors": [
{
"first": "Richard",
"middle": [
"W"
],
"last": "Brislin",
"suffix": ""
}
],
"year": 1970,
"venue": "Journal of Cross-Cultural Psychology",
"volume": "1",
"issue": "3",
"pages": "185--216",
"other_ids": {
"DOI": [
"http://journals.sagepub.com/doi/10.1177/135910457000100301"
]
},
"num": null,
"urls": [],
"raw_text": "Richard W. Brislin. 1970. Back-translation for cross- cultural research. Journal of Cross-Cultural Psy- chology, 1(3):185-216.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semisupervised learning for neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1965--1974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1965-1974. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to compose task-specific tree structures",
"authors": [
{
"first": "Jihun",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Kang Min",
"middle": [],
"last": "Yoo",
"suffix": ""
},
{
"first": "Sang-goo",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5094--5101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5094-5101. AAAI Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 489-500. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "1019--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Informa- tion Processing Systems 29, pages 1019-1027. Cur- ran Associates, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2672--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Gen- erative adversarial nets. In Advances in Neural In- formation Processing Systems 27, pages 2672-2680. Curran Associates, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Universal neural machine translation for extremely low resource languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018a. Universal neural machine transla- tion for extremely low resource languages. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 344-354. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural machine translation with gumbelgreedy decoding",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Jiwoong"
],
"last": "Im",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5125--5132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Daniel Jiwoong Im, and Victor O. K. Li. 2018b. Neural machine translation with gumbel- greedy decoding. In Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, pages 5125-5132. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Emergence of language with multi-agent games: Learning to communicate with sequences of symbols",
"authors": [
{
"first": "Serhii",
"middle": [],
"last": "Havrylov",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "2146--2156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serhii Havrylov and Ivan Titov. 2017. Emergence of language with multi-agent games: Learning to com- municate with sequences of symbols. In Advances in Neural Information Processing Systems 30, pages 2146-2156. Curran Associates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "820--828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Advances in Neural Information Processing Systems 29, pages 820-828. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. CoRR, abs/1712.05690.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Categorical reparameterization with gumbel-softmax",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Shixiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Cate- gorical reparameterization with gumbel-softmax. In Proceedings of the 5th International Conference on Learning Representations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transac- tions of the Association for Computational Linguis- tics, 5:339-351.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of the 3th International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 28-39. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GANS for sequences of discrete elements with the gumbel-softmax distribution",
"authors": [
{
"first": "Matt",
"middle": [
"J"
],
"last": "Kusner",
"suffix": ""
},
{
"first": "Jos\u00e9 Miguel",
"middle": [],
"last": "Hern\u00e1ndez-Lobato",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NIPS 2016 Workshop on Adversarial Training",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt J. Kusner and Jos\u00e9 Miguel Hern\u00e1ndez-Lobato. 2016. GANS for sequences of discrete elements with the gumbel-softmax distribution. In Proceed- ings of the NIPS 2016 Workshop on Adversarial Training.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The concrete distribution: A continuous relaxation of discrete random variables",
"authors": [
{
"first": "Chris",
"middle": [
"J"
],
"last": "Maddison",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous re- laxation of discrete random variables. In Proceed- ings of the 5th International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A* sampling",
"authors": [
{
"first": "Chris",
"middle": [
"J"
],
"last": "Maddison",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Tarlow",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "3086--3094",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris J. Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Advances in Neural Infor- mation Processing Systems 27, pages 3086-3094. Curran Associates, Inc.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving lexical choice in neural machine translation",
"authors": [
{
"first": "Toan",
"middle": [
"Q"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Q. Nguyen and David Chiang. 2018. Improv- ing lexical choice in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 334-343. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bi-directional neural machine translation with synthetic parallel data",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "84--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Niu, Michael Denkowski, and Marine Carpuat. 2018. Bi-directional neural machine translation with synthetic parallel data. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 84-91. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "3953--3962",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncer- tainty in neural machine translation. In Proceed- ings of the 35th International Conference on Ma- chine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 3953-3962. PMLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation, pages 186-191. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Using the output embedding to improve language models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Computational",
"volume": "",
"issue": "",
"pages": "157--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Compu- tational, pages 157-163. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised pretraining for sequence to sequence learning",
"authors": [
{
"first": "Prajit",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "383--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prajit Ramachandran, Peter J. Liu, and Quoc V. Le. 2017. Unsupervised pretraining for sequence to se- quence learning. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 383-391. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics, pages 86-96. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics, pages 1715-1725. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Dirt cheap web-scale parallel text from the common crawl",
"authors": [
{
"first": "Jason",
"middle": [
"R"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Plamada",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1374--1383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason R. Smith, Herve Saint-Amand, Magdalena Pla- mada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics, pages 1374-1383. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural machine translation with reconstruction",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3097--3103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3097-3103. AAAI Press.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Extracting and composing robust features with denoising autoencoders",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1096--1103",
"other_ids": {
"DOI": [
"10.1145/1390156.1390294"
]
},
"num": null,
"urls": [],
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- coders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096- 1103. ACM.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Conditional generative adversarial networks for commonsense machine comprehension",
"authors": [
{
"first": "Bingning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4123--4129",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/576"
]
},
"num": null,
"urls": [],
"raw_text": "Bingning Wang, Kang Liu, and Jun Zhao. 2017. Con- ditional generative adversarial networks for com- monsense machine comprehension. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 4123-4129.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Translating pro-drop languages with reconstruction models",
"authors": [
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "4937--4945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018a. Trans- lating pro-drop languages with reconstruction mod- els. In Proceedings of the Thirty-Second AAAI Con- ference on Artificial Intelligence, pages 4937-4945. AAAI Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning to jointly translate and predict dropped pronouns with a shared reconstruction mechanism",
"authors": [
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2997--3002",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2018b. Learning to jointly translate and pre- dict dropped pronouns with a shared reconstruction mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 2997-3002. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Dual transfer learning for neural machine translation with marginal distribution regularization",
"authors": [
{
"first": "Yijun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Guiquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5553--5560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijun Wang, Yingce Xia, Li Zhao, Jiang Bian, Tao Qin, Guiquan Liu, and Tie-Yan Liu. 2018c. Dual transfer learning for neural machine translation with marginal distribution regularization. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5553-5560. AAAI Press.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural Computation",
"volume": "1",
"issue": "2",
"pages": "270--280",
"other_ids": {
"DOI": [
"10.1162/neco.1989.1.2.270"
]
},
"num": null,
"urls": [],
"raw_text": "Ronald J. Williams and David Zipser. 1989. A learn- ing algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270- 280.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Joint training for neural machine translation models with monolingual data",
"authors": [
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "555--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and En- hong Chen. 2018. Joint training for neural machine translation models with monolingual data. In Pro- ceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 555-562. AAAI Press.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Training curves of perplexity on the training and the development sets for TR\u2194EN. Reconstructing from hidden states (HIDDEN) and reconstructing from sampled translations (\u03b2 = 0) are compared. HIDDEN achieves extremely low training perplexity and suffers from unstable training during the early stage.",
"num": null
},
"TABREF1": {
"html": null,
"text": "Experiments are conducted on four lowresource language pairs, in both translation directions.",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"text": "Baseline 33.60 \u00b1 0.14 30.70 \u00b1 0.19 27.23 \u00b1 0.11 32.15 \u00b1 0.21 12.25 \u00b1 0.08 20.80 \u00b1 0.12 12.90 \u00b1 0.04 15.32 \u00b1 0.11 HIDDEN 33.41 \u00b1 0.15 30.91 \u00b1 0.19 27.43 \u00b1 0.14 32.20 \u00b1 0.35 12.30 \u00b1 0.11 20.72 \u00b1 0.16 12.77 \u00b1 0.11 15.34 \u00b1 0.10 \u2206 -0.19 \u00b1 0.24 0.21 \u00b1 0.14 0.19 \u00b1 0.13 0.04 \u00b1 0.17 0.05 \u00b1 0.11 -0.08 \u00b1 0.12 -0.13 \u00b1 0.13 0.01 \u00b1 0.07 \u03b2 = 0 33.92 \u00b1 0.10 31.37 \u00b1 0.18 27.65 \u00b1 0.09 32.75 \u00b1 0.32 12.47 \u00b1 0.08 21.14 \u00b1 0.19 13.26 \u00b1 0.07 15.60 \u00b1 0.19 33.97 \u00b1 0.08 31.39 \u00b1 0.09 27.65 \u00b1 0.10 32.65 \u00b1 0.24 12.48 \u00b1 0.09 21.20 \u00b1 0.14 13.16 \u00b1 0.08 15.52 \u00b1 0.07",
"content": "<table><tr><td>Model</td><td>EN\u2192SW</td><td>SW\u2192EN</td><td>EN\u2192TL</td><td>TL\u2192EN</td><td>EN\u2192SO</td><td>SO\u2192EN</td><td>EN\u2192TR</td><td>TR\u2192EN</td></tr><tr><td>\u2206</td><td>0.32 \u00b1 0.12</td><td>0.66 \u00b1 0.11</td><td>0.42 \u00b1 0.16</td><td>0.59 \u00b1 0.13</td><td>0.22 \u00b1 0.04</td><td>0.35 \u00b1 0.15</td><td>0.36 \u00b1 0.09</td><td>0.28 \u00b1 0.11</td></tr><tr><td>\u03b2 = 0.5 \u2206</td><td>0.37 \u00b1 0.09</td><td>0.69 \u00b1 0.11</td><td>0.42 \u00b1 0.11</td><td>0.50 \u00b1 0.08</td><td>0.23 \u00b1 0.03</td><td>0.41 \u00b1 0.13</td><td>0.25 \u00b1 0.09</td><td>0.19 \u00b1 0.05</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}