{
"paper_id": "D18-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:51:57.291626Z"
},
"title": "A Stable and Effective Learning Strategy for Trainable Greedy Decoding",
"authors": [
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong",
"location": {}
},
"email": ""
},
{
"first": "Victor",
"middle": [
"O K"
],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong",
"location": {}
},
"email": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "kyunghyun.cho@nyu.edu"
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": "bowman@nyu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation. However, this improvement comes at substantial computational cost. In this paper, we propose a flexible new method that allows us to reap nearly the full benefits of beam search with nearly no additional computational cost. The method revolves around a small neural network actor that is trained to observe and manipulate the hidden state of a previously-trained decoder. To train this actor network, we introduce the use of a pseudo-parallel corpus built using the output of beam search on a base model, ranked by a target quality metric like BLEU. Our method is inspired by earlier work on this problem, but requires no reinforcement learning, and can be trained reliably on a range of models. Experiments on three parallel corpora and three architectures show that the method yields substantial improvements in translation quality and speed over each base system.",
"pdf_parse": {
"paper_id": "D18-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation. However, this improvement comes at substantial computational cost. In this paper, we propose a flexible new method that allows us to reap nearly the full benefits of beam search with nearly no additional computational cost. The method revolves around a small neural network actor that is trained to observe and manipulate the hidden state of a previously-trained decoder. To train this actor network, we introduce the use of a pseudo-parallel corpus built using the output of beam search on a base model, ranked by a target quality metric like BLEU. Our method is inspired by earlier work on this problem, but requires no reinforcement learning, and can be trained reliably on a range of models. Experiments on three parallel corpora and three architectures show that the method yields substantial improvements in translation quality and speed over each base system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural network sequence decoders yield state-of-the-art results for many text generation tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Dehghani et al., 2018), text summarization (Rush et al., 2015; Ranzato et al., 2015; See et al., 2017; Paulus et al., 2017) and image captioning (Xu et al., 2015). These decoders generate tokens from left to right, at each step giving a distribution over possible next tokens, conditioned on both the input and all the tokens generated so far. However, since the space of all possible output sequences is infinite and grows exponentially with sequence length, heuristic search methods such as greedy decoding or beam search (Graves, 2012; Boulanger-Lewandowski et al., 2013) must be used at decoding time to select high-probability output sequences. Unlike greedy decoding, which selects the token with the highest probability at each step, beam search expands all possible next tokens at each step, and maintains the k most likely prefixes, where k is the beam size. Greedy decoding is very fast-requiring only a single run of the underlying decoder-while beam search requires an equivalent of k such runs, as well as substantial additional overhead for data management. However, beam search often leads to substantial improvement over greedy decoding. For example, Ranzato et al. (2015) report that beam search (with k = 10) gives a 2.2 BLEU improvement in translation and a 3.5 ROUGE-2 improvement in summarization over greedy decoding.",
"cite_spans": [
{
"start": 125,
"end": 148,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 149,
"end": 168,
"text": "Luong et al., 2015;",
"ref_id": "BIBREF27"
},
{
"start": 169,
"end": 190,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 191,
"end": 212,
"text": "Vaswani et al., 2017;",
"ref_id": "BIBREF39"
},
{
"start": 213,
"end": 234,
"text": "Dehghani et al., 2018",
"ref_id": "BIBREF11"
},
{
"start": 257,
"end": 276,
"text": "(Rush et al., 2015;",
"ref_id": "BIBREF32"
},
{
"start": 277,
"end": 298,
"text": "Ranzato et al., 2015;",
"ref_id": "BIBREF31"
},
{
"start": 299,
"end": 316,
"text": "See et al., 2017;",
"ref_id": "BIBREF33"
},
{
"start": 317,
"end": 337,
"text": "Paulus et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 359,
"end": 375,
"text": "Xu et al., 2015)",
"ref_id": "BIBREF42"
},
{
"start": 739,
"end": 753,
"text": "(Graves, 2012;",
"ref_id": "BIBREF16"
},
{
"start": 754,
"end": 789,
"text": "Boulanger-Lewandowski et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 1367,
"end": 1401,
"text": "For example, Ranzato et al. (2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various approaches have been explored recently to improve beam search by improving the method by which candidate sequences are scored (Li et al., 2016; Shu and Nakayama, 2017) , the termination criterion (Huang et al., 2017) , or the search function itself . In contrast, Gu et al. (2017) have tried to directly improve greedy decoding to decode for an arbitrary decoding objective. They add a small actor network to the decoder and train it with a version of policy gradient to optimize sequence objectives like BLEU. However, they report that they are seriously limited by the instability of this approach to training.",
"cite_spans": [
{
"start": 134,
"end": 151,
"text": "(Li et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 152,
"end": 175,
"text": "Shu and Nakayama, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 204,
"end": 224,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 272,
"end": 288,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a procedure to modify a trained decoder to allow it to generate text greedily with the level of quality (according to metrics like BLEU) that would otherwise require the relatively expensive use of beam search. To do so, we follow Cho (2016) and Gu et al. (2017) in our use of an actor network which manipulates the decoder's hidden state, but introduce a stable and effective procedure to train this actor. In our training procedure, the actor is trained with ordinary backpropagation on a model-specific artificial parallel corpus. This corpus is generated by running the un-augmented model on the training set with large-beam beam search, and selecting outputs from the resulting k-best list which score highly on our target metric.",
"cite_spans": [
{
"start": 257,
"end": 267,
"text": "Cho (2016)",
"ref_id": "BIBREF8"
},
{
"start": 272,
"end": 288,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our method can be trained quickly and reliably, is effective, and can be straightforwardly employed with a variety of decoders. We demonstrate this for neural machine translation on three state-of-the-art architectures: RNN-based (Luong et al., 2015) , ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017) , and three corpora: IWSLT16 German-English, 1 WMT15 Finnish-English 2 and WMT14 German-English. 3",
"cite_spans": [
{
"start": 230,
"end": 250,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 261,
"end": 283,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 300,
"end": 322,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In sequence-to-sequence learning, we are given a set of source-target sentence pairs and tasked with learning to generate each target sentence (as a sequence of words or word-parts) from its source sentence. We first use an encoding model such as a recurrent neural network to transform a source sequence into an encoded representation, then generate the target sequence using a neural decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "Given a source sentence x = {x_1, ..., x_{T_s}}, a neural machine translation system models the distribution over possible output sentences y = {y_1, ..., y_T} as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(y|x; \u03b8) = \u220f_{t=1}^{T} P(y_t|y_{<t}, x; \u03b8),",
"eq_num": "(1)"
}
],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "where \u03b8 is the set of model parameters. Given a parallel corpus D_{x,y} of source-target sentence pairs, the neural machine translation model can be trained by maximizing the log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "\u03b8\u0302 = argmax_\u03b8 \u2211_{\u27e8x,y\u27e9 \u2208 D_{x,y}} log P(y|x; \u03b8). (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2.1"
},
{
"text": "1 https://wit3.fbk.eu/ 2 http://www.statmt.org/wmt15/translation-task.html 3 http://www.statmt.org/wmt14/translation-task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "2.2"
},
{
"text": "Given estimated model parameters \u03b8\u0302, the decision rule for finding the translation with the highest probability for a source sentence x is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0177 = argmax_y P(y|x; \u03b8\u0302).",
"eq_num": "(3)"
}
],
"section": "Decoding",
"sec_num": "2.2"
},
{
"text": "However, since such exact inference requires the intractable enumeration of a large and potentially infinite set of candidate sequences, we resort to approximate decoding algorithms such as greedy decoding, beam search, noisy parallel decoding (NPAD; Cho, 2016), or trainable greedy decoding (Gu et al., 2017).",
"cite_spans": [
{
"start": 290,
"end": 307,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "2.2"
},
{
"text": "In this algorithm, we generate a single sequence from left to right, by choosing the token that is most likely at each step. The output \u0177 = {\u0177_1, ..., \u0177_T} can be represented as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Decoding",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0177_t = argmax_y P(y|\u0177_{<t}, x; \u03b8\u0302).",
"eq_num": "(4)"
}
],
"section": "Greedy Decoding",
"sec_num": null
},
{
"text": "Despite its low computational complexity of O(|V| \u00d7 T), the translations selected by this method may be far from optimal under the overall distribution given by the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Decoding",
"sec_num": null
},
{
"text": "Beam Search Beam search decodes from left to right, and maintains k > 1 hypotheses at each step. At each step t, beam search considers all possible next tokens conditioned on the current hypotheses, and picks the k with the overall highest scores \u220f_{t'=1}^{t} P(y_{t'}|y_{<t'}, x; \u03b8\u0302). When all the hypotheses are complete (they end in an end-of-the-sentence symbol or reach a predetermined length limit), it returns the hypothesis with the highest likelihood. Tuning to find a roughly optimal beam size k can yield improvements in performance with sizes as high as 30 (Koehn and Knowles, 2017; Britz et al., 2017). However, the complexity of beam search grows linearly in beam size, with high constant terms, making it undesirable in some applications where latency is important, such as in on-device real-time translation.",
"cite_spans": [
{
"start": 557,
"end": 582,
"text": "(Koehn and Knowles, 2017;",
"ref_id": "BIBREF23"
},
{
"start": 583,
"end": 602,
"text": "Britz et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Decoding",
"sec_num": null
},
{
"text": "NPAD Noisy parallel approximate decoding (NPAD; Cho, 2016) is a parallel decoding algorithm that can be used to improve greedy decoding or beam search. The main idea is that a better translation with a higher probability may be found by injecting unstructured random noise into the hidden state of the decoder network. Positive results with NPAD suggest that small manipulations to the decoder hidden state can correspond to substantial but still reasonable changes to the output sequence. Trainable Greedy Decoding Approximate decoding algorithms generally approximate the maximum-a-posteriori inference described in Equation 3. This is not necessarily the optimal basis on which to generate text, since (i) the conditional log-probability assigned by a trained NMT model does not necessarily correspond well to translation quality (Tu et al., 2017) , and (ii) different application scenarios may demand different decoding objectives (Gu et al., 2017) . To solve this, Gu et al. (2017) extend NPAD by replacing the unstructured noise with a small feedforward actor neural network. This network is trained using a variant of policy gradient reinforcement learning to optimize for a target quality metric like BLEU under greedy decoding, and is then used to guide greedy decoding at test time by modifying the decoder's hidden states. Despite showing gains over the equivalent actorless model, their attempt to directly optimize the quality metric makes training unstable, and makes the model nearly impossible to optimize fully. This paper offers a stable and effective alternative approach to training such an actor, and further develops the architecture of the actor network.",
"cite_spans": [
{
"start": 833,
"end": 850,
"text": "(Tu et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 935,
"end": 952,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 970,
"end": 986,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Greedy Decoding",
"sec_num": null
},
{
"text": "We propose a method for training a small actor neural network, following the trainable greedy decoding approach of Gu et al. (2017). This actor takes as input the current decoder state h_t, an attentional context vector e_t for the source sentence, and optionally the previous hidden state s_{t\u22121} of the actor, and produces a vector-valued action a_t which is used to update the decoder hidden state. The actor function can take on a variety of forms, and we explore four: a feedforward network with one hidden layer (ff), a feedforward network with two hidden layers (ff2), a GRU recurrent network (rnn; Cho et al., 2014), and a gated feedforward network (gate).",
"cite_spans": [
{
"start": 115,
"end": 131,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 605,
"end": 622,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The feedforward ff actor function is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z_t = \u03c3([h_t, e_t]W_i + b_i), a_t = tanh(z_t W_o + b_o),",
"eq_num": "(5)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "the ff2 actor is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z^1_t = \u03c3([h_t, e_t]W_i + b_i), z^2_t = \u03c3(z^1_t W_z + b_z), a_t = tanh(z^2_t W_o + b_o),",
"eq_num": "(6)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "the rnn actor is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z_t = \u03c3([h_t, e_t]U_z + s_{t\u22121}W_z), r_t = \u03c3([h_t, e_t]U_r + s_{t\u22121}W_r), s\u0303_t = tanh([h_t, e_t]U_h + (s_{t\u22121} \u2022 r_t)W_h), s_t = (1 \u2212 z_t) \u2022 s\u0303_t + z_t \u2022 s_{t\u22121}, a_t = s_t U,",
"eq_num": "(7)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "and the gate actor is computed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z_t = \u03c3([h_t, e_t]U_z), a_t = z_t \u2022 tanh([h_t, e_t]U).",
"eq_num": "(8)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Once the action a_t has been computed, the hidden state h_t is simply replaced with the updated state h\u0303_t: h\u0303_t = f(h_t, e_t) + a_t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "(9) Figure 1 shows a single step of the actor interacting with the underlying neural decoder of each of the three NMT architectures we use: the RNN-based model of Luong et al. (2015), ConvS2S (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). We add the actor at the decoder layer immediately after the computation of the attentional context vector. For the RNN-based NMT, we add the actor network only to the last decoder layer, the only place attention is used. Here, it takes as input the hidden state of the last decoder layer h^L_t and the source context vector e_t, and outputs the action a_t, which is added back to the attention vector h\u0303^L_t. For ConvS2S and Transformer, we add an actor network to each decoder layer. This actor is added to the sublayer which performs multi-head or multi-step attention over the output of the encoder stack. It takes as input the decoder state h^l_t and the source context vector e^l_t, and outputs an action a^l_t which is added back to get h\u0303^l_t. Training To overcome the severe instability reported by Gu et al. (2017), we introduce the use of a pseudo-parallel corpus generated from the underlying NMT model (Gao and He, 2013; Auli and Gao, 2014; Kim and Rush, 2016; Freitag et al., 2017; Zhang et al., 2017) for actor training. This corpus includes pairs that both (i) have a high model likelihood, so that we can coerce the model to generate them without much additional training or many new parameters, and (ii) represent high-quality translations, measured according to a target metric like BLEU. We do this by generating sentences from the original unaugmented model with large-beam beam search and selecting the best sentence from the resulting k-best list according to the decoding objective.",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF27"
},
{
"start": 192,
"end": 214,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 233,
"end": 255,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 1066,
"end": 1082,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF18"
},
{
"start": 1174,
"end": 1192,
"text": "(Gao and He, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 1193,
"end": 1212,
"text": "Auli and Gao, 2014;",
"ref_id": "BIBREF1"
},
{
"start": 1213,
"end": 1232,
"text": "Kim and Rush, 2016;",
"ref_id": "BIBREF21"
},
{
"start": 1233,
"end": 1254,
"text": "Freitag et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 1255,
"end": 1274,
"text": "Zhang et al., 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "More specifically, let \u27e8x, y\u27e9 be a sentence pair in the training data and Z = {z_1, ..., z_k} be the k-best list from beam search on the pretrained NMT model, where k is the beam size. We define the objective score of the translation z w.r.t. the gold-standard translation y according to a target metric such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), negative TER (Snover et al., 2006), or METEOR (Lavie and Denkowski, 2009) as O(z, y). Then we choose the sentence z\u0303 that has the highest score to become our new target sentence:",
"cite_spans": [
{
"start": 315,
"end": 338,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
},
{
"start": 346,
"end": 364,
"text": "(Doddington, 2002)",
"ref_id": "BIBREF12"
},
{
"start": 380,
"end": 401,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF37"
},
{
"start": 414,
"end": 441,
"text": "(Lavie and Denkowski, 2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z\u0303 = argmax_{z \u2208 {z_1, ..., z_k}} O(z, y).",
"eq_num": "(10)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Once we obtain the pseudo-corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "D_{x,z\u0303} = {\u27e8x_i, z\u0303_i\u27e9}_{i=1}^n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "we keep the underlying model fixed and train the actor by maximizing the log-likelihood of the actor parameters with these pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8\u0302_a = argmax_{\u03b8_a} \u2211_{\u27e8x,z\u0303\u27e9 \u2208 D_{x,z\u0303}} log P(z\u0303|x; \u03b8\u0302, \u03b8_a)",
"eq_num": "(11)"
}
],
"section": "Methods",
"sec_num": "3"
},
{
"text": "In this way, the actor network is trained to manipulate the neural decoder's hidden state at decoding time to induce it to produce better-scoring outputs under greedy or small-beam decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "We evaluate our approach on IWSLT16 German-English, WMT15 Finnish-English, and WMT14 De-En translation in both directions with three strong translation model architectures. For IWSLT16, we use tst2013 and tst2014 for validation and testing, respectively. For WMT15, we use newstest2013 and newstest2015 for validation and testing, respectively. For WMT14, we use newstest2013 and newstest2014 for validation and testing, respectively. All the data are tokenized and segmented into subword symbols using byte-pair encoding (BPE; Sennrich et al., 2016) to restrict the size of the vocabulary. Our primary evaluations use tokenized and cased BLEU. For METEOR and TER evaluations, we use multeval 4 with tokenized and case-insensitive scoring. All the underlying models are trained from scratch, except for ConvS2S WMT14 English-German translation, for which we use the trained model (as well as training data) provided by Gehring et al. (2017). Table 2: Generation quality (BLEU\u2191) using the proposed trainable greedy decoder without and with beam search (k = 4). Results without beam search (tg) also appear in Table 1. RNN We use OpenNMT-py (Klein et al., 2017) 6 to implement our model. It is composed of an encoder with a two-layer bidirectional RNN, and a decoder with another two-layer RNN. We refer to OpenNMT's default setting (rnn_size = 500, word_vec_size = 500) and the setting in Artetxe et al. (2018) (rnn_size = 600, word_vec_size = 300), and choose similar hyper-parameters: rnn_size = 500, word_vec_size = 300 for IWSLT16 and rnn_size = 600, word_vec_size = 500 for WMT. We use the input-feeding decoder and global attention with the general alignment function (Luong et al., 2015).",
"cite_spans": [
{
"start": 528,
"end": 550,
"text": "Sennrich et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 919,
"end": 939,
"text": "Gehring et al. (2017",
"ref_id": "BIBREF15"
},
{
"start": 1144,
"end": 1164,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1675,
"end": 1694,
"text": "(Luong et al., 2015",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 940,
"end": 947,
"text": "Table 2",
"ref_id": null
},
{
"start": 1113,
"end": 1121,
"text": "Table 1.",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "6 https://github.com/OpenNMT/OpenNMT-py",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "We implement our model based on fairseq-py. 7 We follow the settings in fconv_iwslt_de_en and fconv_wmt_en_de for IWSLT16 and WMT, respectively.",
"cite_spans": [
{
"start": 44,
"end": 45,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ConvS2S",
"sec_num": null
},
{
"text": "Transformer We implement our model based on the code from Gu et al. (2018) . 8 We follow their hyperparameter settings for all experiments.",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "Gu et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 77,
"end": 78,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ConvS2S",
"sec_num": null
},
{
"text": "In the results below, we focus on the gate actor and pseudo-parallel corpora constructed by choosing the sentence with the best BLEU score from the k-best list produced by beam search with k = 35. Experiments motivating these choices are shown later in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ConvS2S",
"sec_num": null
},
{
"text": "The results (Table 1) show that the use of the actor makes it practical to replace beam search with greedy decoding in most cases: we lose little or no performance, and doing so yields an increase in decoding efficiency, even accounting for the small overhead added by the actor. Among the three architectures, ConvS2S-the one with the most and largest layers-performs best. We conjecture that this gives the decoder more flexibility with which to guide decoding. In cases where model throughput is less important, our method can also be combined with beam search at test time to yield results somewhat better than either could achieve alone. Table 2 shows the result when combining our method with beam search. The morning also wanted to continue its discussions on migration and integration . beam4",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 21,
"text": "(Table 1)",
"ref_id": "TABREF1"
},
{
"start": 643,
"end": 650,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "In the morning , the working group on migration and integration also wanted to continue its discussions . beam35",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "In the morning , the migration and integration working group also wanted to continue its discussions . We show translations generated by the underlying transformer using greedy decoding, beam search with k = 4, and beam search with k = 35 and the oracle BLEU scorer ( ). We also show the translations using our trainable greedy decoder both without and with beam search. Phrases of interest are underlined. Examples Table 3 shows a few selected translations from the WMT14 German-English test set. In manual inspection of these examples and others, we find that the actor encourages models to recover missing tokens, optimize word order, and correct prepositions.",
"cite_spans": [],
"ref_spans": [
{
"start": 416,
"end": 423,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "Likelihood We also compare word-level likelihood for different decoding results assigned by the base model and the actor-augmented model. For a sentence pair \u27e8x, y\u27e9, word-level likelihood is defined as in Equation 12. Table 4 shows the word-level likelihood averaged over the test set for IWSLT16 and WMT14 German to English translation with Transformer.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_w = (1/T) \u2211_{t=1}^{T} P(y_t|y_{<t}, x; \u03b8).",
"eq_num": "(12)"
}
],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "Our trainable greedy decoder learns a much more peaked distribution and assigns a much higher probability mass to its greedy decoding result than the base model. When evaluated under the base model, the translations from trainable greedy decoding have smaller likelihood than the translations from greedy decoding using the base model for both datasets. This indicates that the trainable greedy decoder is able to find a sequence that is not highly scored by the underlying model, but that corresponds to a high value of the target metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.2"
},
{
"text": "We also record the L_2 norm of the action, decoder hidden state, and attentional source context vectors on the validation set. Figure 2 shows these values over the course of training on the IWSLT16 De-En validation set with Transformer. The norm of the action starts small, increases rapidly early in training, and converges to a value well below that of the decoder hidden state. This suggests that the action adjusts the decoder's hidden state only slightly, rather than overwriting it.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 135,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Magnitude of Action Vector",
"sec_num": null
},
{
"text": "Actor Architecture Figure 3 shows the trainable greedy decoding result on IWSLT16 De-En validation set with different actor architectures. We observe that our approach is stable across different actor architectures and is relatively insensitive to the hyperparameters of the actor. For the same type of actor, the performance increases gradually with the hidden layer size. The use of a recurrent connection within the actor does not meaningfully improve performance, possibly since all actors can use the recurrent connections of the underlying decoder. Since the gate actor contains no additional hyperparameters and was observed to learn quickly and reliably, we use it in all other experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effects of Model Settings",
"sec_num": "4.3"
},
{
"text": "Here, we also explore a simple alternative to the use of the actor: creating a pseudo-parallel corpus with each model, and then training each model, unmodified and entirety, directly on this new corpus. This experiment (cont. in Figure 3 ) yields results that are comparable to, but not better than, the results seen with the actors. However, this comes with substantially greater computational complexity at training time, and, if the same trained model is to be optimized for multiple target metrics, greater storage costs as well. Beam Size Figure 4a shows the effect of the beam size used to generate the pseudo-parallel corpus on the IWSLT16 De-En validation set with Transformer. Trainable greedy decoding improves over greedy decoding even when we set k = 1, namely, running greedy decoding on the unaugmented model to construct the new training corpus. With increased beam size k, the BLEU score consistently increases, but we observe diminishing returns beyond roughly k = 35, and we use that value elsewhere.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 544,
"end": 553,
"text": "Figure 4a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Model Settings",
"sec_num": "4.3"
},
{
"text": "There are a variety of ways one might use the output of beam search to construct a pseudo-parallel corpus: We could use the single highest-scoring output (by BLEU, or our target metric) for each input (top1), use all 35 beam search outputs (full), use all those outputs that score higher than the threshold, namely the base model's greedy decoding output (thd), or combine the top1 results with the goldstandard translations (comb.). We show the effect of training corpus construction in Figure 4b . para denotes the baseline approach of training the actor with the original parallel corpus used to train the underlying NMT model. Among the four novel approaches, full obtains the worst performance, since the beam search outputs contain translations that are far from the gold-standard translation. We choose the best performing top1 strategy. Table 5 : Results when trained with different decoding objectives on IWSLT16 De-En translation using Transformer. MTR denotes METEOR. We report greedy decoding and beam search (k = 4) results using the original model, and results with trainable greedy decoding (lower half).",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 497,
"text": "Figure 4b",
"ref_id": null
},
{
"start": 845,
"end": 852,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Corpus Construction",
"sec_num": null
},
{
"text": "Decoding Objectives As our approach is capable of using an arbitrary decoding objective, we investigate the effect of different objectives on BLEU, METEOR (MTR) and TER scores with Transformer for IWSLT16 De-En translation. Table 5 shows the final result on the test set. When trained with one objective, our model yields relatively good performance on that objective. For example, negative sentence-level TER (i.e., -sTER) leads to -3.0 TER improvement over greedy decoding and -0.5 TER improvement over beam search. However, since these objectives are all well correlated with each other, training with different objectives do not differ dramatically.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Corpus Construction",
"sec_num": null
},
{
"text": "Data Distillation Our work is directly inspired by work on knowledge distillation, which uses a similar pseudo-parallel corpus strategy, but aims at training a compact model to approximate the function learned by a larger model or an ensemble of models (Hinton et al., 2015) . Kim and Rush (2016) introduce knowledge distillation in the context of NMT, and show that a smaller student network can be trained to achieve similar performance to a teacher model by learning from pseudo-corpus generated by the teacher model. Zhang et al. (2017) propose a new strategy to generate a pseudo-corpus, namely, fast sequenceinterpolation based on the greedy output of the teacher model and the parallel corpus. Freitag et al. (2017) extend knowledge distillation on an ensemble and oracle BLEU teacher model. However, all these approaches require the expensive procedure of retraining the full student network.",
"cite_spans": [
{
"start": 253,
"end": 274,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 277,
"end": 296,
"text": "Kim and Rush (2016)",
"ref_id": "BIBREF21"
},
{
"start": 521,
"end": 540,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF43"
},
{
"start": 701,
"end": 722,
"text": "Freitag et al. (2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Pseudo-Parallel Corpora in Statistical MT Pseudo-parallel corpora generated from beam search have been previously used in statistical Figure 4 : (a) The effect of beam size on the IWSLT16 De-En validation with Transformer and (b) the effect of the training corpus composition in the same setting. para: parallel corpus; full: all 35 beam search outputs; thd: beam search outputs that score higher than the base model's greedy decoding output; top1: beam search output with the highest bleu score; comb.: top1+para. 0.0 corresponds to 33.04 BLEU.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "machine translation (SMT) (Chiang, 2012; Gao and He, 2013; Auli and Gao, 2014; Dakwale and Monz, 2016) . Gao and He (2013) integrate a recurrent neural network language model as an additional feature into a trained phrase-based SMT system and train it by maximizing the expected BLEU on k-best list from the underlying model. Our work revisits a similar idea in the context trainable greedy decoding for neural MT.",
"cite_spans": [
{
"start": 26,
"end": 40,
"text": "(Chiang, 2012;",
"ref_id": "BIBREF7"
},
{
"start": 41,
"end": 58,
"text": "Gao and He, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 59,
"end": 78,
"text": "Auli and Gao, 2014;",
"ref_id": "BIBREF1"
},
{
"start": 79,
"end": 102,
"text": "Dakwale and Monz, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 105,
"end": 122,
"text": "Gao and He (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Decoding for Multiple Objectives Several works have proposed to incorporate different decoding objectives into training. Ranzato et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "and Bahdanau et al. (2016) use reinforcement learning to achieve this goal. Shen et al. (2016) and Norouzi et al. (2016) train the model by defining an objective-dependent loss function. Wiseman and Rush (2016) propose a learning algorithm tailored for beam search. Unlike these works that optimize the entire model, introduce an additional network that predicts an arbitrary decoding objective given a source sentence and a prefix of translation. This prediction is used as an auxiliary score in beam search. All of these methods focus primarily on improving beam search results, rather than those with greedy decoding.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "Bahdanau et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 76,
"end": 94,
"text": "Shen et al. (2016)",
"ref_id": "BIBREF35"
},
{
"start": 99,
"end": 120,
"text": "Norouzi et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 187,
"end": 210,
"text": "Wiseman and Rush (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "This paper introduces a novel method, based on an automatically-generated pseudo-parallel corpus, for training an actor-augmented decoder to optimize for greedy decoding. Experiments on three models and three datasets show that the training strategy makes it possible to substantially improve the performance of an arbitrary neural sequence decoder on any reasonable translation metric in either greedy or beam-search decoding, all with only a few trained parameters and minimal additional training time. As our model is agnostic to both the model architecture and the target metric, we see the exploration of more diverse and ambitious model-target metric pairs as a clear avenue for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/jhclark/multeval 5 https://s3.amazonaws.com/fairseqpy/models/wmt14.v2.en-de.fconv-py.tar.bz2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/fairseq-py 8 https://github.com/salesforce/nonauto-nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partly supported by Samsung Advanced Institute of Technology (Next Generation Deep Learning: from pattern recognition to AI), Samsung Electronics (Improving Deep Learning using Latent Structure) and the Facebook Low Resource Neural Machine Translation Award. KC thanks support by eBay, TenCent, NVIDIA and CIFAR. This project has also benefited from financial support to SB by Google and Tencent Holdings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. In International Conference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Decoder integration and expected bleu training for recurrent neural network language models",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "136--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli and Jianfeng Gao. 2014. Decoder inte- gration and expected bleu training for recurrent neu- ral network language models. In Proceedings of the 52nd Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 136-142, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An actor-critic algorithm for sequence prediction",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Philemon",
"middle": [],
"last": "Brakel",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"Joseph"
],
"last": "Lowe",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.07086"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Joseph Lowe, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. preprint arXiv:1607.07086.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representa- tions (ICLR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Audio chord recognition with recurrent neural networks",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Boulanger-Lewandowski",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th International Society for Music Information Retrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. 2013. Audio chord recognition with recurrent neural networks. In Proceedings of the 14th International Society for Music Information Retrieval Conference.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Massive exploration of neural machine translation architectures",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Goldie",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1442--1451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1442-1451, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A teacher-student framework for zeroresource neural machine translation",
"authors": [
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1925--1935",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zero- resource neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1925-1935. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hope and fear for discriminative training of statistical translation models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Machine Learning Research",
"volume": "13",
"issue": "",
"pages": "1159--1187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. Journal of Machine Learning Research, 13(Apr):1159-1187.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Noisy parallel approximate decoding for conditional recurrent language model",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.03835"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho. 2016. Noisy parallel approximate decoding for conditional recurrent language model. preprint arXiv:1605.03835.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving statistical machine translation performance by oracle-bleu model re-estimation",
"authors": [
{
"first": "Praveen",
"middle": [],
"last": "Dakwale",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "38--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Praveen Dakwale and Christof Monz. 2016. Improv- ing statistical machine translation performance by oracle-bleu model re-estimation. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), volume 2, pages 38-44.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Universal transformers",
"authors": [
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.03819"
]
},
"num": null,
"urls": [],
"raw_text": "Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and \u0141ukasz Kaiser. 2018. Univer- sal transformers. preprint arXiv:1807.03819.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics",
"authors": [
{
"first": "George",
"middle": [],
"last": "Doddington",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the second international conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "138--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proceedings of the second international conference on Human Language Tech- nology Research, pages 138-145. Morgan Kauf- mann Publishers Inc.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ensemble distillation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. preprint arXiv:702.01802.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Training mrfbased phrase translation models using gradient ascent",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "450--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao and Xiaodong He. 2013. Training mrf- based phrase translation models using gradient as- cent. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 450-459, Atlanta, Georgia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional se- quence to sequence learning. In International Con- ference on Machine Learning.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sequence transduction with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2012. Sequence transduction with recur- rent neural networks. preprint arXiv:211.3711.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor O. K. Li, and Richard Socher. 2018. Non- autoregressive neural machine translation. In Pro- ceedings of International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Trainable greedy decoding for neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1968--1978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Kyunghyun Cho, and Victor O.K. Li. 2017. Trainable greedy decoding for neural machine trans- lation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1968-1978, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. preprint arXiv:1503.02531.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "When to finish? optimal beam search for neural text generation (modulo beam size)",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2134--2139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Kai Zhao, and Mingbo Ma. 2017. When to finish? optimal beam search for neural text gen- eration (modulo beam size). In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2134-2139.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1317--1327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstra- tions, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 28-39, Vancouver. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The meteor metric for automatic evaluation of machine translation. Machine translation",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Denkowski",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "23",
"issue": "",
"pages": "105--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Michael J Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. Machine translation, 23(2-3):105-115.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A simple, fast diverse decoding algorithm for neural generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.08562"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A sim- ple, fast diverse decoding algorithm for neural gen- eration. preprint arXiv:1611.08562.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning to decode for future success",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.06549"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Daniel Jurafsky. 2017. Learning to decode for future success. preprint arXiv:1701.06549.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Reward augmented maximum likelihood for neural structured prediction",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Zhifeng Chen",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schuurmans",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "1723--1731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Norouzi, Samy Bengio, zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented max- imum likelihood for neural structured prediction. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 29, pages 1723-1731. Curran Associates, Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.04304"
]
},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. preprint arXiv:1705.04304.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Aurelio",
"middle": [],
"last": "Marc",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06732"
]
},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. preprint arXiv:1511.06732.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Minimum risk training for neural machine translation",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1683--1692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1683-1692, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Later-stage minimum bayes-risk decoding for neural machine translation",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.03169"
]
},
"num": null,
"urls": [],
"raw_text": "Raphael Shu and Hideki Nakayama. 2017. Later-stage minimum bayes-risk decoding for neural machine translation. preprint arXiv:1704.03169.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine transla- tion in the Americas, volume 200.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Neural machine translation with reconstruction",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Show and tell: A neural image caption generator",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on",
"volume": "",
"issue": "",
"pages": "3156--3164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural im- age caption generator. In Computer Vision and Pat- tern Recognition (CVPR), 2015 IEEE Conference on, pages 3156-3164. IEEE.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Sequence-to-sequence learning as beam-search optimization",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1296--1306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search opti- mization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1296-1306, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual at- tention. In International Conference on Machine Learning, pages 2048-2057.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Towards compact and fast neural machine translation using a combined method",
"authors": [
{
"first": "Xiaowei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1475--1481",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaowei Zhang, Wei Chen, Feng Wang, Shuang Xu, and Bo Xu. 2017. Towards compact and fast neu- ral machine translation using a combined method. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1475-1481, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "A single step of a generic actor interacting with a decoder of each of three types. The dashed arrows denote an optional recurrent connection in the actor network."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The norms of the three activation vectors on the IWSLT16 De-En validation set with Transformer. Action, Context and State represent the norm of the action, attentional source context vector and decoder hidden state, respectively."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The effect of the actor architecture and hidden state size on trainable greedy decoding results over the IWSLT16 De-En validation set with Transformer (BLEU\u2191), shown with a baseline (cont.) in which the underlying model, rather than the actor, is trained on the pseudo-parallel corpus. The Y-axis starts from 1.0. w.o. indicates an actor with no hidden layer. 0.0 corresponds to 33.04 BLEU."
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">BLEU\u2191</td><td/></tr><tr><td/><td>tg</td><td>tg+beam4</td><td>tg</td><td>tg+beam4</td></tr><tr><td>IWSLT16</td><td colspan=\"2\">De \u2192 En</td><td colspan=\"2\">En \u2192 De</td></tr><tr><td>RNN</td><td>23.59</td><td>25.03</td><td>19.88</td><td>20.72</td></tr><tr><td>ConvS2S</td><td>28.74</td><td>29.50</td><td>24.42</td><td>24.74</td></tr><tr><td>Transformer</td><td>28.36</td><td>28.95</td><td>25.46</td><td>25.89</td></tr><tr><td>WMT15</td><td colspan=\"2\">Fi \u2192 En</td><td colspan=\"2\">En \u2192 Fi</td></tr><tr><td>RNN</td><td>13.02</td><td>13.49</td><td>10.57</td><td>11.04</td></tr><tr><td>ConvS2S</td><td>17.17</td><td>17.51</td><td>14.33</td><td>14.87</td></tr><tr><td>Transformer</td><td>14.49</td><td>14.79</td><td>12.95</td><td>13.45</td></tr><tr><td>WMT14</td><td colspan=\"2\">De \u2192 En</td><td colspan=\"2\">En \u2192 De</td></tr><tr><td>RNN</td><td>24.54</td><td>24.86</td><td>19.89</td><td>20.56</td></tr><tr><td>ConvS2S</td><td>28.56</td><td>28.46</td><td>26.04</td><td>26.08</td></tr><tr><td>Transformer</td><td>26.96</td><td>27.21</td><td>22.31</td><td>21.92</td></tr></table>",
"text": "Generation quality (BLEU) and speed (tokens/sec). Speed is measured for sentence-by-sentence generation without mini-batching on the test set on CPU. We show the result by the underlying model with greedy decoding (greedy), beam search with k = 4 (beam4) and our trainable greedy decoder (tg).",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>greedy</td></tr></table>",
"text": "srcAm Vormittag wollte auch die Arbeitsgruppe Migration und Integration ihre Beratungen fortsetzen . ref During the morning , the Migration and Integration working group also sought to continue its discussions .",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>src</td><td>Ich suche schon seit einiger Zeit eine neue Wohnung fr meinen Mann und mich .</td></tr><tr><td>ref</td><td>I have been looking for a new home for my husband and myself for some time now .</td></tr><tr><td>greedy</td><td>I have been looking for a new apartment for some time for my husband and myself .</td></tr><tr><td>beam4</td><td>I have been looking for a new apartment for some time for my husband and myself .</td></tr><tr><td>beam35</td><td>I have been looking for a new apartment for my husband and myself for some time now .</td></tr><tr><td>tg</td><td>I have been looking for a new apartment for my husband and myself for some time now .</td></tr><tr><td colspan=\"2\">tg+beam4 I have been looking for a new apartment for my husband and myself for some time now .</td></tr></table>",
"text": "tgThe morning , the Migration and Integration Working Group wanted to continue its discussions . tg+beam4 In the morning , the Migration and Integration Working Group wanted to continue its discussions .src Die meisten Mails werden unterwegs mehrfach von Software-Robotern gelesen . ref The majority of e-mails are read several times by software robots en route to the recipient . greedy Most mails are read by software robots on the go . beam4 Most mails are read by software robots on the go . beam35 Most e-mails are read several times by software robots on the road . tg Most mails are read several times by software robots on the road . tg+beam4 Most mails are read several times by software robots on the road .",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "Translation examples from the WMT14 De-En test set with Transformer.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "Word-level likelihood (%) averaged by sentence for the IWSLT16 and WMT14 De-En test sets with Transformer. Each row represents the model used to evaluate word-level likelihood, and each column represents a different source of translations, including the reference (ref), greedy decoding on the base model (greedy), beam search with k = 35 on the base model and the BLEU scorer (k35 ), and trainable greedy decoder (tg).",
"num": null,
"type_str": "table"
}
}
}
}