| { |
| "paper_id": "Q18-1011", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:28.567048Z" |
| }, |
| "title": "Modeling Past and Future for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Zaixiang", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "zhengzx@nlp.nju.edu.cn" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Shujian", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "huangsj@nlp.nju.edu.cn" |
| }, |
| { |
| "first": "Lili", |
| "middle": [], |
| "last": "Mou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Xinyu", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Jiajun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "chenjj@nlp.nju.edu.cn" |
| }, |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "zptu@tencent.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated PAST contents and untranslated FUTURE contents, which are modeled by two additional recurrent layers. The PAST and FUTURE contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate. \u2020 * Equal contributions. \u2020 Our code can be downloaded from https://github.com/zhengzx-nlp/past-and-future-nmt.", |
| "pdf_parse": { |
| "paper_id": "Q18-1011", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated PAST contents and untranslated FUTURE contents, which are modeled by two additional recurrent layers. The PAST and FUTURE contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate. \u2020 * Equal contributions. \u2020 Our code can be downloaded from https://github.com/zhengzx-nlp/past-and-future-nmt.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Neural machine translation (NMT) generally adopts an encoder-decoder framework (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014) , where the encoder summarizes the source sentence into a source context vector, and the decoder generates the target sentence word-by-word based on the given source. During translation, the decoder implicitly serves several functionalities at the same time:", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 111, |
| "text": "(Kalchbrenner and Blunsom, 2013;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 112, |
| "end": 129, |
| "text": "Cho et al., 2014;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 130, |
| "end": 153, |
| "text": "Sutskever et al., 2014)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. Building a language model over the target sentence for translation fluency (LM). 2. Acquiring the most relevant source-side information to generate the current target word (PRESENT).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "been translated (PAST) and what parts have not (FUTURE). However, it may be difficult for a single recurrent neural network (RNN) decoder to accomplish these functionalities simultaneously. A recent successful extension of NMT models is the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) , which makes a soft selection over source words and yields an attentive vector to represent the most relevant source parts for the current decoding state. In this sense, the attention mechanism separates the PRESENT functionality from the decoder RNN, achieving significant performance improvement.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 284, |
| "text": "(Bahdanau et al., 2015;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 285, |
| "end": 304, |
| "text": "Luong et al., 2015)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maintaining what parts in the source have", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In addition to PRESENT, we address the importance of modeling PAST and FUTURE contents in machine translation. The PAST contents indicate translated information, whereas the FUTURE contents indicate untranslated information, both being crucial to NMT models, especially to avoid under-translation and over-translation (Tu et al., 2016) . Ideally, PAST grows and FUTURE declines during the translation process. However, it may be difficult for a single RNN to explicitly model the above processes.", |
| "cite_spans": [ |
| { |
| "start": 318, |
| "end": 335, |
| "text": "(Tu et al., 2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maintaining what parts in the source have", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In this paper, we propose a novel neural machine translation system that explicitly models PAST and FUTURE contents with two additional RNN layers. The RNN modeling the PAST contents (called PAST layer) starts from scratch and accumulates the information that is being translated at each decoding step (i.e., the PRESENT information yielded by attention). The RNN modeling the FUTURE contents (called FUTURE layer) begins with a holistic source summarization, and subtracts the PRESENT information at each step. The two processes are guided by the proposed auxiliary objectives. Intuitively, the RNN state of the PAST layer corresponds to source contents that have been translated at a particular step, and the RNN state of the FUTURE layer corresponds to source contents of untranslated words. At each decoding step, PAST and FUTURE together provide a full summarization of the source information. We then feed the PAST and FUTURE information to both the attention model and decoder states. In this way, our proposed mechanism not only provides coverage information for the attention model, but also gives a holistic view of the source information at each time step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maintaining what parts in the source have", |
| "sec_num": "3." |
| }, |
| { |
| "text": "We conducted experiments on Chinese-English, German-English, and English-German benchmarks. Experiments show that the proposed mechanism yields improvements of 2.7, 1.7, and 1.1 BLEU points in three tasks, respectively. In addition, it obtains an alignment error rate of 35.90%, significantly lower than the baseline (39.73%) and the coverage model (38.73%) by Tu et al. (2016) . We observe that in traditional attention-based NMT, most errors occur due to over- and under-translation, which is probably because the decoder RNN fails to keep track of what has been translated and what has not. Our model can alleviate such problems by explicitly modeling PAST and FUTURE contents.", |
| "cite_spans": [ |
| { |
| "start": 361, |
| "end": 377, |
| "text": "Tu et al. (2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maintaining what parts in the source have", |
| "sec_num": "3." |
| }, |
| { |
| "text": "In this section, we first introduce the standard attention-based NMT, and then motivate our model by several empirical findings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The attention mechanism, proposed in Bahdanau et al. (2015) , yields a dynamic source context vector for the translation at a particular decoding step, modeling PRESENT information as described in Section 1. This process is illustrated in Figure 1 .", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 59, |
| "text": "Bahdanau et al. (2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 239, |
| "end": 247, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Formally, let x = {x 1 , . . . , x I } be a given input sentence. The encoder RNN-generally implemented as a bi-directional RNN (Schuster and Paliwal, 1997 )-transforms the sentence to a sequence ", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 155, |
| "text": "(Schuster and Paliwal, 1997", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "of annotations h = {h 1 , . . . , h I }, with h i = [\u2212\u2192h i ; \u2190\u2212h i ] being the annotation of x i . (\u2212\u2192h i and \u2190\u2212h i refer to the RNN's hidden states in both directions.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Based on the source annotations, another decoder RNN generates the translation by predicting a target word y t at each time step t:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (y t |y <t , x) = softmax(g(y t\u22121 , s t , c t )),", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where g(\u2022) is a non-linear activation, and s t is the decoding state for time step t, computed by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s t = f (y t\u22121 , s t\u22121 , c t ).", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Here f (\u2022) is an RNN activation function, e.g., the Gated Recurrent Unit (GRU) (Cho et al., 2014) and Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) . c t is a vector summarizing relevant source information. It is computed as a weighted sum of the source annotations:", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 97, |
| "text": "(Cho et al., 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 132, |
| "end": 166, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "c t = \u2211 I i=1 \u03b1 t,i \u2022 h i ,", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where the weights (\u03b1 t,i for i = 1, \u2022 \u2022 \u2022 , I) are given by the attention mechanism:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 t,i = softmax(a(s t\u22121 , h i )).", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Here, a(\u2022) is a scoring function, measuring the degree to which the decoding state and the source information match each other. Intuitively, the attention-based decoder selects source annotations that are most relevant to the decoder state, based on which the current target word is predicted. In other words, c t represents the source information for the PRESENT translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The decoder RNN is initialized with the summarization of the entire source sentence [\u2212\u2192h I ; \u2190\u2212h 1 ], given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s 0 = tanh(W s [\u2212\u2192h I ; \u2190\u2212h 1 ]).", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "After we analyze existing attention-based NMT in detail, our intuition arises as follows. Ideally, with the source summarization in mind, after generating each target word y t from the source contents c t , the decoder should keep track of (1) translated source contents by accumulating c t , and (2) untranslated source contents by subtracting c t from the source summarization. However, such information is not well learned in practice, as there is no explicit mechanism to maintain translated and untranslated contents. Evidence shows that attention-based NMT still suffers from serious over- and under-translation problems (Tu et al., 2016; Tu et al., 2017b) . Examples of under-translation are shown in Table 1a .", |
| "cite_spans": [ |
| { |
| "start": 627, |
| "end": 644, |
| "text": "(Tu et al., 2016;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 645, |
| "end": 662, |
| "text": "Tu et al., 2017b)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 708, |
| "end": 716, |
| "text": "Table 1a", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another piece of evidence also shows that the decoder may lack a holistic view of the source information, as explained below. We conduct a pilot experiment by removing the initialization of the RNN decoder. If the \"holistic\" context is well exploited by the decoder, translation performance would significantly decrease without the initialization. As shown in Table 1b , however, translation performance only decreases slightly after we remove the initialization. This indicates that NMT decoders do not make full use of the source summarization; the initialization only helps the prediction at the beginning of the sentence. We attribute the vanishing of such signals to the overloaded use of decoder states (e.g., LM, PAST, and FUTURE functionalities), and hence we propose to explicitly model the holistic source summarization by PAST and FUTURE contents at each decoding step.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 360, |
| "end": 368, |
| "text": "Table 1b", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our research is built upon an attention-based sequence-to-sequence model (Bahdanau et al., 2015) , but is also related to coverage modeling, future modeling, and functionality separation. We discuss these topics in the following.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 96, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Coverage Modeling. Tu et al. (2016) and Mi et al. (2016) maintain a coverage vector to indicate which source words have been translated and which source words have not. These vectors are updated by accumulating attention probabilities at each decoding step, which provides an opportunity for the attention model to distinguish translated source words from untranslated ones. Following this idea, and viewing coverage vectors as a (soft) indicator of translated source contents, we take one step further: we model translated and untranslated source contents by directly manipulating the attention vector (i.e., the source contents that are being translated) instead of the attention probability (i.e., the probability of a source word being translated).", |
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 35, |
| "text": "Tu et al. (2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 40, |
| "end": 56, |
| "text": "Mi et al. (2016)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In addition, we explicitly model both translated (with PAST-RNN) and untranslated (with FUTURE-RNN) contents instead of using a single coverage vector to indicate translated source words. The difference from Tu et al. (2016) is that the PAST and FUTURE contents in our model are fed not only to the attention mechanism but also to the decoder's states.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 224, |
| "text": "Tu et al. (2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In the context of semantic-level coverage, Wang et al. (2016) and Meng et al. (2016) propose a memory-enhanced attention model. Both implement the memory with a Neural Turing Machine (Graves et al., 2014) , in which the reading and writing operations are expected to erase translated contents and highlight untranslated contents. However, their models lack an explicit objective to guide such intuition, which is one of the key ingredients for the success of this work. In addition, we use two separate layers to explicitly model translated and untranslated contents, which is another distinguishing feature of the proposed approach.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 84, |
| "text": "Meng et al. (2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 183, |
| "end": 204, |
| "text": "(Graves et al., 2014)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Future Modeling. Standard neural sequence decoders generate target sentences from left to right, thus failing to estimate some desired properties in the future (e.g., the length of target sentence). To address this problem, actor-critic algorithms are employed to predict future properties (Bahdanau et al., 2017) . In their models, an interpolation of the actor (the standard generation policy) and the critic (a value function that estimates the future values) is used for decision making. Concerning the future generation at each decoding step, Weng et al. (2017) guide the decoder's hidden states to not only generate the current target word, but also predict the target words that remain untranslated. Along the direction of future modeling, we introduce a FUTURE layer to maintain the untranslated source contents, which is updated at each decoding step by subtracting the source content being translated (i.e., attention vector) from the last state (i.e., the untranslated source content so far).", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 313, |
| "text": "Bahdanau et al., 2017)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 548, |
| "end": 566, |
| "text": "Weng et al. (2017)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Functionality Separation. Recent work has revealed that the overloaded use of representations makes model training difficult, and such problems can be alleviated by explicitly separating these functions (Reed and Freitas, 2015; Ba et al., 2016; Miller et al., 2016; Gulcehre et al., 2016; Rockt\u00e4schel et al., 2017) . For example, Miller et al. (2016) separate the functionality of look-up keys and memory contents in memory networks (Sukhbaatar et al., 2015) . Rockt\u00e4schel et al. (2017) propose a keyvalue-predict attention model, which outputs three vectors at each step: the first is used to predict the next-word distribution; the second serves as the key for decoding; and the third is used for the attention mechanism. In this work, we further separate PAST and FUTURE functionalities from the decoder's hidden representations.", |
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 227, |
| "text": "(Reed and Freitas, 2015;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 228, |
| "end": 244, |
| "text": "Ba et al., 2016;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 245, |
| "end": 265, |
| "text": "Miller et al., 2016;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 266, |
| "end": 288, |
| "text": "Gulcehre et al., 2016;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 289, |
| "end": 314, |
| "text": "Rockt\u00e4schel et al., 2017)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 330, |
| "end": 350, |
| "text": "Miller et al. (2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 433, |
| "end": 458, |
| "text": "(Sukhbaatar et al., 2015)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 461, |
| "end": 486, |
| "text": "Rockt\u00e4schel et al. (2017)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section, we describe how to separate PAST and FUTURE functions from decoding states. We introduce two additional RNN layers ( Figure 2 ):", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 134, |
| "end": 142, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE for Neural Machine Translation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 FUTURE Layer (Section 4.1) encodes source contents to be translated. \u2022 PAST Layer (Section 4.2) encodes translated source contents. Let us take y = {y 1 , y 2 , y 3 , y 4 } as an example of the target sentence. The initial state of the FUTURE layer is a summarization of the whole source sentence, indicating that all source contents need to be translated. The initial state of the PAST layer is an all-zero vector, indicating that no source content is yet", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE for Neural Machine Translation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "[Figure 3 diagram. Legend: neural network layer; \u2715 element-wise multiplication; + element-wise addition; \u2212 projected minus. Panels: (a) GRU, (b) GRU-o, (c) GRU-i.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE for Neural Machine Translation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Figure 3: Variants of activation functions for the FUTURE layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE for Neural Machine Translation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "After c 1 is obtained by the attention mechanism, we (1) update the FUTURE layer by \"subtracting\" c 1 from the previous state, and (2) update the PAST layer state by \"adding\" c 1 to the previous state. The two RNN states are updated as described above at every step of generating y 1 , y 2 , y 3 , and y 4 . In this way, at each time step, the FUTURE layer encodes source contents to be translated in the future steps, while the PAST layer encodes translated source contents up to the current step.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "translated.", |
| "sec_num": null |
| }, |
| { |
| "text": "The advantages of the PAST and the FUTURE layers are two-fold. First, they provide coverage information, which is fed to the attention model and guides NMT systems to pay more attention to untranslated source contents. Second, they provide a holistic view of the source information, since we would anticipate \"PAST + FUTURE = HOLISTIC.\" We describe them in detail in the rest of this section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "translated.", |
| "sec_num": null |
| }, |
| { |
| "text": "Formally, the FUTURE layer is a recurrent neural network (the first gray layer in Figure 2 ) , and its state at time step t is computed by", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 82, |
| "end": 90, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s F t = F(s F t\u22121 , c t ),", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where F is the activation function for the FUTURE layer. We have several variants of F, aiming to better model the expected subtraction, as described in Section 4.1.1. The FUTURE RNN is initialized with the summarization of the whole source sentence, as computed by Equation 5. When calculating the attention context at time step t, we feed the attention model with the FUTURE state from the last time step, which encodes the source contents still to be translated. We rewrite Equation 4 as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b1 t,i = softmax(a(s t\u22121 , h i , s F t\u22121 )).", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "After obtaining attention context c t , we update FUTURE states via Equation 6, and feed both of them to decoder states:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s t = f (s t\u22121 , y t\u22121 , c t , s F t ),", |
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where c t encodes the source context of the present translation, and s F t encodes the source context on the future translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling FUTURE", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We design several variants of RNN activation functions to better model the subtractive operation (Figure 3 ): GRU. A natural choice is the standard GRU 1 , which learns subtraction directly from the data:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 97, |
| "end": 106, |
| "text": "(Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s F t = GRU(s F t\u22121 , c t ) (9) = u t \u2022 s F t\u22121 + (1 \u2212 u t ) \u2022 s\u0303 F t ; s\u0303 F t = tanh(U (r t \u2022 s F t\u22121 ) + W c t );", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "r t = \u03c3(U r s F t\u22121 + W r c t ); (11) u t = \u03c3(U u s F t\u22121 + W u c t ),", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "where r t is a reset gate determining the combination of the input with the previous state, and u t is an update gate defining how much of the previous state to keep around. The standard GRU uses a feed-forward neural network (Equation 10) to model the subtraction without any explicit operation, which may make the training difficult. In the following two variants, we provide GRU with explicit subtraction operations, which are inspired by the well-known phenomenon that the minus operation can be applied to the semantics of word embeddings (Mikolov et al., 2013) . Therefore we subtract the semantics being translated from the untranslated FUTURE contents at each decoding step.", |
| "cite_spans": [ |
| { |
| "start": 544, |
| "end": 566, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "GRU with Outside Minus (GRU-o). Instead of directly feeding c t to GRU, we compute the current untranslated contents M(s F t\u22121 , c t ) with an explicit minus operation, and then feed it to GRU:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s F t = GRU(s F t\u22121 , M(s F t\u22121 , c t )); (13) M(s F t\u22121 , c t ) = tanh(U m s F t\u22121 \u2212 W m c t ).", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "GRU with Inside Minus (GRU-i). We can alternatively integrate a minus operation into the calculation of s\u0303 F t :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\tilde{s}^F_t = \\tanh\\big(U s^F_{t-1} - W (r_t \\odot c_t)\\big).", |
| "eq_num": "(15)" |
| } |
| ], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "Compared with Equation 10, GRU-i differs from the standard GRU in two ways: 1. A minus operation is applied to produce the energy of the intermediate candidate state \\tilde{s}^F_t; 2. The reset gate r_t controls the amount of information flowing from the input instead of from the previous state s^F_{t-1}. Note that for both GRU-o and GRU-i, we leave enough freedom for the GRU to decide the extent to which the subtraction operation is applied. In other words, the information subtraction is \"soft.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Activation Functions for Subtraction", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "Formally, the PAST layer is another recurrent neural network (the second gray layer in Figure 2 ), and its state at time step t is calculated by:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 87, |
| "end": 95, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling PAST", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s^P_t = \\mathrm{GRU}(s^P_{t-1}, c_t).", |
| "eq_num": "(16)" |
| } |
| ], |
| "section": "Modeling PAST", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Initially, s^P_0 is an all-zero vector, denoting that no source content has yet been translated. We choose GRU as the activation function for the PAST layer, since the internal structure of GRU accords with the \"addition\" operation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We feed the PAST state from the previous time step to both the attention model and the decoder state:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\alpha_{t,i} = \\mathrm{softmax}\\big(a(s_{t-1}, h_i, s^P_{t-1})\\big); \\quad (17) \\qquad s_t = f(s_{t-1}, y_{t-1}, c_t, s^P_{t-1}).", |
| "eq_num": "(18)" |
| } |
| ], |
| "section": "Modeling PAST", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We integrate PAST and FUTURE layers together in our final model ( Figure 2) :", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 66, |
| "end": 75, |
| "text": "Figure 2)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\\alpha_{t,i} = \\mathrm{softmax}\\big(a(s_{t-1}, h_i, s^F_{t-1}, s^P_{t-1})\\big);", |
| "eq_num": "(19)" |
| } |
| ], |
| "section": "Modeling PAST and FUTURE", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "s_t = f(s_{t-1}, y_{t-1}, c_t, s^F_{t-1}, s^P_{t-1}). (20)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In this way, both the attention model and the decoder state are aware of what has, and what has not yet been translated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling PAST and FUTURE", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We introduce additional loss functions to estimate the semantic subtraction and addition, which guide the training of the FUTURE layer and PAST layer, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Loss Function for Subtraction. As described above, the FUTURE layer models the future semantics in a declining way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\\Delta^F_t = s^F_{t-1} - s^F_t \\approx c_t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Since the source and target sides contain equivalent semantic information in machine translation (Tu et al., 2017a): c_t \\approx E(y_t), we directly measure the consistency between \\Delta^F_t and E(y_t), which guides the subtraction to learn the right thing:", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 111, |
| "text": "(Tu et al., 2017a)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "loss(\\Delta^F_t, E(y_t)) = -\\log \\frac{\\exp l(\\Delta^F_t, E(y_t))}{\\sum_y \\exp l(\\Delta^F_t, E(y))}; \\quad l(u, v) = u^\\top W v + b.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In other words, we explicitly guide the FUTURE layer with this subtractive loss, expecting \\Delta^F_t to be discriminative of the current word y_t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Loss Function for Addition. Likewise, we introduce another loss function to measure the information incrementation of the PAST layer. Notice that", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\\Delta^P_t = s^P_t - s^P_{t-1} \\approx c_t, which is defined similarly to \\Delta^F_t except for a minus sign.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Training Objective. We train the proposed model \\theta on a set of training examples \\{[x^n, y^n]\\}_{n=1}^N, and the training objective is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\\hat{\\theta} = \\arg\\min_{\\theta} \\sum_{n=1}^{N} \\sum_{t=1}^{|y|} -\\log P(y_t \\mid y_{<t}, x; \\theta)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "neg. log-likelihood", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "+ \\underbrace{loss(\\Delta^F_t, E(y_t) \\mid \\theta)}_{\\text{FUTURE loss}} + \\underbrace{loss(\\Delta^P_t, E(y_t) \\mid \\theta)}_{\\text{PAST loss}}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Dataset. We conduct experiments on Chinese-English (Zh-En), German-English (De-En), and English-German (En-De) translation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For Zh-En, the training set consists of 1.6M sentence pairs, which are extracted from the LDC corpora 3 . The NIST 2003 (MT03) dataset is our development set; the NIST 2002 (MT02), 2004 (MT04), 2005 (MT05), and 2006 (MT06) datasets are our test sets. We also evaluate alignment performance on the standard benchmark of Liu and Sun (2015), which contains 900 manually aligned sentence pairs. We measure alignment quality with the alignment error rate (Och and Ney, 2003).", |
| "cite_spans": [ |
| { |
| "start": 315, |
| "end": 333, |
| "text": "Liu and Sun (2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 451, |
| "end": 470, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For De-En and En-De, we conduct experiments on the WMT17 (Bojar et al., 2017) corpus, which consists of 5.6M sentence pairs. We use newstest2016 as our development set and newstest2017 as our test set. Following Sennrich et al. (2017a), we segment both German and English words into subwords using byte-pair encoding (Sennrich et al., 2016, BPE).", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 77, |
| "text": "(Bojar et al., 2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 322, |
| "end": 350, |
| "text": "(Sennrich et al., 2016, BPE)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We measure the translation quality with BLEU scores (Papineni et al., 2002) . We use the multi-bleu script for Zh-En 4 , and the multi-bleu-detok script for De-En and En-De 5 .", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 75, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Training Details. We use Nematus 6 (Sennrich et al., 2017b) to implement our baseline translation system, RNNSEARCH. For Zh-En, we limit the vocabulary size to 30K. For De-En and En-De, the number of joint BPE operations is 90,000, and we use the full joint BPE vocabulary for each side.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We tie the weights of the target-side embeddings and the output weight matrix (Press and Wolf, 2017) for De-En. All out-of-vocabulary words are mapped to a special token UNK.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 100, |
| "text": "(Press and Wolf, 2017)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We train each model on sentences of up to 50 words in the training data. The dimension of the word embeddings is 512, and all hidden sizes are 1024. In training, we set the batch size to 80 for Zh-En, and 64 for De-En and En-De. In testing, we set the beam size to 12. We shuffle the training corpus after each epoch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We use Adam (Kingma and Ba, 2014) with annealing (Denkowski and Neubig, 2017) as our optimization algorithm. We set the initial learning rate to 0.0005, which is halved whenever the validation cross-entropy does not decrease.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 77, |
| "text": "(Denkowski and Neubig, 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For the proposed model, we use the same settings as the baseline model. The FUTURE and PAST layer sizes are 1024. We employ a two-pass strategy for training the proposed model, which has proven useful for easing training when the model is relatively complicated (Shen et al., 2016; Wang et al., 2017; Wang et al., 2018). Model parameters shared with the baseline are initialized from the baseline model.", |
| "cite_spans": [ |
| { |
| "start": 269, |
| "end": 288, |
| "text": "(Shen et al., 2016;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 289, |
| "end": 307, |
| "text": "Wang et al., 2017;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 308, |
| "end": 326, |
| "text": "Wang et al., 2018)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We first evaluate the proposed model on the Chinese-English translation and alignment tasks. Table 2 shows the translation performance on Chinese-English. The proposed approach clearly and significantly improves translation quality in all cases, although there are still considerable differences among the variants. FUTURE Layer (Rows 1-4). All the activation functions for the FUTURE layer obtain BLEU score improvements: GRU +0.52, GRU-o +1.03, and GRU-i +1.12. Specifically, GRU-o is better than Table 2 : Case-insensitive BLEU on Chinese-English Translation. \"LOSS\" means applying loss functions for the FUTURE layer (FRNN) and the PAST layer (PRNN).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 100, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 337, |
| "end": 346, |
| "text": "(Rows 1-4", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 508, |
| "end": 515, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Chinese-English", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "a regular GRU owing to its minus operation, and GRU-i is the best, which shows that our elaborately designed architecture is better suited to modeling the declining nature of the future semantics. Adding the subtractive loss gives an extra 0.68 BLEU improvement, which indicates that it is a beneficial guidance objective for the FRNN to learn the minus operation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Quality", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "PAST Layer. (Rows 5-6). We observe the same trend on introducing the PAST layer: using it alone achieves a significant improvement (+1.19), and with the additional objective, it further improves the translation performance (+0.57).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Quality", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Stacking the FUTURE and the PAST Together (Rows 7-8). The final architecture, which combines the FRNN and PRNN, outperforms our intermediate models (Rows 1-6). By further separating the functionalities of past content modeling and language modeling into different neural components, the final model is more flexible, obtaining a 0.91 BLEU improvement over the best intermediate model (Row 4) and an improvement of 2.71 BLEU points over the RNNSEARCH baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Quality", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Comparison with Other Work (Rows 9-11). We also conduct experiments with multi-layer decoders to see whether the NMT system can automatically model the translated and untranslated contents with additional decoder layers (Rows 9-10). However, we find that the performance does not improve with a two-layer decoder (Row 9), and improves only with a deeper version (three-layer decoder, Row 10). This indicates that enhancing performance by simply adding more RNN layers to the decoder, without any explicit guidance, is nontrivial, which is consistent with the observation of Britz et al. (2017).", |
| "cite_spans": [ |
| { |
| "start": 570, |
| "end": 589, |
| "text": "Britz et al. (2017)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Quality", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Our model also outperforms the word-level COVERAGE model (Tu et al., 2016), which considers the coverage information of each source word independently. Our proposed model can be regarded as a high-level coverage model: it captures higher-level coverage information and gives more specific signals for attention and target-word prediction. Our model is also more deeply involved in generating target words, since its states are fed not only to the attention model, as in Tu et al. (2016), but also to the decoder state.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 69, |
| "text": "(Tu et al., 2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 463, |
| "end": 479, |
| "text": "Tu et al. (2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Quality", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Following Tu et al. (2016), we conduct subjective evaluations to validate the benefit of modeling the PAST and the FUTURE (Table 3). Four human evaluators are asked to evaluate the translations of 100 source sentences, which are randomly sampled from the test sets, without knowing which system each translation comes from. For the BASE system, 1.7% of the source words are over-translated and 8.8% are under-translated. Our proposed model alleviates these problems by explicitly modeling the dynamic Table 3 : Subjective evaluation on over- and under-translation for Chinese-English. \"Ratio\" denotes the percentage of source words which are over- or under-translated; \"\u2206\" indicates the relative improvement. \"BASE\" denotes RNNSEARCH and \"OURS\" denotes \"+ FRNN (GRU-i) + PRNN + LOSS\".", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 26, |
| "text": "Tu et al. (2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 123, |
| "end": 132, |
| "text": "(Table 3)", |
| "ref_id": null |
| }, |
| { |
| "start": 506, |
| "end": 513, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Subjective Evaluation", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "source contents with the PAST and the FUTURE layers, reducing over-translation and under-translation errors by 11.8% and 35.2%, respectively. The proposed model is especially effective at alleviating under-translation, which is the more serious problem for NMT systems and is mainly caused by the lack of necessary coverage information (Tu et al., 2016). Table 4 lists the alignment performance of our proposed model. We find that the COVERAGE model does improve the attention model, but our model produces much better alignments than the word-level coverage model (Tu et al., 2016). Our model distinguishes the PAST and FUTURE directly, which is a higher-level coverage mechanism than the word coverage model. Table 4 : Evaluation of the alignment quality. The lower the score, the better the alignment quality.", |
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 371, |
| "text": "(Tu et al., 2016)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 585, |
| "end": 602, |
| "text": "(Tu et al., 2016)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 374, |
| "end": 381, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 732, |
| "end": 739, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Subjective Evaluation", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "We also evaluate our model on the WMT17 benchmarks for both De-En and En-De. As shown in Table 5, our baseline gives BLEU scores comparable to the state-of-the-art NMT systems of WMT17. Our proposed model improves over this strong baseline on both De-En and En-De, which shows that it works well across different language pairs. Rikters et al. (2017) and Sennrich et al. (2017a) obtain higher BLEU scores than our model, but they use additional large-scale synthetic data (about 10M sentence pairs) for training, so a direct comparison may be unfair.", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 361, |
| "text": "Rikters et al. (2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 366, |
| "end": 389, |
| "text": "Sennrich et al. (2017a)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 89, |
| "end": 97, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on German-English", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We conduct analyses on Zh-En, to better understand our model from different perspectives.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Parameters and Speeds. As shown in Table 6, the baseline model (BASE) has 80M parameters. A single FUTURE or PAST layer introduces 15M to 17M parameters, and the corresponding objective introduces 18M parameters. The most complex model in this work introduces 65M parameters, which leads to a relatively slower training speed. However, our proposed model does not significantly slow down decoding. The most time-consuming part is the calculation of the subtraction and addition losses. As we show in the next paragraph, our system works well when these losses are used only in training, which further improves the decoding speed of our model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 42, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Adding subtraction and addition loss functions helps twofold: (1) guiding the training of the proposed subtraction and addition operation;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effectiveness of Subtraction and Addition Loss.", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) enabling better reranking of generated candidates in testing. Table 7 lists the improvements from the two perspectives. When applied only in training, the two loss functions lead to an improvement of 0.48 BLEU points by better modeling subtraction and addition operations. On top of that, reranking with FUTURE and PAST loss scores in testing further improves the performance by +0.99 BLEU points.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effectiveness of Subtraction and Addition Loss.", |
| "sec_num": null |
| }, |
| { |
| "text": "Initialization of the FUTURE Layer. The baseline model does not obtain a substantial accuracy improvement from feeding the source summarization into the decoder (Table 1). We also experiment with not feeding the source summarization into the decoder of the proposed model, which leads to a significant BLEU score drop on Zh-En. This shows that, by explicitly modeling the FUTURE, our proposed model makes better use of the source summarization than the conventional encoder-decoder baseline.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 154, |
| "end": 163, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effectiveness of Subtraction and Addition Loss.", |
| "sec_num": null |
| }, |
| { |
| "text": "Case Study. We also compare translation cases for the baseline, the word-level coverage model, and our proposed model. As shown in Table 9, our baseline system suffers from over-translation problems (Case 1), which is consistent with the results of the human evaluation (Section 5.1.2). The BASE system also incorrectly translates \"the royal family\" into \"the people of hong kong\", which is entirely irrelevant here. We attribute the former error to the lack of modeling of the untranslated future, and the latter to the overloaded use of the decoder state, where the language-modeling role of the decoder leads to fluent but wrong predictions. In contrast, the proposed approach largely fixes the errors in these cases.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 132, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effectiveness of Subtraction and Addition Loss.", |
| "sec_num": null |
| }, |
| { |
| "text": "Modeling source contents well is crucial for encoder-decoder based NMT systems. However, current NMT models struggle to distinguish translated from untranslated source contents, due to the lack of explicit modeling of past and future translations. In this paper, we separate the PAST and FUTURE functionalities from the decoder states, which maintains a dynamic yet holistic view of the source content at each decoding step. Experimental results show that the proposed approach signifi-Source \u5e03\u4ec0 \u8fd8 \u8868\u793a , \u5e94 \u5df4\u57fa\u65af\u5766 \u548c \u5370\u5ea6 \u653f\u5e9c \u7684 \u9080\u8bf7 , \u4ed6 \u5c06 \u4e8e 3\u6708\u4efd \u5bf9 \u5df4\u57fa\u65af\u5766 \u548c \u5370\u5ea6 \u8fdb\u884c \u8bbf\u95ee \u3002 Reference bush also said that at the invitation of the pakistani and indian governments , he would visit pakistan and india in march . BASE bush also said that he would visit pakistan and india in march .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "bush also said that at the invitation of pakistan and india , he will visit pakistan and india in march .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COVERAGE", |
| "sec_num": null |
| }, |
| { |
| "text": "OURS bush also said that at the invitation of the pakistani and indian governments , he will visit pakistan and india in march .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COVERAGE", |
| "sec_num": null |
| }, |
| { |
| "text": "Source \u6240\u4ee5 \u6709 \u4e0d\u5c11 \u4eba \u8ba4\u4e3a \u8bf4 , \u5982\u679c \u662f \u8fd9\u6837 \u7684 \u8bdd , \u5bf9 \u7687\u5ba4 \u3001 \u5bf9 \u65e5\u672c \u7684 \u793e\u4f1a \u4e5f \u662f \u4f1a \u6709 \u5f88 \u5927 \u7684 \u5f71\u54cd \u7684 \u3002 Reference therefore , many people say that it will have a great impact on the royal family and japanese society . BASE therefore , many people are of the view that if this is the case , it will also have a great impact on the people of hong kong and the japanese society . COVERAGE therefore , many people think that if this is the case , there will be great impact on the royal and japanese society .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COVERAGE", |
| "sec_num": null |
| }, |
| { |
| "text": "OURS therefore , many people think that if this is the case , it will have a great impact on the royal and japanese society . Table 9 : Comparison on Translation Examples. We italicize some translation errors and highlight the correct ones in bold.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 126, |
| "end": 133, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "COVERAGE", |
| "sec_num": null |
| }, |
| { |
| "text": "cantly improves translation performances across different language pairs. With better modeling of past and future translations, our approach performs much better than the standard attention-based NMT, reducing the errors of under and over translations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "COVERAGE", |
| "sec_num": null |
| }, |
| { |
| "text": "Our work focuses on GRU, but our approach can be applied to any RNN architecture, such as LSTM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "E(\"King\") \u2212 E(\"Man\") = E(\"Queen\") \u2212 E(\"Woman\"), where E(\u2022) is the embedding of a word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this way, we can reasonably assume the FUTURE and PAST layers are indeed doing subtraction and addition, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The corpora include LDC2002E18, LDC2003E07, LDC2003E14, the Hansards portion of LDC2004T07, LDC2004T08, and LDC2005T06. 4 https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/generic/ multi-bleu.perl 5 https://github.com/EdinburghNLP/ nematus/blob/master/data/ multi-bleu-detok.perl", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/EdinburghNLP/nematus", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers as well as the Action Editor, Philipp Koehn, for insightful comments and suggestions. Shujian Huang is the corresponding author. This work is supported by the National Science Foundation of China (No. 61672277, 61772261) and the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 243, |
| "end": 267, |
| "text": "(No. 61672277, 61772261)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": "7" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Using fast weights to attend to the recent past", |
| "authors": [ |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Volodymyr", |
| "middle": [], |
| "last": "Mnih", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [ |
| "Z" |
| ], |
| "last": "Leibo", |
| "suffix": "" |
| }, |
| { |
| "first": "Catalin", |
| "middle": [], |
| "last": "Ionescu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. 2016. Using fast weights to attend to the recent past. In NIPS 2016.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "An actor-critic algorithm for sequence prediction", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Philemon", |
| "middle": [], |
| "last": "Brakel", |
| "suffix": "" |
| }, |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Anirudh", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In ICLR 2017.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Proceedings of the second conference on machine translation", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Buck", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajen", |
| "middle": [], |
| "last": "Chatterjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Federmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Huck", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [ |
| "Jimeno" |
| ], |
| "last": "Yepes", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Kreutzer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ond\u0159ej Bojar, Christian Buck, Rajen Chatterjee, Chris- tian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, and Julia Kreutzer. 2017. Proceedings of the second conference on machine translation. In Proceedings of the Second Conference on Machine Translation. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Massive exploration of neural machine translation architectures", |
| "authors": [ |
| { |
| "first": "Denny", |
| "middle": [], |
| "last": "Britz", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Goldie", |
| "suffix": "" |
| }, |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural ma- chine translation architectures. In EMNLP 2017.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase represen- tations using RNN encoder-decoder for statistical ma- chine translation. In EMNLP 2014.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Stronger baselines for trustable results in neural machine translation", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the First Workshop on Neural Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stronger baselines for trustable results in neural ma- chine translation. In Proceedings of the First Work- shop on Neural Machine Translation.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The TALP-UPC neural machine translation system for German/Finnish-English using the inverse direction model in rescoring", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Escolano", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [ |
| "R" |
| ], |
| "last": "Costa-Juss\u00e0", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "A R" |
| ], |
| "last": "Fonollosa", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos Escolano, Marta R. Costa-juss\u00e0, and Jos\u00e9 A. R. Fonollosa. 2017. The TALP-UPC neural machine translation system for German/Finnish-English using the inverse direction model in rescoring. In Proceed- ings of the Second Conference on Machine Transla- tion, Volume 2: Shared Task Papers.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural turing machines", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Wayne", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivo", |
| "middle": [], |
| "last": "Danihelka", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1410.5401" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv:1410.5401.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Dynamic neural turing machine with soft and hard addressing schemes", |
| "authors": [ |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarath", |
| "middle": [], |
| "last": "Chandar", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1607.00036" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. 2016. Dynamic neural tur- ing machine with soft and hard addressing schemes. arXiv:1607.00036.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Recurrent continuous translation models", |
| "authors": [ |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP 2013.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. ICLR 2014.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Learning to decode for future success", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Monroe", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1701.06549" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Will Monroe, and Daniel Jurafsky. 2017. Learning to decode for future success. arXiv:1701.06549.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Contrastive unsupervised word alignment with non-local features", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Liu and Maosong Sun. 2015. Contrastive unsu- pervised word alignment with non-local features. In AAAI 2015.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP 2015.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Interactive attention for neural machine translation", |
| "authors": [ |
| { |
| "first": "Fandong", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fandong Meng, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Interactive attention for neural machine transla- tion. In COLING 2016.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Coverage embedding models for neural machine translation", |
| "authors": [ |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "" |
| }, |
| { |
| "first": "Baskaran", |
| "middle": [], |
| "last": "Sankaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiguo", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Abe", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding models for neural machine translation. EMNLP 2016.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR 2013.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Key-value memory networks for directly reading documents", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Fisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Dodge", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir-Hossein", |
| "middle": [], |
| "last": "Karimi", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly read- ing documents. In EMNLP 2016.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A Systematic Comparison of Various Statistical Alignment Models", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In ACL 2002.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Using the output embedding to improve language models", |
| "authors": [ |
| { |
| "first": "Ofir", |
| "middle": [], |
| "last": "Press", |
| "suffix": "" |
| }, |
| { |
| "first": "Lior", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ofir Press and Lior Wolf. 2017. Using the output embed- ding to improve language models. In EACL 2017.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Neural programmer-interpreters", |
| "authors": [ |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Reed", |
| "suffix": "" |
| }, |
| { |
| "first": "Nando De", |
| "middle": [], |
| "last": "Freitas", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Scott Reed and Nando De Freitas. 2015. Neural programmer-interpreters. Computer Science.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Maksym Del, and Mark Fishel. 2017. C-3MA: Tartu-Riga-Zurich translation systems for WMT17", |
| "authors": [ |
| { |
| "first": "Mat\u012bss", |
| "middle": [], |
| "last": "Rikters", |
| "suffix": "" |
| }, |
| { |
| "first": "Chantal", |
| "middle": [], |
| "last": "Amrhein", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mat\u012bss Rikters, Chantal Amrhein, Maksym Del, and Mark Fishel. 2017. C-3MA: Tartu-Riga-Zurich trans- lation systems for WMT17.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Frustratingly short attention spans in neural language modeling", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rockt\u00e4schel", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Welbl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Rockt\u00e4schel, Johannes Welbl, and Sebastian Riedel. 2017. Frustratingly short attention spans in neural lan- guage modeling. In ICLR 2017.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Bidirectional recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldip", |
| "middle": [ |
| "K" |
| ], |
| "last": "Paliwal", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "IEEE Transactions on Signal Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Schuster and Kuldip K. Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. Computer Science.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The University of Edinburgh's Neural MT Systems for WMT17", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Currey", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulrich", |
| "middle": [], |
| "last": "Germann", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Heafield", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [ |
| "Valerio" |
| ], |
| "last": "Miceli Barone", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Second Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, An- tonio Valerio Miceli Barone, and Philip Williams. 2017a. The University of Edinburgh's Neural MT Sys- tems for WMT17. In Proceedings of the Second Con- ference on Machine Translation, Volume 2: Shared Task Papers in ACL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Nematus: a toolkit for neural machine translation", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Orhan", |
| "middle": [], |
| "last": "Firat", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Hitschler", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Junczys-Dowmunt", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "L\u00e4ubli", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [ |
| "Valerio" |
| ], |
| "last": "Miceli Barone", |
| "suffix": "" |
| }, |
| { |
| "first": "Jozef", |
| "middle": [], |
| "last": "Mokry", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Nadejde", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexan- dra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L\u00e4ubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017b. Nematus: a toolkit for neural machine trans- lation. In EACL 2017.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Minimum risk training for neural machine translation", |
| "authors": [ |
| { |
| "first": "Shiqi", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhongjun", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Hua", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In ACL 2016.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "End-to-end memory networks", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "" |
| }, |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Szlam", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NIPS 2015.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS 2014.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Modeling coverage for neural machine translation", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohua", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL 2016.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Context gates for neural machine translation", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohua", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, and Hang Li. 2017a. Context gates for neural ma- chine translation. Transactions of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Neural machine translation with reconstruction", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaohua", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017b. Neural machine translation with re- construction. In AAAI 2017.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Memory-enhanced decoder for neural machine translation", |
| "authors": [ |
| { |
| "first": "Mingxuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2016. Memory-enhanced decoder for neural machine translation. In EMNLP 2016.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Neural machine translation advised by statistical machine translation", |
| "authors": [ |
| { |
| "first": "Xing", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Deyi", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017. Neural machine trans- lation advised by statistical machine translation. In AAAI 2017.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Translating pro-drop languages with reconstruction models", |
| "authors": [ |
| { |
| "first": "Longyue", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuming", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018. Trans- lating pro-drop languages with reconstruction models. In AAAI 2018.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Neural machine translation with word predictions", |
| "authors": [ |
| { |
| "first": "Rongxiang", |
| "middle": [], |
| "last": "Weng", |
| "suffix": "" |
| }, |
| { |
| "first": "Shujian", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zaixiang", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Xin-Yu", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiajun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rongxiang Weng, Shujian Huang, Zaixiang Zheng, Xin-Yu Dai, and Jiajun Chen. 2017. Neural machine translation with word predictions. In EMNLP 2017.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
| "authors": [ |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Norouzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1609.08144" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Architecture of attention-based NMT.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "NMT decoder augmented with PAST and FUTURE layers.", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>: Evidence shows that attention-based NMT</td></tr><tr><td>fails to make full use of source information, thus los-</td></tr><tr><td>ing the holistic picture of source contents.</td></tr></table>" |
| }, |
| "TABREF6": { |
| "text": "Statistics of parameters, training and testing speeds (sentences per second).", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>: Influence of initialization of FRNN</td></tr><tr><td>layer (GRU-i)</td></tr></table>" |
| } |
| } |
| } |
| } |