{ "paper_id": "Q17-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:12:12.989973Z" }, "title": "Context Gates for Neural Machine Translation", "authors": [ { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab", "institution": "Huawei Technologies", "location": { "settlement": "Hong Kong" } }, "email": "tu.zhaopeng@huawei.com" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tsinghua University", "location": { "settlement": "Beijing" } }, "email": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab", "institution": "Huawei Technologies", "location": { "settlement": "Hong Kong" } }, "email": "lu.zhengdong@huawei.com" }, { "first": "Xiaohua", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab", "institution": "Huawei Technologies", "location": { "settlement": "Hong Kong" } }, "email": "liuxiaohua3@huawei.com" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Noah's Ark Lab", "institution": "Huawei Technologies", "location": { "settlement": "Hong Kong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In neural machine translation (NMT), generation of a target word depends on both source and target contexts. We find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. Intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. Due to the lack of effective control over the influence from source and target contexts, conventional NMT tends to yield fluent but inadequate translations. 
To address this problem, we propose context gates which dynamically control the ratios at which source and target contexts contribute to the generation of target words. In this way, we can enhance both the adequacy and fluency of NMT with more careful control of the information flow from contexts. Experiments show that our approach significantly improves upon a standard attention-based NMT system by +2.3 BLEU points.", "pdf_parse": { "paper_id": "Q17-1007", "_pdf_hash": "", "abstract": [ { "text": "In neural machine translation (NMT), generation of a target word depends on both source and target contexts. We find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. Intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. Due to the lack of effective control over the influence from source and target contexts, conventional NMT tends to yield fluent but inadequate translations. To address this problem, we propose context gates which dynamically control the ratios at which source and target contexts contribute to the generation of target words. In this way, we can enhance both the adequacy and fluency of NMT with more careful control of the information flow from contexts. Experiments show that our approach significantly improves upon a standard attention-based NMT system by +2.3 BLEU points.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) has made significant progress in the past several years. Its goal is to construct and utilize a single large neural network to accomplish the entire translation task. 
One great advantage of NMT is that the translation system can be completely constructed by learning from data without human involvement (cf. feature engineering in statistical machine translation (SMT)). The encoder-decoder architecture is widely employed (Cho et al.,
input: j\u012bnni\u00e1n qi\u00e1n li\u01ceng yu\u00e8 gu\u01cengd\u014dng g\u0101ox\u012bn j\u00ecsh\u00f9 ch\u01cenp\u01d0n ch\u016bk\u01d2u 37.6y\u00ec m\u011biyu\u00e1n
NMT: in the first two months of this year , the export of new high level technology product was UNK - billion us dollars
src: china 's guangdong hi - tech exports hit 58 billion dollars
tgt: china 's export of high and new hi - tech exports of the export of the export of the export of the export of the export of the export of the export of the export of \u2022 \u2022 \u2022
Table 1: Source and target contexts are highly correlated to translation adequacy and fluency, respectively. src and tgt denote halving the contributions from the source and target contexts when generating the translation, respectively.", "cite_spans": [ { "start": 33, "end": 65, "text": "(Kalchbrenner and Blunsom, 2013;", "ref_id": "BIBREF9" }, { "start": 66, "end": 89, "text": "Sutskever et al., 2014;", "ref_id": "BIBREF16" }, { "start": 90, "end": 112, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 536, "end": 548, "text": "(Cho et al.,", "ref_id": null } ], "ref_spans": [ { "start": 987, "end": 994, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2014; Sutskever et al., 2014), in which the encoder summarizes the source sentence into a vector representation, and the decoder generates the target sentence word-by-word from the vector representation. The representation of the source sentence and the representation of the partially generated target sentence (translation) at each position are referred to as source context and target context, respectively. 
The generation of a target word is determined jointly by the source context and target context. Several techniques in NMT have proven to be very effective, including gating (Hochreiter and Schmidhuber, 1997) and attention (Bahdanau et al., 2015), which can model long-distance dependencies and complicated alignment relations in the translation process. Using an encoder-decoder framework that incorporates gating and attention techniques, it has been reported that the performance of NMT can surpass the performance of traditional SMT as measured by BLEU score (Luong et al., 2015).", "cite_spans": [ { "start": 6, "end": 29, "text": "Sutskever et al., 2014)", "ref_id": "BIBREF16" }, { "start": 585, "end": 619, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF6" }, { "start": 634, "end": 657, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" }, { "start": 974, "end": 994, "text": "(Luong et al., 2015)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite this success, we observe that NMT usually yields fluent but inadequate translations. 1 We attribute this to a stronger influence of target context on generation, which results from a stronger language model than that used in SMT. One question naturally arises: what will happen if we change the ratio of influences from the source or target contexts? Table 1 shows an example in which an attention-based NMT system (Bahdanau et al., 2015) generates a fluent yet inadequate translation (e.g., missing the translation of \"gu\u01cengd\u014dng\"). When we halve the contribution from the source context, the result further loses its adequacy by missing the partial translation \"in the first two months of this year\". One possible explanation is that the target context takes a higher weight and thus the system favors a shorter translation. 
In contrast, when we halve the contribution from the target context, the result completely loses its fluency by repeatedly generating the translation of \"ch\u016bk\u01d2u\" (i.e., \"the export of\") until the generated translation reaches the maximum length. Therefore, this example indicates that source and target contexts in NMT are highly correlated to translation adequacy and fluency, respectively.", "cite_spans": [ { "start": 93, "end": 94, "text": "1", "ref_id": null }, { "start": 422, "end": 445, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 359, "end": 366, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, conventional NMT lacks effective control on the influence of source and target contexts. At each decoding step, NMT treats the source and target contexts equally, and thus ignores the different needs of the contexts. For example, content words in the target sentence are more related to the translation adequacy, and thus should depend more on the source context. In contrast, function words in the target sentence are often more related to the translation fluency (e.g., \"of\" after \"is fond\"), and thus should depend more on the target context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we propose to use context gates to control the contributions of source and target contexts on the generation of target words (decoding) in NMT. Context gates are non-linear gating units which can dynamically select the amount of context information in the decoding process. Specifically, at each decoding step, the context gate examines both the source and target contexts, and outputs a ratio between zero and one to determine the percentages of information to utilize from the two contexts. 
In this way, the system can balance the adequacy and fluency of the translation with regard to the generation of a word at each position.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results show that introducing context gates leads to an average improvement of +2.3 BLEU points over a standard attention-based NMT system (Bahdanau et al., 2015). An interesting finding is that we can replace the GRU units in the decoder with conventional RNN units and in the meantime utilize context gates. The translation performance is comparable with the standard NMT system with GRU, but the system enjoys a simpler structure (i.e., uses only a single gate and half of the parameters) and faster decoding (i.e., requires only half the matrix computations for decoding). 2", "cite_spans": [ { "start": 152, "end": 174, "text": "(Bahdanau et al., 2015", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Suppose that x = x_1, . . . , x_j, . . . , x_J represents a source sentence and y = y_1, . . . , y_i, . . . , y_I a target sentence. NMT directly models the probability of translation from the source sentence to the target sentence word by word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(y|x) = \\prod_{i=1}^{I} P(y_i | y