{ "paper_id": "P17-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:17:34.961709Z" }, "title": "Deep Neural Machine Translation with Linear Associative Unit", "authors": [ { "first": "Mingxuan", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tencent Technology Co", "location": { "settlement": "Ltd" } }, "email": "wangmingxuan@ict.ac.cn" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Deeplycurious", "middle": [], "last": "Ai", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures. However, NMT systems with deep architecture in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which often make the optimization much more difficult. To address this problem, we propose novel linear associative units (LAU) to reduce the gradient propagation length inside the recurrent unit. Different from conventional approaches (LSTM unit and GRU), LAUs utilize linear associative connections between input and output of the recurrent unit, which allows unimpeded information flow in both the space and time directions. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve by 11.7 BLEU upon Groundhog and the best reported results in the same setting. 
On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable to the state-of-the-art.", "pdf_parse": { "paper_id": "P17-1013", "_pdf_hash": "", "abstract": [ { "text": "Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures. However, NMT systems with deep architecture in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which often make the optimization much more difficult. To address this problem, we propose novel linear associative units (LAU) to reduce the gradient propagation length inside the recurrent unit. Different from conventional approaches (LSTM unit and GRU), LAUs utilize linear associative connections between input and output of the recurrent unit, which allows unimpeded information flow in both the space and time directions. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve by 11.7 BLEU upon Groundhog and the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable to the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) is an end-to-end learning approach to machine translation which has recently shown promising results on multiple language pairs (Luong et al., 2015; Shen et al., 2015; Wu et al., 2016; Tu et al., 2016; Zhang and Zong, 2016; Jean et al., 2015; Meng et al., 2015) . 
Unlike conventional Statistical Machine Translation (SMT) systems (Koehn et al., 2003; Chiang, 2005; Xiong et al., 2006; Mi et al., 2008) which consist of multiple separately tuned components, NMT aims at building a single, large neural network that directly maps input text to associated output text. Typical NMT models consist of two recurrent neural networks (RNNs): an encoder that reads and encodes the input text into a distributed representation, and a decoder that generates translated text conditioned on the input representation.", "cite_spans": [ { "start": 161, "end": 181, "text": "(Luong et al., 2015;", "ref_id": "BIBREF13" }, { "start": 182, "end": 200, "text": "Shen et al., 2015;", "ref_id": "BIBREF20" }, { "start": 201, "end": 217, "text": "Wu et al., 2016;", "ref_id": "BIBREF26" }, { "start": 218, "end": 234, "text": "Tu et al., 2016;", "ref_id": "BIBREF24" }, { "start": 235, "end": 256, "text": "Zhang and Zong, 2016;", "ref_id": "BIBREF30" }, { "start": 257, "end": 275, "text": "Jean et al., 2015;", "ref_id": "BIBREF9" }, { "start": 276, "end": 294, "text": "Meng et al., 2015)", "ref_id": "BIBREF15" }, { "start": 363, "end": 383, "text": "(Koehn et al., 2003;", "ref_id": "BIBREF11" }, { "start": 384, "end": 397, "text": "Chiang, 2005;", "ref_id": "BIBREF2" }, { "start": 398, "end": 417, "text": "Xiong et al., 2006;", "ref_id": "BIBREF27" }, { "start": 418, "end": 434, "text": "Mi et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Driven by the breakthrough achieved in computer vision Srivastava et al., 2015) , research in NMT has recently turned towards studying Deep Neural Networks (DNNs). Wu et al. (2016) and Zhou et al. (2016) found that deep architectures in both the encoder and decoder are essential for capturing subtle irregularities in the source and target languages. However, training a deep neural network is not as simple as stacking layers. 
Optimization often becomes increasingly difficult with more layers. One reasonable explanation is the notorious problem of vanishing/exploding gradients which was first studied in the context of vanilla RNNs (Pascanu et al., 2013b) . Most prevalent approaches to solve this problem rely on short-cut connections between adjacent layers such as residual or fast-forward connections Srivastava et al., 2015; Zhou et al., 2016) . Different from previous work, we choose to reduce the gradient path inside the recurrent units and propose a novel Linear Associative Unit (LAU) which creates a fusion of both linear and nonlinear transformations of the input. Through this design, information can flow across several steps both in time and in space with little attenuation. The mechanism makes it easy to train deep stacked RNNs which can efficiently capture the complex inherent structures of sentences for NMT. Based on LAUs, we also propose an NMT model, called DEEPLAU, with deep architecture in both the encoder and decoder.", "cite_spans": [ { "start": 55, "end": 79, "text": "Srivastava et al., 2015)", "ref_id": "BIBREF21" }, { "start": 164, "end": 180, "text": "Wu et al. (2016)", "ref_id": "BIBREF26" }, { "start": 185, "end": 203, "text": "Zhou et al. (2016)", "ref_id": "BIBREF31" }, { "start": 637, "end": 660, "text": "(Pascanu et al., 2013b)", "ref_id": "BIBREF19" }, { "start": 810, "end": 834, "text": "Srivastava et al., 2015;", "ref_id": "BIBREF21" }, { "start": 835, "end": 853, "text": "Zhou et al., 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although DEEPLAU is fairly simple, it gives remarkable empirical results. On the NIST Chinese-English task, DEEPLAU with proper settings yields the best reported result and also a 4.9 BLEU improvement over a strong NMT baseline with most known techniques (e.g., dropout) incorporated. 
On WMT English-German and English-French tasks, it also achieves performance superior or comparable to the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A typical neural machine translation system is a single, large neural network that directly models the conditional probability p(y|x) of translating a source sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural machine translation", "sec_num": "2" }, { "text": "x = {x_1, x_2, ..., x_{T_x}} to a target sentence y = {y_1, y_2, ..., y_{T_y}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural machine translation", "sec_num": "2" }, { "text": "Attention-based NMT, with RNNsearch as its most popular representative, generalizes the conventional encoder-decoder by using an array of vectors to represent the source sentence and dynamically addressing its relevant segments during decoding. The process can be explicitly split into an encoding part, a decoding part and an attention mechanism. The model first encodes the source sentence x into a sequence of vectors c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural machine translation", "sec_num": "2" }, { "text": "= {h_1, h_2, ..., h_{T_x}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural machine translation", "sec_num": "2" }, { "text": "In general, h_i is the annotation of x_i from a bi-directional RNN, which contains information about the whole sentence with a strong focus on the parts surrounding x_i. Then, the RNNsearch model decodes and generates the target translation y based on the context c and the partially translated sequence y
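The LAU idea described in the introduction (a recurrent unit that fuses a nonlinear candidate with a gated linear transformation of the input, so part of the signal bypasses the saturating nonlinearity at every step) can be sketched in code. The following is a minimal illustrative sketch, not the paper's exact equations: the parameter names (W, U, Wl, Wg, Ug) and the single fusion gate are assumptions made for exposition.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lau_step(x_t, h_prev, W, U, Wl, Wg, Ug):
    """One step of an LAU-style recurrence (illustrative sketch).

    Blends a nonlinear candidate (as in a vanilla tanh RNN) with a
    linear transformation of the input via a learned gate, so
    information can flow through the linear path with little
    attenuation across time steps.
    """
    candidate = np.tanh(W @ x_t + U @ h_prev)  # nonlinear path
    linear = Wl @ x_t                          # linear associative path
    g = sigmoid(Wg @ x_t + Ug @ h_prev)        # fusion gate in (0, 1)
    return g * candidate + (1.0 - g) * linear

# Toy run over a short input sequence with random weights.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W, Wl, Wg = (rng.normal(size=(d_h, d_in)) for _ in range(3))
U, Ug = (rng.normal(size=(d_h, d_h)) for _ in range(2))
h = np.zeros(d_h)
for _ in range(5):
    h = lau_step(rng.normal(size=d_in), h, W, U, Wl, Wg, Ug)
```

Because the linear path is affine in x_t, its Jacobian does not saturate the way tanh does, which matches the intuition of a reduced gradient-propagation length inside the unit; stacking several such layers in the encoder and decoder gives the deep configuration the paper calls DEEPLAU.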