{
"paper_id": "D18-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:52:34.756627Z"
},
"title": "Addressing Troublesome Words in Neural Machine Translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CAS",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CAS",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": "",
"affiliation": {
"laboratory": "National Laboratory of Pattern Recognition",
"institution": "CAS",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "cqzong@nlpr.ia.ac.cn"
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Baidu Inc",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "wuhua@baidu.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "One of the weaknesses of Neural Machine Translation (NMT) is in handling low-frequency and ambiguous words, which we refer to as troublesome words. To address this problem, we propose a novel memory-enhanced NMT method. First, we investigate different strategies to define and detect troublesome words. Then, a contextual memory is constructed to memorize which target words should be produced in which situations. Finally, we design a hybrid model that dynamically accesses the contextual memory so as to correctly translate troublesome words. Extensive experiments on Chinese-to-English and English-to-German translation tasks demonstrate that our method significantly outperforms strong baseline models in translation quality, especially in handling troublesome words.",
"pdf_parse": {
"paper_id": "D18-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "One of the weaknesses of Neural Machine Translation (NMT) is in handling low-frequency and ambiguous words, which we refer to as troublesome words. To address this problem, we propose a novel memory-enhanced NMT method. First, we investigate different strategies to define and detect troublesome words. Then, a contextual memory is constructed to memorize which target words should be produced in which situations. Finally, we design a hybrid model that dynamically accesses the contextual memory so as to correctly translate troublesome words. Extensive experiments on Chinese-to-English and English-to-German translation tasks demonstrate that our method significantly outperforms strong baseline models in translation quality, especially in handling troublesome words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) based on the encoder-decoder architecture has become the new state of the art thanks to distributed representations and end-to-end learning (Cho et al., 2014; Bahdanau et al., 2015; Junczys-Dowmunt et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 166,
"end": 184,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 185,
"end": 207,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF1"
},
{
"start": 208,
"end": 237,
"text": "Junczys-Dowmunt et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 238,
"end": 259,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 260,
"end": 281,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the current NMT model is a global model that maximizes performance on the overall data and has problems in handling low-frequency words and ambiguous words 1 ; we refer to these words as troublesome words and define them in Section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some previous work attempts to tackle the translation problem of low-frequency words. Sennrich et al. (2016) propose to decompose words into subwords, which are used as translation units so that low-frequency words can be represented by frequent subword sequences. [Figure 1. Source: \u963f\u5c14\u5361\u7279 \u5ba3\u79f0 \u53bb\u5e74 \u7b2c\u56db \u5b63 \u9500\u552e \u6210\u957f \u8fd1 \u767e\u5206\u4e4b\u4e09\u5341; Pinyin: aerkate cheng qunian disi ji xiaoshou chengzhang jin baifenzhisanshi; Reference: alcatel says sales in fourth quarter last year grew nearly 30 %; NMT: he said sales grew nearly 30 percent in fourth quarter of last year; NMT+LexiconTable: alcatel said sales growth nearly 30 percent in fourth quarter of last year. The NMT model produces a wrong translation for the low-frequency word \"aerkate\"; when introducing an external lexicon table without contextual information, the model incorrectly translates the ambiguous word \"chengzhang\" into \"growth\".] Arthur et al. (2016) and Feng et al. (2017) try to incorporate a translation lexicon into NMT in order to obtain correct translations of low-frequency words. However, the former method still faces the low-frequency problem of subwords, and the latter has the drawback that lexicons are used without considering specific contexts. Fig. 1 shows an example in which \"aerkate\" is an infrequent word and the baseline NMT incorrectly translates it into the pronoun \"he\". Incorporating a bilingual lexicon rectifies this mistake but wrongly converts \"chengzhang\" into the incorrect target word \"growth\", since the entry \"(chengzhang, growth)\" in the bilingual lexicon is used without taking the context into account. Furthermore, these two kinds of methods mainly focus on low-frequency words, which are only a part of the troublesome words.",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "Sennrich et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 874,
"end": 894,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 899,
"end": 917,
"text": "Feng et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 548,
"end": 556,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1208,
"end": 1214,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we categorize the words (including infrequent words and ambiguous words) which are difficult to translate as troublesome words and propose a novel memory-augmented framework to address them. Our method first investigates different strategies to define the troublesome words. Then, these words and their contexts in the training data are memorized with a contextual memory which is finally accessed dynamically during decoding to solve the translation problem of the troublesome words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Specifically, we first decode all the source sentences of the bilingual training data with the baseline NMT and define the troublesome source words according to the distance between the predicted words and the gold words. The troublesome words, associated with their hidden contextual representations, are stored in a memory which memorizes the correct translations and the corresponding contextual information. During decoding, we activate the contextual memory when we encounter troublesome words and employ the contextual similarity between the test sentence and the memory to determine appropriate target words. We test our methods on Chinese-to-English and English-to-German translation tasks. The experimental results demonstrate that translation performance can be significantly improved and that a large portion of troublesome words can be correctly translated. The contributions are listed as follows: 1) We are the first to define and handle the troublesome words in neural machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2) We propose to memorize not only the bilingual lexicons but also their contexts with a contextual memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3) We design a dynamic approach to correctly translate the troublesome words by combining the contextual memory and the NMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NMT contains an encoder and a decoder. The encoder transforms a source sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "X = {x_1, x_2, ..., x_{Tx}} into a set of context vectors C = (h^m_1, h^m_2, ..., h^m_{Tx})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "by using m stacked Long Short-Term Memory (LSTM) layers (Hochreiter and Schmidhuber, 1997) . h^m_j is the hidden state of the top layer of the encoder. The bottom layer of the encoder is a bi-directional LSTM layer that collects context from both the left and the right side.",
"cite_spans": [
{
"start": 70,
"end": 104,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "The decoder generates one target word at a time by computing p^N_i(y_i | y_{<i}, C) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "p^N_i(y_i | y_{<i}, C) = softmax(W_{y_i} z_i + b_s) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "where z_i is the attention output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z i = tanh(W z [z m i ; c i ])",
"eq_num": "(2)"
}
],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "c_i can be calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "c_i = \u2211_{j=1}^{Tx} a_{i,j} h^m_j (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "where a_{i,j} is the attention weight:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "a_{i,j} = exp(h^m_j \u00b7 z^m_i) / \u2211_{j'} exp(h^m_{j'} \u00b7 z^m_i) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "where z^m_i is the hidden state of the top layer of the decoder. A more detailed introduction can be found in (Luong et al., 2015) .",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
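The decoding computation in Eqs. (1)-(4) can be sketched in plain Python. This is a minimal illustration rather than the paper's implementation: the dot-product score and all function names are assumptions, following Luong-style global attention.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_step(enc_states, z_dec):
    """One decoding step of global attention (cf. Eqs. (3)-(4)).

    enc_states: encoder top-layer states h^m_j (lists of floats)
    z_dec:      decoder top-layer state z^m_i (list of floats)
    Returns the attention weights a_{i,j} and the context vector c_i.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    scores = [dot(h, z_dec) for h in enc_states]        # score(h_j, z_i)
    weights = softmax(scores)                           # Eq. (4)
    dim = len(enc_states[0])
    context = [sum(w * h[k] for w, h in zip(weights, enc_states))
               for k in range(dim)]                     # Eq. (3)
    return weights, context
```

A source position whose state is closer to the decoder state receives a larger weight and dominates the context vector.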
{
"text": "Notation. In this paper, we denote the whole source vocabulary by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "V_S = {s_m}_{m=1}^{|V_S|} and the target vocabulary by V_T = {t_n}_{n=1}^{|V_T|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": ", where s_m is a source word and t_n is a target word. We denote a source sentence by X and a target sentence by Y. Each source word in X is denoted by x_j, and each target word in Y is denoted by y_i. Accordingly, a target word can be denoted both by t_n and by y_i; these notations are consistent: t_n means the target word is the n-th word in vocabulary V_T, while y_i means it is the i-th word in sentence Y. Similarly, a source word can be denoted by both s_m and x_j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "2"
},
{
"text": "Our method contains three parts: 1) definition and detection of the troublesome words (Section 3.1); 2) contextual memory construction (Section 3.2); and 3) hybrid approach combining contextual memory and baseline NMT model (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method Description",
"sec_num": "3"
},
{
"text": "Generally speaking, troublesome words are those that are difficult for the baseline NMT system B_NMT to translate. Fig. 2 shows the main process for detecting troublesome words. Given each training sentence pair (X, Y), B_NMT decodes the source sentence X and outputs the predicted probability p^N_i(y_i) of each gold target word. We call y_i an exception if p^N_i(y_i) satisfies the predefined exception criteria introduced below. The source word x_j is an exception (a candidate troublesome word) if (x_j, y_i) is an entry in the word alignment A 2 . Suppose x_j appears N times in the training data and there are M exceptions among all its aligned gold target words. Then, the exception rate r(x_j) is M/N. Definition: x_j is a troublesome word if r(x_j) > \u03b5, where \u03b5 is a predefined threshold.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 123,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "Exception Criteria. As discussed before, we need an exception criterion to measure whether a gold target word is an exception or not. In this paper, we investigate three exception criteria. Here, we introduce each of them through a toy example shown in Fig. 3 , in which the source sentence is X = {x_1, x_2, x_3} and the gold target sentence is Y = {y_1, y_2, y_3}. The left part shows the probability distribution p^N_i(V_T) over the whole target vocabulary at each decoding step i, where the probability of the gold target word is highlighted in yellow. The right part shows the word alignments between X and Y. 1) Absolute Criterion. A gold target word y_i is an exception if its predicted probability p^N_i(y_i) is lower than a predefined threshold, namely",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 259,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "p^N_i(y_i) < p_0. In Fig. 3, p^N_i(y_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "at each decoding step is 0.8, 0.31 and 0.2, respectively. If we set p_0 = 0.5, then p^N_2(y_2) and p^N_3(y_3) are lower than the threshold p_0, and x_1 and x_3 are both exceptions according to the alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "2) Gap Criterion. For this criterion, we utilize the predicted probability gap between the gold target word and the top one. Specifically, the gap can be calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "g(y_i) = max(p^N_i(V_T)) \u2212 p^N_i(y_i) (5) where max(p^N_i(V_T))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "is the top probability in the distribution at the i-th decoding step. y_i is an exception if g(y_i) > g_0. In Fig. 3 , the largest predicted probabilities at each decoding step, max(p^N_i(V_T)), are 0.8, 0.35 and 0.75, respectively. Thus, the gaps are 0.0, 0.04 and 0.55. If g_0 = 0.1, x_3 is an exception since g(y_3) > g_0 and x_3 aligns to y_3. ",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 122,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "Figure 3: A toy example to show the process: if p^N_i(y_i) (left) satisfies the predefined exception criterion and x_j aligns to y_i, then x_j is an exception.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "3) Ranking Criterion. This criterion is based on the ranking of p^N_i(y_i) in the distribution p^N_i(V_T) (denoted by rank(y_i)): if rank(y_i) > rank_0, then y_i is an exception. In Fig. 3 , the ranking of each gold target word is 1, 3 and 2. If we set rank_0 = 2, then rank(y_2) = 3 > rank_0 and x_1 is an exception due to the alignment between x_1 and y_2.",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 188,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
{
"text": "Using the above exception criteria and the definition of troublesome words, we can detect all the source-side troublesome words in the bilingual training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Troublesome Word Definition",
"sec_num": "3.1"
},
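The three exception criteria above can be sketched as a small helper. This is a hypothetical illustration (the function name and data layout are assumptions); the toy numbers mirror the Fig. 3 example, with gold-word probabilities 0.8, 0.31 and 0.2.

```python
def find_exceptions(gold_probs, dists, criterion="gap",
                    p0=0.5, g0=0.1, rank0=2):
    """Return the decoding steps whose gold word is an exception
    under one of the three criteria of Section 3.1."""
    exceptions = []
    for i, (p_gold, dist) in enumerate(zip(gold_probs, dists)):
        if criterion == "absolute":      # p^N_i(y_i) < p_0
            is_exc = p_gold < p0
        elif criterion == "gap":         # max(p^N_i(V_T)) - p^N_i(y_i) > g_0
            is_exc = max(dist) - p_gold > g0
        elif criterion == "ranking":     # rank of p^N_i(y_i) > rank_0
            rank = 1 + sum(1 for p in dist if p > p_gold)
            is_exc = rank > rank0
        else:
            raise ValueError(criterion)
        if is_exc:
            exceptions.append(i)
    return exceptions

# Toy distributions mirroring Fig. 3 (gold probabilities 0.8, 0.31, 0.2):
dists = [[0.8, 0.1, 0.1], [0.35, 0.34, 0.31], [0.75, 0.2, 0.05]]
gold = [0.8, 0.31, 0.2]
```

Under the gap criterion with g_0 = 0.1 only the third step is flagged, matching the walk-through in the text; the absolute and ranking criteria flag different subsets of the same toy example.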
{
"text": "For a troublesome word, we now introduce how to build a contextual memory M to store its translation knowledge. Specifically, each entry of the contextual memory contains five elements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s m , t n , c(s m , t n ), p L (s m , t n ), r(s m )",
"eq_num": "(6)"
}
],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "each of them is described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "\u2022 s m is a troublesome source word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "\u2022 t_n is a gold target word for s_m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "\u2022 c(s_m, t_n) is the context of the lexicon pair (s_m, t_n). Here, we use the hidden state of the encoder, h_j, to represent the context, since it contains information from both the left (the forward state) and the right (the backward state). Note that when we traverse the training data and memorize the contexts of all troublesome words, there are many cases in which the same pair (s_m, t_n) appears in different contexts. In order to reduce the memory size and fuse the different contexts of the same lexicon pair, we merge these memories by averaging the contexts. Assume there are K different contexts for (s_m, t_n), and they",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "Figure 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "The architecture of contextual memory-augmented NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "are denoted by h^k(s_m, t_n). The average context of (s_m, t_n) can be calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c(s m , t n ) = K k=1 h k (s m , t n ) K",
"eq_num": "(7)"
}
],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "Note that the context here is defined on the source side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "\u2022 p^L(s_m, t_n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "is the lexicon translation probability. It is the average of source-to-target and target-to-source probabilities calculated through maximum likelihood estimation on word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "\u2022 r(s_m) is the exception rate of s_m introduced in Section 3.1; it indicates the translation difficulty of a source word. We will use r(s_m) to determine the dynamic weights of the contextual memories in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "Noise Reduction. As we know, the training data and word alignments are not perfect and may introduce noise to the contextual memory. To reduce the noise, we employ two strategies. 1) To improve the quality of the alignments A, we derive the alignment results from source-totarget and target-to-source, respectively. We only save the alignments which exist in both directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "2) We eliminate the lexicon pairs whose translation probabilities are too small. For a lexicon pair (s m , t n ), if its lexicon translation probability is smaller than 0.01, we treat this lexicon pair as a noisy sample and eliminate it from our memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Memory Construction",
"sec_num": "3.2"
},
{
"text": "In this section, we integrate the contextual memory into NMT to handle troublesome words. The overall framework is depicted in Fig. 4 and the integration process can be divided into four steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 133,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "Step 1. Given a test sentence X, the first step is to find the troublesome words in X and collect the corresponding local memories from the global contextual memory M. For each source word x_j, if it is a troublesome word, we retrieve its entries from M and obtain the local memory as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x j , t n , c(x j , t n ), p L (x j , t n ), r(x j )",
"eq_num": "(8)"
}
],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "Step 2. The next step is to measure the contextual similarity between the context in the test sentence X and the context in M. For the troublesome word x_j \u2208 X, we still use the encoder hidden state h_j to represent the context in X. The corresponding context in M is c(x_j, t_n) in Eq. (8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "Here, we use a feed-forward network to measure this similarity 3 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "d_j(t_n) = sigmoid(v_d^T tanh(W_h h_j + W_c c(x_j, t_n))) (9) where v_d, W_h and W_c are learnable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "The sigmoid function guarantees that the similarity score lies in the range (0, 1). This similarity d_j(t_n) determines whether or not to adopt the target translation word t_n from M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "Step 3. The next task is calculating the probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "p^M_i(t_n) of t_n at each decoding step i. p^M_i(t_n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "is the probability predicted by the contextual memory M and is calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "p^M_i(t_n) = \u2211_{j=1}^{Tx} a_{i,j} * d_j(t_n) * p^L(x_j, t_n) (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "where a_{i,j} is the attention weight, d_j(t_n) is the context similarity in Eq. (9), and p^L(x_j, t_n) is the lexicon translation probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "Step 4. The final task is to combine the memory-predicted probability (p^M_i in Eq. (10)) and the NMT-predicted one (p^N_i in Eq. (1)). Here, we propose a dynamic strategy to balance these two probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "p^F_i(t_n) = \u03bb_i * p^M_i(t_n) + (1 \u2212 \u03bb_i) * p^N_i(t_n) (11) where p^F_i(t_n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "is the final probability of the target word t_n, and \u03bb_i is the dynamic weight that adjusts the contributions of the memory and the NMT model. Here we explain why we apply this dynamic manner. Recall that for each troublesome source word s_m, we calculate its exception rate (similar to an error rate). If a troublesome word has a low exception rate, this source word is relatively easy for the neural model to translate, and in this case p^N_i is more reliable. Thus we design the dynamic weight \u03bb_i according to the exception rate r(x_j):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb i = sigmoid(\u03b2 \u03b3 * \u03b3 i ) \u03b3 i = T x j a i,j * r(x j )",
"eq_num": "(12)"
}
],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "where \u03b2_\u03b3 is a learnable parameter. From Eq. (12), the dynamic weight \u03bb_i is determined by both the attention weight a_{i,j} and the exception rate r(x_j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
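Steps 3-4 (Eqs. (10)-(12)) can be illustrated with a short Python sketch. This is a hypothetical helper, not the paper's code: beta stands for the learnable scalar \u03b2_\u03b3, and the memory distribution p_mem is assumed to have been computed via Eq. (10).

```python
import math

def combine_probs(p_mem, p_nmt, attn, exc_rates, beta=1.0):
    """Dynamic interpolation of Eqs. (11)-(12), as a sketch.

    p_mem, p_nmt: memory / NMT distributions over the target vocabulary
    attn:         attention weights a_{i,j} over source positions
    exc_rates:    exception rate r(x_j) for each source position
    """
    gamma = sum(a * r for a, r in zip(attn, exc_rates))   # Eq. (12)
    lam = 1.0 / (1.0 + math.exp(-beta * gamma))           # sigmoid
    # Eq. (11): interpolate the two distributions with weight lambda_i.
    return [lam * m + (1.0 - lam) * n for m, n in zip(p_mem, p_nmt)]
```

The more attention mass falls on source words with high exception rates, the larger \u03bb_i becomes and the more the memory distribution dominates the final prediction.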
{
"text": "Training the parameters. As discussed above, our method contains some parameters (v_d, W_h, W_c and \u03b2_\u03b3) to be learned. We denote the parameters introduced by our method by \u03b8_M and the parameters of NMT by \u03b8_N. To make training efficient, given the aligned training data D =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "{(X^(d), Y^(d))}_{d=1}^{|D|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "we keep \u03b8_N unchanged and optimize \u03b8_M by maximizing the following objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "L(\u03b8_M) = (1/|D|) \u2211_{d=1}^{|D|} \u2211_{i=1}^{T_y} log p^F_i(y^{(d)}_i; \u03b8_M) (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "where p^F_i can be calculated by Eq. (11).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Contextual Memory into NMT",
"sec_num": "3.3"
},
{
"text": "We test the proposed methods on Chinese-to-English (CH-EN) and English-to-German (EN-DE) translation. For CH-EN translation, we use the LDC corpus, which includes 2.1M sentence pairs, for training. The NIST 2003 dataset is used for validation, and the NIST 2004-2006 and 2008 datasets are used for testing. For EN-DE translation, we use the WMT 2014 EN-DE dataset, which includes 4.5M sentence pairs, for training; the 2012-2013 datasets are used for validation and the 2014 dataset for testing. We use the Zoph RNN toolkit 4 to implement all described methods. In all experiments, the encoder and decoder include two stacked LSTM layers. The word embedding dimension and the size of the hidden layers are both set to 1,000. The minibatch size is set to 128. We discard training sentence pairs whose length exceeds 100 and run a total of 20 iterations for all translation tasks. We test all methods at two granularities: words and sub-words. For word granularity, we limit the vocabulary to 30K (CH-EN) and 50K (EN-DE) for both the source and target languages. For sub-word granularity, we use the BPE method (Sennrich et al., 2016) with 30K (CH-EN) and 32K (EN-DE) merge operations. The beam size is set to 12. We use case-insensitive 4-gram BLEU (Papineni et al., 2002) for translation quality evaluation.",
"cite_spans": [
{
"start": 1082,
"end": 1105,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 1214,
"end": 1237,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "We compare our method with other relevant methods as follows: 1) Baseline: It is the baseline NMT system with global attention (Luong et al., 2015; Zoph and Knight, 2016; Jean et al., 2015) .",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 148,
"end": 170,
"text": "Zoph and Knight, 2016;",
"ref_id": "BIBREF36"
},
{
"start": 171,
"end": 189,
"text": "Jean et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "2) Arthur: It is the state-of-the-art method that incorporates discrete translation lexicons into NMT (Arthur et al., 2016) . We implement Arthur et al. (2016)'s method in two different ways. In the first way, we keep the Baseline unchanged and utilize Arthur et al. (2016) 's method only in the test phase; we denote this system by Arthur(test) . In the second way, we allow the Baseline to be retrained with Arthur et al. (2016) 's method, and denote the system by Arthur(train+test). We replicate their work using the bias method, with the hyperparameter set to 0.001 as reported in their paper.",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Arthur et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 254,
"end": 274,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 329,
"end": 341,
"text": "Arthur(test)",
"ref_id": null
},
{
"start": 396,
"end": 416,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "3) X+MEM: It is our proposed memory-augmented method for any neural model X, in which we define troublesome words using the gap criterion with threshold g_0 = 0.1. We set the exception-rate threshold to 0.05, which is tuned on the validation set: if the exception rate of a source word exceeds 0.05, we treat this word as a troublesome word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "5 Results on CH-EN Translation 5.1 Our methods vs. Baseline Table 1 reports the main translation results of CH-EN translation. We first compare Baseline+MEM with Baseline. As shown in row 1 and row 5 of Table 1, Baseline+MEM improves over Baseline on all test datasets, and the average improvement is 1.37 BLEU points. The results show that our method significantly outperforms the baseline model. [Table 1 caption: the last column shows the BLEU-point improvement of system \"X+MEM\" over system X. \"*\" indicates that system \"X+MEM\" is statistically significantly better (p < 0.05) than system X and \" \u2020\" indicates p < 0.01.] ",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4"
},
{
"text": "We also test the proposed method when the translation unit is the sub-word. The baseline and our method using sub-words as translation units are denoted by Baseline(sub-word) and Baseline(sub-word)+MEM, respectively. The results are shown in row 4 and row 7. From the results, Baseline(sub-word)+MEM outperforms Baseline(sub-word) by 1.01 BLEU points, indicating that adopting sub-words as translation units still faces the problem of troublesome tokens, and that our method can alleviate this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Sub-words",
"sec_num": "5.2"
},
{
"text": "We also compare our method with Arthur et al. (2016)'s method, which incorporates a translation lexicon into NMT. Here, the comparison is conducted in two ways, based on whether the baseline neural model is fixed or retrained. Fixed Baseline. Comparing Arthur(test) (row 2 in Table 1) and Baseline+MEM (row 5 in Table 1), we can see that our proposed method surpasses Arthur(test) by 1.05 BLEU points. As there are three differences between our method and Arthur(test), we conduct the following experiments to evaluate the effect of each difference.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Our Method vs. Method Using Translation Lexicon",
"sec_num": "5.3"
},
{
"text": "The first difference is that our memory only stores the lexicon pairs for troublesome words, while Arthur(test) utilizes all the available lexicon pairs. [Figure 5 example: Source: \u963f\u5c14\u5361\u7279 \u5ba3\u79f0 \u53bb\u5e74 \u7b2c\u56db \u5b63 \u9500\u552e \u6210\u957f \u8fd1 \u767e\u5206\u4e4b\u4e09\u5341; Pinyin: aerkate cheng qunian disi ji xiaoshou chengzhang jin baifenzhisanshi; Reference: alcatel says sales in fourth quarter last year grew nearly 30 %; Baseline: he said sales grew nearly 30 percent in fourth quarter of last year; Arthur: alcatel says sales growth nearly 30 percent in fourth quarter of last year; Baseline+MEM: alcatel says sales grew nearly 30 percent in fourth quarter of last year] We implement another system that is similar to Arthur(test), except that we only utilize the troublesome lexicon pairs. We denote this system by Tword. The results are reported in Table 2 . From the results, we find that Tword obtains better translation results than Arthur(test) while using far fewer lexicon pairs (125K vs. 938K).",
"cite_spans": [],
"ref_spans": [
{
"start": 942,
"end": 949,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Our Method vs. Method Using Translation Lexicon",
"sec_num": "5.3"
},
{
"text": "The second difference is that we take the context into consideration. When we add the context on the basis of Tword (denoted by +Context), it further improves the baseline system by 1.03 BLEU points, indicating the importance of the context. Fig.5 shows the example mentioned in Section 1, in which Arthur(test) translates chengzhang into the wrong target word growth, while Baseline+MEM overcomes this mistake with the help of the context modeling.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 247,
"text": "Fig.5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Our Method vs. Method Using Translation Lexicon",
"sec_num": "5.3"
},
{
"text": "We also implement another system, in which we build the contextual memory for all source words. [Figure 6 caption: The comparison of different criteria. The gap criterion outperforms the others as the memory size increases.] We denote this system by All+Context and the results are reported in Table 2 . As shown in Table 2 , All+Context surpasses Arthur(test) by 0.75 BLEU points, but at the cost of a 6.4G memory footprint and 1.829s of decoding time. However, if we only build the contextual memory for the troublesome words, compared to All+Context there is only a slight decline in BLEU points (40.19 vs. 40.23), while the memory size is sharply reduced to 893M and the decoding time to 0.511s, showing that our strategy of only building the contextual memory for troublesome words is effective.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "Figure 6",
"ref_id": null
},
{
"start": 288,
"end": 295,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 310,
"end": 318,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Our Method vs. Method Using Translation Lexicon",
"sec_num": "5.3"
},
{
"text": "The final difference is that we employ the dynamic strategy to balance between NMT and the contextual memory. When we employ this dynamic strategy (denoted by +Dynamic), the improvement can further reach 1.37 BLEU points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Size",
"sec_num": null
},
{
"text": "Retrained Baseline. In the second comparison, we allow the baseline model to be retrained with Arthur's method (Arthur(train+test)). We then implement our method using Arthur(train+test) as the baseline (denoted by Arthur(train+test)+MEM). Comparing the results of these two methods in Table 1 (lines 3 and 6), our method is still effective on the retrained model. The average gain is 0.92 BLEU points.",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Memory Size",
"sec_num": null
},
{
"text": "In our method, we investigate three exception criteria to define the troublesome words. The following experiment is conducted to compare their performance. For fairness, the three criteria are compared at the same contextual memory size, which is achieved by adjusting their respective thresholds (p_0, g_0 and rank_0). The results are reported in Fig. 6 , in which the x axis represents the size of the contextual memory, the y axis denotes the BLEU score, and the numbers in the brackets, from left to right, are the respective thresholds of the gap, absolute and ranking criteria. As shown in Fig. 6 , all three criteria improve the translation quality. When the memory size is relatively small, the absolute criterion performs best. As the size increases, the gap criterion achieves a higher performance than the others. Note that our current criteria each consider only a single factor. A combination of different criteria may be more beneficial, and we leave this as future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 379,
"end": 385,
"text": "Fig. 6",
"ref_id": null
},
{
"start": 603,
"end": 609,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effects of Different Exception Criteria",
"sec_num": "5.4"
},
{
"text": "We further analyze our method on specific troublesome words, namely low-frequency words and ambiguous words. Here, we use the following definitions in our analysis. Low-frequency words: words whose frequency is lower than 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Low-Frequency Words and Ambiguous Words",
"sec_num": "5.5"
},
{
"text": "Ambiguous words: assume a source word s_m has K candidate translations with probabilities p^L_k. If the entropy of this probability distribution, \u2212\u2211_{k=1}^{K} p^L_k log p^L_k, exceeds E_0 (E_0 = 1.5 in this paper), we treat this word as an ambiguous word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Low-Frequency Words and Ambiguous Words",
"sec_num": "5.5"
},
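The ambiguous-word criterion above can be sketched in a few lines. This is a minimal sketch under the assumption of a natural logarithm (the paper does not state the log base); the function and argument names are illustrative, not from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy of a translation probability distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def is_ambiguous(translation_probs, e0=1.5):
    """A source word is treated as ambiguous when the entropy of its
    candidate-translation distribution exceeds the threshold E_0."""
    return entropy(translation_probs) > e0

# A word with one dominant translation is unambiguous:
is_ambiguous([0.9, 0.05, 0.05])          # entropy ~ 0.39 -> False
# Probability mass spread over five equally likely candidates:
is_ambiguous([0.2, 0.2, 0.2, 0.2, 0.2])  # entropy = ln 5 ~ 1.61 -> True
```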
{
"text": "Therefore, the sentences containing troublesome words can be divided into four different parts: 1) sentences which contain both lowfrequency and ambiguous words (Low+Amb,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Low-Frequency Words and Ambiguous Words",
"sec_num": "5.5"
},
{
"text": "[Table 5: The numbers of rectification (Rectify) and deterioration (Deterio) caused by different models. Arthur(test): Rectify 51, Deterio 17; Baseline+MEM: Rectify 70, Deterio 11.] 986 sentences), 2) sentences which contain low-frequency words but no ambiguous words (Low, 1427 sentences), 3) sentences which contain ambiguous words but no low-frequency words (Amb, 1301 sentences), and 4) other sentences (Others, 832 sentences). The results are reported in Table 3 . From this table, we observe that our proposed method improves the translation quality on all kinds of sentences. Low+Amb benefits most (Low is second), indicating that our method is most effective in dealing with low-frequency words. The improvement on Amb is 0.81 BLEU points, showing that our method can also handle ambiguous words well. We also conduct a manual analysis to figure out how many troublesome words can be rectified by our method. We randomly select 200 testing sentences and count the following four numbers: 1) the number of troublesome words in the sentences (Tword), 2) the number of mistakes produced by Baseline (Error), 3) the number (and ratio) of rectifications by our method (Rectify), and 4) the number of deteriorations caused by our method (Deterio). The statistics are reported in Table 4 . From the results, we draw a similar conclusion: our method is most effective on low-frequency and ambiguous words, with rectification rates of 50.8% and 41.7% respectively.",
"cite_spans": [
{
"start": 16,
"end": 28,
"text": "Arthur(test)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 5",
"ref_id": null
},
{
"start": 446,
"end": 453,
"text": "Table 3",
"ref_id": "TABREF7"
},
{
"start": 1268,
"end": 1275,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We notice that the proposed method also produces 11 deterioration cases (Deterio) when rectifying the troublesome words. As a comparison, we also count the total rectification and deterioration numbers of Arthur(test). The results are reported in Table 5 . These results show that our method rectifies more words (70 vs. 51) with fewer deteriorations (11 vs. 17) than Arthur(test).",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We also test our method on EN-DE translation and the results are reported in Table 6 . We can see that our method is still effective on EN-DE translation. Specifically, when the translation unit is the word, the proposed method improves the baseline by 1.13 BLEU points. The improvement is 0.76 BLEU points when the translation unit is the sub-word. [Table 6 caption: The results on EN-DE translation. \"*\" indicates that the system is statistically significantly better (p < 0.05) than system X and \" \u2020\" indicates p < 0.01.]",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 6",
"ref_id": null
},
{
"start": 342,
"end": 349,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on EN-DE Translation",
"sec_num": "6"
},
{
"text": "The related work can be divided into three categories and we describe each of them as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Neural Turing Machine for NMT. Our idea is inspired by the Neural Turing Machine (NTM) (Graves et al., 2014, 2016) and the memory network (Weston et al., 2014). Wang et al. (2017a) used a special NTM memory to extend the decoder in attention-based NMT. In their method, the memory is used to provide temporary information from the source to assist the decoding process. In contrast, our work uses the memory to store contextual knowledge from the training data.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Graves et al., 2014",
"ref_id": "BIBREF8"
},
{
"start": 114,
"end": 136,
"text": "(Graves et al., , 2016",
"ref_id": "BIBREF9"
},
{
"start": 156,
"end": 177,
"text": "(Weston et al., 2014)",
"ref_id": null
},
{
"start": 180,
"end": 200,
"text": "(Wang et al., 2017a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Smaller translation granularity. Our work is also inspired by other studies dealing with low-frequency and ambiguous words (Vickrey et al., 2005; Zhai et al., 2013; Rios et al., 2017; Carpuat and Wu, 2007). Among them, the most relevant is the work that decomposes low-frequency words into smaller granularities, e.g., the hybrid word-character model (Luong and Manning, 2016), the sub-word model (Sennrich et al., 2016) or the word piece model. These methods mainly focus on low-frequency words, which are just a subset of the troublesome words. Furthermore, our experimental results show that even with a smaller translation unit, the NMT model still faces the problem of troublesome tokens, and our method can alleviate this problem.",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Vickrey et al., 2005;",
"ref_id": "BIBREF24"
},
{
"start": 154,
"end": 172,
"text": "Zhai et al., 2013;",
"ref_id": "BIBREF31"
},
{
"start": 173,
"end": 191,
"text": "Rios et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 192,
"end": 213,
"text": "Carpuat and Wu, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 401,
"end": 424,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "Combining SMT and NMT. Our ideas are also inspired by work that combines SMT and NMT. Earlier studies were mostly based on the SMT framework and are discussed in depth in the review paper of Zhang and Zong (2015). Later, researchers turned to the NMT framework, e.g. (Wang et al., 2017b; Zhou et al., 2017; Tu et al., 2016; Mi et al., 2016; He et al., 2016; Dahlmann et al., 2017; Wang et al., 2017c,d; Gu et al., 2018; Zhao et al., 2018). The most relevant studies are Arthur et al. (2016) and Feng et al. (2017), which incorporate lexicon pairs into NMT to improve translation quality. There are three differences between our method and theirs. First, we only utilize the lexicon pairs for the troublesome words, rather than all lexicon pairs. Second, we take contextual information into consideration in the memory construction. Third, we design a dynamic strategy to balance the memory and NMT. The experiments show the superiority of our proposed method.",
"cite_spans": [
{
"start": 201,
"end": 222,
"text": "Zhang and Zong (2015)",
"ref_id": "BIBREF32"
},
{
"start": 280,
"end": 300,
"text": "(Wang et al., 2017b;",
"ref_id": "BIBREF26"
},
{
"start": 301,
"end": 319,
"text": "Zhou et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 320,
"end": 336,
"text": "Tu et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 337,
"end": 353,
"text": "Mi et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 354,
"end": 370,
"text": "He et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 371,
"end": 393,
"text": "Dahlmann et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 394,
"end": 415,
"text": "Wang et al., 2017c,d;",
"ref_id": null
},
{
"start": 416,
"end": 432,
"text": "Gu et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 433,
"end": 451,
"text": "Zhao et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 484,
"end": 504,
"text": "Arthur et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 509,
"end": 527,
"text": "Feng et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "To address troublesome words in NMT, we have proposed a novel memory-enhanced framework. We first define and detect the troublesome words, then construct a contextual memory to store the translation knowledge and finally access the contextual memory dynamically to correctly translate the troublesome words. The extensive experiments on Chinese-to-English and English-to-German translation tasks demonstrate that our method significantly outperforms the strong baseline models in translation quality, especially in handling the troublesome words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "In this work, we consider a source word ambiguous if it has multiple candidate translations and the entropy of its translation probability distribution is high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The word alignments A are extracted using the fast-align tool (Dyer et al., 2013) on the bilingual training data in both source-to-target and target-to-source directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our preliminary experiments, we also tried cosine distance to measure this similarity, but its performance was lower than that of the current feed-forward network method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/isi-nlp/Zoph_RNN. We extend this toolkit with global attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The research work described in this paper has been supported by the National Key ",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 80,
"text": "Key",
"ref_id": null
}
],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Incorporating discrete translation lexicons into neural machine translation",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP 2016",
"volume": "",
"issue": "",
"pages": "1557--1567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of EMNLP 2016, pages 1557-1567.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improving statistical machine translation using word sense disambiguation",
"authors": [
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "proceedings of EMNLP 2007",
"volume": "",
"issue": "",
"pages": "61--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marine Carpuat and Dekai Wu. 2007. Improving sta- tistical machine translation using word sense disam- biguation. In In proceedings of EMNLP 2007, pages 61-72.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer Caglar Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP 2014",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724-1734.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation leveraging phrase-based models in a hybrid search",
"authors": [
{
"first": "Leonard",
"middle": [],
"last": "Dahlmann",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Petrushkov",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
}
],
"year": 2017,
"venue": "proceedings of EMNLP 2017",
"volume": "",
"issue": "",
"pages": "1422--1431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonard Dahlmann, Evgeny Matusov, Pavel Petrushkov, and Shahram Khadivi. 2017. Neu- ral machine translation leveraging phrase-based models in a hybrid search. In proceedings of EMNLP 2017, pages 1422-1431.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A simple, fast, and effective reparameterization of ibm model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameteriza- tion of ibm model 2. In In proceedings of NAACL 2013.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Memory-augmented neural machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Shiyue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Andi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Abel",
"suffix": ""
}
],
"year": 2017,
"venue": "proceedings of EMNLP 2017",
"volume": "",
"issue": "",
"pages": "1401--1410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Feng, Shiyue Zhang, Andi Zhang, Dong Wang, and Andrew Abel. 2017. Memory-augmented neu- ral machine translation. In proceedings of EMNLP 2017, pages 1401-1410.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.03317"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, De- nis Yarats, and Yann N. Dauphin. 2017. Convolu- tional sequence to sequence learning. arXiv preprint arXiv:1601.03317.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.5401"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hybrid computing using a neural network with dynamic external memory",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Malcolm",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Harley",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Grabska-Barwi\u0144ska",
"suffix": ""
},
{
"first": "Sergio",
"middle": [
"G\u00f3mez"
],
"last": "Colmenarejo",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Ramalho",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Agapiou",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "538",
"issue": "7626",
"pages": "471--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwi\u0144ska, Sergio G\u00f3mez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural net- work with dynamic external memory. Nature, 538(7626):471-476.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Search engine guided nonparametric neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Kyunghyun Cho, and Vic- tor O. K. Li. 2018. Search engine guided non- parametric neural machine translation. In proceed- ings of AAAI 2018.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improved neural machine translation with smt features",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei He, Zhongjun He, Hua Wu, and Haifeng Wang. 2016. Improved neural machine translation with smt features. In Proceedings of AAAI 2016.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On using very large target vocabulary for neural machine translation",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL 2015",
"volume": "",
"issue": "",
"pages": "124--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large tar- get vocabulary for neural machine translation. In Proceedings of ACL 2015, pages 124-129.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Is neural machine translation ready for deployment? a case study on 30 translation directions",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is neural machine translation ready for deployment? a case study on 30 translation di- rections. arXiv preprint arXiv:1610.01108.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards zero unknown word in neural machine translation",
"authors": [
{
"first": "Xiaoqing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of IJCAI 2016",
"volume": "",
"issue": "",
"pages": "2852--2858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Towards zero unknown word in neural machine translation. In Proceedings of IJCAI 2016, pages 2852-2858.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Achieving open vocabulary neural machine translation with hybrid word-character models",
"authors": [
{
"first": "Minh",
"middle": [
"Thang"
],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "proceedings of EMNLP 2016",
"volume": "",
"issue": "",
"pages": "1054--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In In proceedings of EMNLP 2016, pages 1054-1063.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP 2015",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of EMNLP 2015, pages 1412-1421.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Coverage embedding model for neural machine translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP 2016",
"volume": "",
"issue": "",
"pages": "955--960",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Baskaran Sankaran, Zhiguo Wang, and Abe Ittycheriah. 2016. Coverage embedding model for neural machine translation. In Proceedings of EMNLP 2016, pages 955-960.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL 2002",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of ACL 2002, pages 311-318.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving word sense disambiguation in neural machine translation with sense embeddings",
"authors": [
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Mascarell",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2017,
"venue": "In proceedings of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annette Rios, Laura Mascarell, and Rico Sennrich. 2017. Improving word sense disambiguation in neu- ral machine translation with sense embeddings. In In proceedings of WMT 2017.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL 2016",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016, pages 1715-1725.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Coverage-based neural machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL 2016",
"volume": "",
"issue": "",
"pages": "76--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Coverage-based neural machine translation. In Proceedings of ACL 2016, pages 76- 85.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.03317"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, and ukasz Kaiser. 2017. Attention is all you need. arXiv preprint arXiv:1601.03317.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Word-sense disambiguation for machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vickrey",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Biewald",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Teyssier",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2005,
"venue": "proceedings of ACL 2005",
"volume": "",
"issue": "",
"pages": "771--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Vickrey, Luke Biewald, Marc Teyssier, and Daphne Koller. 2005. Word-sense disambiguation for machine translation. In In proceedings of ACL 2005, pages 771-778.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Memory-enhanced decoder for neural machine translation",
"authors": [
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "proceedings of EMNLP 2017",
"volume": "",
"issue": "",
"pages": "278--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2017a. Memory-enhanced decoder for neu- ral machine translation. In proceedings of EMNLP 2017, pages 278-286.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Neural machine translation advised by statistical machine translation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017b. Neural machine translation advised by statistical machine translation. In proceedings of AAAI 2017.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Translating phrases in neural machine translation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "proceedings of EMNLP 2017",
"volume": "",
"issue": "",
"pages": "1432--1442",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wang, Zhaopeng Tu, Deyi Xiong, and Min Zhang. 2017c. Translating phrases in neural ma- chine translation. In proceedings of EMNLP 2017, pages 1432-1442.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Towards neural machine translation with partially aligned corpora",
"authors": [
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Zhengshan",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yining Wang, Yang Zhao, Jiajun Zhang, Chengqing Zong, and Zhengshan Xue. 2017d. Towards neural machine translation with partially aligned corpora. In proceedings of IJCNLP 2017.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Handling ambiguities of bilingual predicate-argument structures for statistical machine translation",
"authors": [
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2013,
"venue": "proceedings of ACL 2013",
"volume": "",
"issue": "",
"pages": "1127--1136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feifei Zhai, Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2013. Handling ambiguities of bilingual predicate-argument structures for statistical machine translation. In In proceedings of ACL 2013, pages 1127-1136.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep neural networks in machine translation: An overview",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Intelligent Systems",
"volume": "30",
"issue": "5",
"pages": "16--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2015. Deep neu- ral networks in machine translation: An overview. IEEE Intelligent Systems, 30(5):16-25.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploiting source-side monolingual data in neural machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP 2016",
"volume": "",
"issue": "",
"pages": "1535--1545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Proceedings of EMNLP 2016, pages 1535-1545.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Phrase table as recommendation memory for neural machine translation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2018,
"venue": "proceedings of IJCAI 2018",
"volume": "",
"issue": "",
"pages": "4609--4615",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Zhao, Yining Wang, Jiajun Zhang, and Chengqing Zong. 2018. Phrase table as recommen- dation memory for neural machine translation. In proceedings of IJCAI 2018, pages 4609-4615.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural system combination for machine translation",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017",
"volume": "",
"issue": "",
"pages": "378--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Zhou, Wenpeng Hu, Jiajun Zhang, and Chengqing Zong. 2017. Neural system combina- tion for machine translation. In Proceedings of ACL 2017, pages 378-384.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Multi-source neural translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL 2016",
"volume": "",
"issue": "",
"pages": "30--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proceedings of NAACL 2016, pages 30-34.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "The main process to detect an exception.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "An example to show that considering context could produce a better translation result. In the example, Arthur(test) translates chengzhang into a wrong target word growth, while Baseline+MEM could overcome this mistake with the help of the context modeling.",
"type_str": "figure"
},
"TABREF2": {
"content": "<table><tr><td># Model</td><td>03</td><td>04</td><td>05</td><td>06</td><td>08</td><td>Avg.</td><td/></tr><tr><td>1 Baseline</td><td>41.01</td><td>42.94</td><td>40.31</td><td>40.57</td><td>30.96</td><td>39.16</td><td>-</td></tr><tr><td>2 Arthur(test)</td><td>41.34</td><td>43.31</td><td>40.79</td><td>40.84</td><td>31.11</td><td>39.48</td><td>-</td></tr><tr><td>3 Arthur(train+test)</td><td>41.88</td><td>43.75</td><td>41.16</td><td>41.63</td><td>31.47</td><td>39.98</td><td>-</td></tr><tr><td>4 Baseline(sub-word)</td><td>43.93</td><td>44.74</td><td>42.46</td><td>43.01</td><td>32.53</td><td>41.33</td><td>-</td></tr><tr><td>5 Baseline+MEM</td><td>42.74</td><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"text": "\u2020 43.94 \u2020 42.15 \u2020 41.94 \u2020 31.86 \u2020 40.53 +1.37 6 Arthur(train+test)+MEM 43.04 \u2020 44.65 * 42.19 \u2020 42.59 \u2020 32.05 * 40.90 +0.92 7 Baseline(sub-word)+MEM 44.98 \u2020 45.51 \u2020 43.93 \u2020 43.95 \u2020 33.33 \u2020 42.34 +1.01",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "The main results of CH-EN translation.",
"html": null
},
"TABREF5": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>Type</td><td colspan=\"2\">Tword Error</td><td>Rectify</td><td>Deterio</td></tr><tr><td>Total</td><td>374</td><td>153</td><td>70 (45.8%)</td><td>11</td></tr><tr><td>Low+Amb</td><td>77</td><td>30</td><td>16 (53.3%)</td><td>3</td></tr><tr><td>Low</td><td>144</td><td>59</td><td>30 (50.8%)</td><td>4</td></tr><tr><td>Amb</td><td>117</td><td>48</td><td>20 (41.7%)</td><td>4</td></tr><tr><td>Others</td><td>36</td><td>16</td><td>4 (25%)</td><td>0</td></tr></table>",
"num": null,
"type_str": "table",
"text": "The BLEU score on different kinds of sentences. Low denotes low frequency words, Amb denotes ambiguous words. Low+Amb denotes the low frequency and ambiguous words.",
"html": null
},
"TABREF8": {
"content": "<table><tr><td>: The manual analysis on the word level. Col-</td></tr><tr><td>umn Tword shows the number of troublesome words</td></tr><tr><td>in sentence. Column Error shows the number of er-</td></tr><tr><td>rors made by Baseline when translates the troublesome</td></tr><tr><td>words. The number (ratio) of rectification caused by</td></tr><tr><td>our method is reported in column Rectify. Column</td></tr><tr><td>Deterio shows the number of deterioration (the orig-</td></tr><tr><td>inal translation is correct, while our method produces</td></tr><tr><td>the incorrect translation) caused by our method.</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF9": {
"content": "<table><tr><td>Model</td><td>Unit</td><td colspan=\"2\">EN-DE dev test</td></tr><tr><td>Baseline</td><td>word</td><td>20.28</td><td>21.04</td></tr><tr><td>Baseline+MEM</td><td>word</td><td colspan=\"2\">21.34 \u2020 22.17 \u2020</td></tr><tr><td>Baseline</td><td>sub-word</td><td>22.10</td><td>22.85</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Baseline+MEM sub-word 23.05 \u2020 23.61 *",
"html": null
}
}
}
}