{ "paper_id": "P16-1014", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:55:47.243286Z" }, "title": "Pointing the Unknown Words", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ibm", "middle": [ "T J" ], "last": "Watson Research", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The problem of rare and unknown words is an important issue that can potentially effect the performance of many NLP systems, including traditional count-based and deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each timestep, the decision of which softmax layer to use is adaptively made by an MLP which is conditioned on the context. We motivate this work from a psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. 
Using our proposed model, we observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset.", "pdf_parse": { "paper_id": "P16-1014", "_pdf_hash": "", "abstract": [ { "text": "The problem of rare and unknown words is an important issue that can potentially effect the performance of many NLP systems, including traditional count-based and deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each timestep, the decision of which softmax layer to use is adaptively made by an MLP which is conditioned on the context. We motivate this work from a psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. Using our proposed model, we observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Words are the basic input/output units in most of the NLP systems, and thus the ability to cover a large number of words is a key to building a robust NLP system. 
However, considering that (i) the number of words in a language, including named entities, is very large, and (ii) language itself is an evolving system in which people constantly create new words, this is a challenging problem.

A common approach in recent neural-network-based NLP systems is to use a softmax output layer in which each output dimension corresponds to a word in a predefined shortlist. Because computing a high-dimensional softmax is expensive, in practice the shortlist is limited to the top-K most frequent words in the training corpus. All other words are replaced by a special token, the unknown word (UNK).

The shortlist approach has two fundamental problems. The first, known as the rare word problem, is that some words in the shortlist occur so infrequently in the training set that it is difficult to learn good representations for them, which results in poor performance. The second is that we lose important information by mapping different words to the single dummy token UNK: even a very large shortlist that contains every unique word in the training set does not necessarily improve test performance, because unknown words can still appear at test time. This is known as the unknown word problem.
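As a concrete illustration of the shortlist approach described above, the following sketch builds a top-K shortlist and maps out-of-shortlist tokens to UNK. The corpus, K value, and helper names are hypothetical, chosen only for illustration.

```python
from collections import Counter

def build_shortlist(corpus_tokens, k):
    """Keep the K most frequent words; everything else will become UNK."""
    counts = Counter(corpus_tokens)
    shortlist = [w for w, _ in counts.most_common(k)]
    return {w: i for i, w in enumerate(shortlist)}

def encode(tokens, word2id, unk_id=None):
    """Map each token to its shortlist id, or to the UNK id if absent."""
    if unk_id is None:
        unk_id = len(word2id)  # reserve one extra id for UNK
    return [word2id.get(t, unk_id) for t in tokens]

corpus = "the cat sat on the mat the cat ran".split()
word2id = build_shortlist(corpus, k=3)  # keeps the 3 most frequent words
ids = encode("the dog sat".split(), word2id)  # "dog" falls outside the shortlist
```

Note that no matter how large K is chosen, any token absent from both the shortlist and the training corpus still collapses to the single UNK id, which is exactly the information loss discussed above.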
In addition, increasing the shortlist size mostly adds rare words, a consequence of Zipf's law.

These two problems are particularly critical in language understanding tasks such as factoid question answering (Bordes et al., 2015), where the words of interest are often named entities, which are usually unknown or rare.

In a similar situation, where we have only limited information about what an object of interest is called, humans (and some other primates) have an efficient behavioral mechanism for drawing attention to the object: pointing (Matthews et al., 2012). Pointing makes it possible to deliver information about, and associate context with, a particular object without knowing its name. In particular, human infants use pointing as a fundamental communication tool (Tomasello et al., 2007).

In this paper, inspired by the pointing behavior of humans and by recent advances in attention mechanisms (Bahdanau et al., 2014) and pointer networks (Vinyals et al., 2015), we propose a novel method to deal with the rare and unknown word problem. The basic idea is that many NLP problems can be cast as predicting target text given context text, where some of the target words also appear in the context. In such cases, the model can learn to point to a word in the context and copy it to the target, and also to decide when to point.
For example, in machine translation, the source sentence is the context and the target sentence is what we need to predict. Figure 1 depicts how words can be copied from source to target in machine translation. Although the source and target languages are different, many words, such as named entities, are represented by the same characters in both languages, making it possible to copy them. Similarly, in text summarization, it is natural to reuse some words of the original text in the summary.

Specifically, to predict a target word at each timestep, our model first determines the source of the word generation, that is, whether to take a word from the predefined shortlist or to copy one from the context. For the former, we apply the usual softmax operation; for the latter, we use the attention mechanism to obtain a pointing softmax distribution over the context words and pick one of high probability. The model learns to make this decision so that it points only when the context contains a word that can be copied to the target. This way, our model can predict even words that are not in the shortlist, as long as they appear in the context. Although some words must still be labeled UNK, namely those that are neither in the shortlist nor in the context, our experiments show that learning when and where to point improves performance on machine translation and text summarization.
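The generate-or-copy mechanism described above can be sketched numerically as follows. This is a simplified illustration, not the trained model: the switching probability and the shortlist and attention scores are stand-in values that would normally come from the switching MLP, the decoder state, and the attention mechanism.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def pointer_softmax_step(shortlist_scores, location_scores, switch_prob):
    """Combine a shortlist softmax and a pointing softmax into one
    distribution over (shortlist words + source positions).

    switch_prob is the probability of generating from the shortlist;
    (1 - switch_prob) is the probability of pointing into the source."""
    p_short = switch_prob * softmax(shortlist_scores)
    p_point = (1.0 - switch_prob) * softmax(location_scores)
    return np.concatenate([p_short, p_point])  # a valid distribution

# Stand-in scores: 5 shortlist words, 4 source positions.
rng = np.random.default_rng(0)
p = pointer_softmax_step(rng.normal(size=5), rng.normal(size=4), switch_prob=0.3)
best = int(np.argmax(p))
# Indices >= 5 mean "copy the word at source position best - 5".
```

Because both branches are scaled by complementary probabilities, the concatenated vector sums to one, so the model can be trained with an ordinary cross-entropy loss over the joint output space.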
Figure 1: An example of how copying can happen in machine translation, for the English source "Guillaume and Cesar have a blue car in Lausanne." and its French translation. Common words that appear in both the source and the target can be copied directly from the source to the target. The remaining unknown words in the target can be copied from the input after being translated with a dictionary.

The rest of the paper is organized as follows. In the next section, we review related work, including pointer networks and previous approaches to the rare and unknown word problem. In Section 3, we review neural machine translation with the attention mechanism, which is the baseline in our experiments. In Section 4, we propose our method for dealing with the rare and unknown word problem, called the pointer softmax (PS). Experimental results are provided in Section 5, and we conclude in Section 6.

2 Related Work

The attention-based pointing mechanism was first introduced in pointer networks (Vinyals et al., 2015). In pointer networks, the output space of the target sequence is constrained to the observations in the input sequence (not the input space). Instead of a fixed-dimension softmax output layer, a softmax output of varying dimension is computed dynamically for each input sequence, in such a way as to maximize the attention probability of the target input. However, its applicability is rather limited because, unlike our model, there is no option of whether or not to point: it always points.
In this sense, pointer networks can be seen as a special case of our model in which we always choose to point to a context word.

Several approaches have been proposed for the rare and unknown word problem; they can be broadly divided into three categories. The first category focuses on improving the computation speed of the softmax output so that a very large vocabulary can be maintained. Because this only increases the shortlist size, it helps mitigate the unknown word problem but still suffers from the rare word problem. The hierarchical softmax (Morin and Bengio, 2005), importance sampling (Bengio and Senécal, 2008; Jean et al., 2014), and noise-contrastive estimation (Gutmann and Hyvärinen, 2012; Mnih and Kavukcuoglu, 2013) belong to this class.

The second category, to which our proposed method also belongs, uses information from the context. Notable works are (Luong et al., 2015) and (Hermann et al., 2015). Applied to machine translation, (Luong et al., 2015) learns to point to words in the source sentence and copy them to the target sentence, similarly to our method. However, it does not use the attention mechanism, and because it has a fixed-size softmax output over a relative pointing range (e.g., -7, ..., -1, 0, 1, ..., 7), their model (the Positional All model) is difficult to apply to more general problems such as summarization and question answering, where, unlike machine translation, the length of the context and the pointing locations within it can vary dramatically. In the question answering setting, (Hermann et al., 2015) used placeholders for named entities in the context. However, the placeholder id is predicted directly in the softmax output rather than by predicting its location in the context.

The third category changes the input/output unit itself from words to a smaller resolution, such as characters (Graves, 2013) or bytecodes (Sennrich et al., 2015; Gillick et al., 2015).
Although this approach has the advantage of suffering less from the rare and unknown word problem, training usually becomes much harder because the length of the sequences increases significantly.

Simultaneously with our work, (Gu et al., 2016) and (Cheng and Lapata, 2016) proposed models that learn to copy from source to target; both papers evaluated their models on summarization tasks.

3 Neural Machine Translation Model with Attention

As the baseline neural machine translation system, we use the model proposed by (Bahdanau et al., 2014), which learns to (soft-)align and translate jointly. We refer to this model as NMT. The encoder of the NMT is a bidirectional RNN (Schuster and Paliwal, 1997). The forward RNN reads the input sequence $x = (x_1, \ldots, x_T)$ from left to right, producing a sequence of hidden states $(\overrightarrow{h}_1, \ldots, \overrightarrow{h}_T)$. The backward RNN reads $x$ in the reverse direction and outputs $(\overleftarrow{h}_1, \ldots, \overleftarrow{h}_T)$. We then concatenate the hidden states of the forward and backward RNNs at each time step to obtain a sequence of annotation vectors $(h_1, \ldots, h_T)$, where $h_j = [\overrightarrow{h}_j \,\|\, \overleftarrow{h}_j]$ and $\|$ denotes the concatenation operator. Each annotation vector $h_j$ thus encodes information about the $j$-th word with respect to all the surrounding words in both directions.

In the decoder, we use a gated recurrent unit (GRU) (Chung et al., 2014). Specifically, at each timestep $t$, the soft-alignment mechanism first computes a relevance weight $e_{tj}$, which determines the contribution of annotation vector $h_j$ to the $t$-th target word.
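The construction of the annotation vectors can be sketched as below. For brevity, this sketch uses a plain tanh RNN cell with randomly initialized weights rather than the gated cells an actual NMT encoder would use; all dimensions and values are illustrative only.

```python
import numpy as np

def rnn_states(xs, W, U, b):
    """Run a simple tanh RNN over a sequence of input vectors,
    returning the hidden state at every step."""
    h = np.zeros(U.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return states

rng = np.random.default_rng(1)
d_in, d_h, T = 3, 4, 5
xs = [rng.normal(size=d_in) for _ in range(T)]
W = rng.normal(size=(d_h, d_in))
U = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)

fwd = rnn_states(xs, W, U, b)              # reads x_1 .. x_T
bwd = rnn_states(xs[::-1], W, U, b)[::-1]  # reads x_T .. x_1, then realign
# Each annotation h_j concatenates both directions around word j.
annotations = [np.concatenate([f, bk]) for f, bk in zip(fwd, bwd)]
```

The key detail is the final reversal of the backward pass, which aligns $\overleftarrow{h}_j$ with $\overrightarrow{h}_j$ so each concatenated $h_j$ describes position $j$ from both directions.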
We use a nonlinear mapping $f$ (e.g., an MLP) that takes $h_j$, the decoder's previous hidden state $s_{t-1}$, and the previous output $y_{t-1}$ as input:

$$e_{tj} = f(s_{t-1}, h_j, y_{t-1}).$$

The outputs $e_{tj}$ are then normalized as follows:

$$l_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T} \exp(e_{tk})}. \qquad (1)$$

We call $l_{tj}$ the relevance score, or alignment weight, of the $j$-th annotation vector.

The relevance scores are used to compute the context vector $c_t$ of the $t$-th target word in the translation:

$$c_t = \sum_{j=1}^{T} l_{tj} h_j.$$

The hidden state of the decoder $s_t$ is computed from the previous hidden state $s_{t-1}$, the context vector $c_t$, and the output word of the previous timestep $y_{t-1}$:

$$s_t = f_r(s_{t-1}, y_{t-1}, c_t), \qquad (2)$$

where $f_r$ is a GRU. We use a deep output layer (Pascanu et al., 2013) to compute the conditional distribution over words, $p(y_t = a \mid y_{<t}, x)$.