{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:09:11.874698Z"
},
"title": "Dialogue over Context and Structured Knowledge using a Neural Network Model with External Memories",
"authors": [
{
"first": "Yuri",
"middle": [
"Murayama"
],
"last": "Lis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ochanomizu University",
"location": {
"addrLine": "2-1-1 Otsuka, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Kanashiro",
"middle": [],
"last": "Pereira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ochanomizu University",
"location": {
"addrLine": "2-1-1 Otsuka, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "kanashiro.pereira@ocha.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Differentiable Neural Computer (DNC), a neural network model with an addressable external memory, can solve algorithmic and question answering tasks. There are various improved versions of the DNC, such as rsDNC and DNC-DMS. However, how to integrate structured knowledge into these DNC models remains a challenging research question. We incorporate an architecture for knowledge into such DNC models, i.e. DNC, rsDNC and DNC-DMS, to improve the ability to generate correct responses using both contextual information and structured knowledge. Our improved rsDNC model improves the mean accuracy by approximately 20% over the original rsDNC on tasks requiring knowledge in the dialog bAbI tasks. In addition, our improved rsDNC and DNC-DMS models also yield better performance than their original models on the Movie Dialog dataset.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The Differentiable Neural Computer (DNC), a neural network model with an addressable external memory, can solve algorithmic and question answering tasks. There are various improved versions of the DNC, such as rsDNC and DNC-DMS. However, how to integrate structured knowledge into these DNC models remains a challenging research question. We incorporate an architecture for knowledge into such DNC models, i.e. DNC, rsDNC and DNC-DMS, to improve the ability to generate correct responses using both contextual information and structured knowledge. Our improved rsDNC model improves the mean accuracy by approximately 20% over the original rsDNC on tasks requiring knowledge in the dialog bAbI tasks. In addition, our improved rsDNC and DNC-DMS models also yield better performance than their original models on the Movie Dialog dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently, deep neural networks have made significant progress in complex pattern matching for various tasks such as computer vision and natural language processing. However, these models are limited in their ability to represent data structures such as graphs and trees, to use variables, and to handle representations over long sequences. Recurrent neural networks (RNNs) can capture the long-range dependencies of sequential data and are known to be Turing-complete (Siegelmann and Sontag, 1995), and are therefore capable of simulating arbitrary procedures, if properly wired. However, RNNs struggle with the vanishing gradient problem (Bengio et al., 1994). The long short-term memory (LSTM) architecture addressed this problem by introducing gating mechanisms into RNN architectures and calculating the gradients by element-wise multiplication with the gate value at every time-step (Hochreiter and Schmidhuber, 1997). LSTMs became quite successful and, combined with the sequence-to-sequence model (Sutskever et al., 2014) and attention mechanisms (Bahdanau et al., 2014; Luong et al., 2015), helped to outperform traditional models. Yet LSTM-based models have not fully addressed the limitations of deep neural networks mentioned above.",
"cite_spans": [
{
"start": 469,
"end": 498,
"text": "(Siegelmann and Sontag, 1995)",
"ref_id": "BIBREF20"
},
{
"start": 641,
"end": 662,
"text": "(Bengio et al., 1994)",
"ref_id": "BIBREF2"
},
{
"start": 886,
"end": 920,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 1039,
"end": 1063,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 1089,
"end": 1112,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 1113,
"end": 1131,
"text": "Luong et al., 2015",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, in the von Neumann architecture, programs are run by three fundamental mechanisms: a processing unit that performs simple arithmetic and logic operations, a control unit that handles control-flow instructions such as sequential execution, conditional branches and loops, and a memory unit to which data and instructions are written and from which they are read during computation (Neumann, 1945). This architecture can represent complex data structures and learn to perform algorithmic tasks. It separates computation by a processor (i.e. a processing unit and a control unit) from memory, whereas neural networks mix computation and memory in the network weights. To improve the performance of standard neural networks, Graves et al. (2014) proposed a neural network model with an addressable external memory called the Neural Turing Machine (NTM). The whole architecture of the NTM is differentiable and can therefore be trained end-to-end, including how to access the memory. The memory access mechanism was later further improved, resulting in the Differentiable Neural Computer (DNC). It solved algorithmic tasks over structured data, such as traversal and shortest-path tasks on a route map and an inference task on a family tree. In an experiment on question answering with premises, input sequences were written to the memory and the information necessary to infer the answer was read from the memory, hence representing variables. The DNC was also able to learn long sequences through its dynamic memory access mechanism. There are various improved versions of the DNC, such as rsDNC (Franke et al., 2018) and DNC-DMS (Csord\u00e1s and Schmidhuber, 2019).",
"cite_spans": [
{
"start": 386,
"end": 401,
"text": "(Neumann, 1945)",
"ref_id": "BIBREF15"
},
{
"start": 728,
"end": 748,
"text": "Graves et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 1560,
"end": 1581,
"text": "(Franke et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 1594,
"end": 1625,
"text": "(Csord\u00e1s and Schmidhuber, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, how to integrate structured knowledge into these DNC models remains a challenging research question. This paper investigates how to incorporate structured knowledge into DNC models. We extend single-memory DNC models to a multiple-memory architecture that leverages both contextual and structured knowledge information. We add an extra memory unit that encodes knowledge from knowledge bases. In contrast with RNNs, the memory augmentation of the DNC allows explicit storage and manipulation of complex data structures over a long time-scale (Franke et al., 2018). Our main contributions are as follows:",
"cite_spans": [
{
"start": 555,
"end": 576,
"text": "(Franke et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We incorporate a knowledge memory architecture into DNC models, i.e. DNC, rsDNC and DNC-DMS, to improve the ability to generate correct responses using both contextual information and structured knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our improved rsDNC model improves the mean accuracy by approximately 20% over the original rsDNC on tasks requiring knowledge in the dialog bAbI tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In addition, our improved rsDNC and DNC-DMS models also yield better performance than their original models on the Movie Dialog dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 briefly introduces the DNC, rsDNC and DNC-DMS models. We describe our proposed model in Section 3 and our experiments and detailed analysis in Section 4. Section 5 introduces related work. Finally, we conclude the paper and discuss future work in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Differentiable Neural Computer (DNC) is a neural network coupled to an external memory matrix M \u2208 R^{N \u00d7 W}, as shown in Figure 1. It uses attention mechanisms to define weightings over the N locations of the memory matrix M that determine the degree to which each location is read from or written to.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "For the read operation, the read vector r is computed as a weighted sum over the memory locations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "r = \u2211_{i=1}^{N} M[i, \u2022] w^r[i]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "where '\u2022' denotes all columns j = 1, ..., W, and w^r is the read weighting over the N locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "For the write operation, the memory M is modified by using a write weighting w w to first erase with an erase vector e, then add a write vector v:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "M [i, j] \u2190 M [i, j](1 \u2212 w w [i]e[j]) + w w [i]v[j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
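The read and write operations above can be sketched in NumPy (a minimal illustration, assuming a small memory with N = 4 locations and word size W = 3; the weightings here are hand-picked examples rather than learned quantities):

```python
import numpy as np

N, W = 4, 3                                      # memory locations, word size
M = np.arange(N * W, dtype=float).reshape(N, W)  # memory matrix M

# Read: r is a weighted sum of memory rows under a read weighting w_r
w_r = np.array([0.7, 0.2, 0.1, 0.0])     # example read weighting (sums to 1)
r = (w_r[:, None] * M).sum(axis=0)       # r = sum_i w_r[i] * M[i, :]

# Write: erase with e, then add the write vector v, under weighting w_w
w_w = np.array([0.0, 0.0, 0.0, 1.0])     # example write weighting
e = np.ones(W)                           # erase vector (erase fully)
v = np.array([0.5, 0.5, 0.5])            # write vector
M = M * (1 - np.outer(w_w, e)) + np.outer(w_w, v)
```

With w_w concentrated on location 3 and a full erase vector, row 3 of M is replaced by v while the other rows are untouched.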
{
"text": "The weightings are defined by the following three attention mechanisms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "\u2022 Content-based addressing: compares a key vector to the content of each location in memory and calculates similarity scores to define a read weighting for associative recall or a write weighting to modify a relevant vector in memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "\u2022 Temporal memory linkage: tracks consecutively written memory locations so that sequences can be read back in the order in which they were written.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "\u2022 Dynamic memory allocation: frees and allocates memory as needed for writing by maintaining a degree of usage for each location, which is increased with each write and decreased after each read; locations with a low degree of usage can be reallocated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
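Content-based addressing can be sketched as a softmax over cosine similarities (an illustrative snippet; in the actual DNC the key k and the sharpness scalar beta are produced by the controller's interface vector rather than chosen by hand):

```python
import numpy as np

def content_weighting(M, k, beta):
    """Softmax over the cosine similarity between key k and each
    memory row, sharpened by the scalar beta."""
    sim = M @ k / (np.linalg.norm(M, axis=1) * np.linalg.norm(k) + 1e-8)
    w = np.exp(beta * sim)
    return w / w.sum()

M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.7, 0.7]])
w = content_weighting(M, k=np.array([1.0, 0.0]), beta=10.0)
# The weighting concentrates on the row most similar to the key (row 0).
```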
{
"text": "The whole system is differentiable and can be learned with gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "Recently, variations of the DNC model have been proposed, such as the robust and scalable DNC (rsDNC) (Franke et al., 2018) and the DNC-DMS (Csord\u00e1s and Schmidhuber, 2019) .",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Franke et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 140,
"end": 171,
"text": "(Csord\u00e1s and Schmidhuber, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "Robust and Scalable DNC: Focusing on QA tasks, Franke et al. (2018) extended the DNC to be more robust and scalable (rsDNC), with the following four improvements: (1) using only the content-based memory unit to reduce memory consumption and training time, (2) applying layer normalization to lower the variance in performance between different runs, (3) using bypass dropout to make the memory unit's effect stronger, and (4) introducing a bidirectional architecture to encode input sequences in a more informative way. Csord\u00e1s and Schmidhuber (2019) tackled three problems of the vanilla DNC and proposed an improved model called DNC-DMS. First, the lack of key-value separation makes the content-based address distribution flat and prevents the model from accessing specific parts of the memory. By masking improper parts of both the lookup key and the memory content, the key-value separation can be controlled dynamically. Second, the memory de-allocation mechanism does not erase memory content, which is crucial to content-based addressing, resulting in memory aliasing. Thus, DNC-DMS erases de-allocated memory content completely. Lastly, chaining multiple reads with the temporal linkage matrix exponentially blurs the address distribution. Exponentiation and renormalization of the distribution reduce the effect of exponential blurring and improve the sharpness of the link distribution.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "Franke et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differentiable Neural Computer",
"sec_num": "2"
},
{
"text": "Although these methods lead to good improvements over the original DNC model, none of them addressed how to incorporate structured knowledge into the DNC explicitly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DNC-DMS : Csord\u00e1s and Schmidhuber",
"sec_num": null
},
{
"text": "We extend three models, DNC, rsDNC, and DNC-DMS, by adding an extra memory architecture to store structured knowledge. Our proposed model therefore consists of a control unit and two memory units. One memory unit stores contextual information from the dialogue; we call it the \"context memory\". The other memory unit stores knowledge information; we call it the \"knowledge memory\". The differences from the original DNC models are the introduction of the knowledge memory and the operations for accessing it. Figure 2 shows the overview of our proposed model based on DNC.",
"cite_spans": [],
"ref_spans": [
{
"start": 492,
"end": 500,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "The procedures at every time-step t are described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "Figure 2: Overview of our proposed model based on DNC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "1. The controller (RNN) receives an input vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "x_t, a set of R read vectors r^c_{t-1} = [r^{c,1}_{t-1}; ...; r^{c,R}_{t-1}] (r^c_{t-1} is a concatenation of r^{c,1}_{t-1}, ..., r^{c,R}_{t-1}) from the context memory matrix M^c_{t-1} \u2208 R^{N \u00d7 W}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "at the previous timestep and a set of R read vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "r^k_{t-1} = [r^{k,1}_{t-1}; ...; r^{k,R}_{t-1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "] from the knowledge memory matrix M^k_{t-1} \u2208 R^{N \u00d7 W} at the previous time-step. It then emits a hidden vector h_t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "2. By a linear transformation of h_t, we obtain an output vector \u03c5_t = W_y h_t, an interface vector \u03be_t = W_\u03be h_t that parameterizes the context memory interactions at the current time-step, and an interface vector \u03b6_t = W_\u03b6 h_t for the knowledge memory. The W terms denote learnable weight matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "3. The write operation to the context memory is performed using \u03be t and its state is updated. The write operation to the knowledge memory is not performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "4. Finally, the output vector y_t is calculated by adding \u03c5_t to the product of W^c_r with the concatenation r^c_t of the current read vectors from the context memory and the product of W^k_r with the concatenation r^k_t of the current read vectors from the knowledge memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "y_t = \u03c5_t + W^c_r r^c_t + W^k_r r^k_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
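The output combination in step 4 can be sketched as follows (the dimensions and random values are illustrative placeholders; in the model the W matrices are learned):

```python
import numpy as np

R, W, H, Y = 2, 4, 8, 5        # read heads, word size, hidden size, output size
rng = np.random.default_rng(0)

W_y  = rng.standard_normal((Y, H))      # projects the controller state
W_cr = rng.standard_normal((Y, R * W))  # projects context read vectors
W_kr = rng.standard_normal((Y, R * W))  # projects knowledge read vectors

h_t = rng.standard_normal(H)            # controller hidden state
r_c = rng.standard_normal(R * W)        # concatenated context reads r^c_t
r_k = rng.standard_normal(R * W)        # concatenated knowledge reads r^k_t

upsilon_t = W_y @ h_t                   # output vector from the controller
y_t = upsilon_t + W_cr @ r_c + W_kr @ r_k
```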
{
"text": "The read vectors r^c_t and r^k_t are appended to the controller input at the next time-step. The read and write operations on the two memories are performed by repeating the above procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "To build the knowledge memory unit used in our models, we first run the original DNC model on a knowledge base (KB). We then use the pretrained memory unit as the knowledge memory unit in our proposed models. The process to build this knowledge memory is described in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3"
},
{
"text": "We built a knowledge memory unit with a knowledge base (KB). Facts in the KB have a Resource Description Framework (RDF, https://www.w3.org/TR/rdf11-primer/) triple structure \"(subject, relation, object)\". For example, information such as \"Washington, D.C. is the capital of the U.S.\" is expressed as (the U.S., capital, Washington, D.C.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Memory Building",
"sec_num": null
},
{
"text": "We applied the original DNC model, which has a single memory, to learn KB facts by giving the model all three components of a triple followed by any two of them, and training the model to return the remaining one. For instance, when the inputs to the model are \"the U.S.\", \"capital\", \"Washington, D.C.\", \"the U.S.\", and \"capital\", the output is \"Washington, D.C.\". The model was trained using all triples of the KB and produced a memory unit which stores the whole KB. We used this pre-trained memory unit as the knowledge memory unit in our proposed models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Memory Building",
"sec_num": null
},
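The training examples described above can be derived from each triple as follows (a sketch; the function name and the example format are ours, not from the paper's code):

```python
from itertools import combinations

def make_examples(triple):
    """From (subject, relation, object), build three training examples:
    input = the full triple followed by two of its components,
    target = the remaining component."""
    examples = []
    for pair in combinations(triple, 2):
        missing = [x for x in triple if x not in pair][0]
        examples.append((triple + pair, missing))
    return examples

examples = make_examples(("the U.S.", "capital", "Washington, D.C."))
```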
{
"text": "We trained the DNC model using memory dimensions of 512 \u00d7 128, since our proposed models performed better than with memory dimensions of 256 \u00d7 64, the same as the context memory dimensions in our proposed method. Whereas the context memory stores only the content of each dialogue in the dataset, the knowledge memory stores the whole content of the KB, so it is reasonable that the knowledge memory needs to be larger than the context memory. We evaluated the model with accuracy and used TransE (Bordes et al., 2013) for the KB's word embeddings.",
"cite_spans": [
{
"start": 538,
"end": 559,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Memory Building",
"sec_num": null
},
{
"text": "We evaluated our approach on two dialogue datasets, the (6) dialog bAbI tasks and the Movie Dialog dataset . Both datasets require context compre-1 https://www.w3.org/TR/rdf11-primer/ hension and knowledge background, and provide dialogue data on a specific domain and RDF triple data to answer questions in the dialogue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The hyperparameters of all models are mainly based on the original DNC paper. We trained all models using a one-layer LSTM (Hochreiter and Schmidhuber, 1997) with a hidden layer of size 256, a batch size of 32, a learning rate of 1 \u00d7 10^{\u22124}, context memory dimensions of 256 \u00d7 64, knowledge memory dimensions of 512 \u00d7 128, four read heads, and one write head. We used the RMSProp optimizer (Tieleman and Hinton, 2012) with a momentum of 0.9. The rsDNC models have a dropout probability of 10%, following (Franke et al., 2018). We used TransE (Bordes et al., 2013) for the KB's word embeddings and GloVe embeddings (Pennington et al., 2014) for words that do not appear in the KB but appear in the dialogue, such as \"the\" and \"what\", following (Saha et al., 2018). The dimension of each word embedding vector is 200. We stopped training once the result on the validation set had dropped ten epochs in a row five times during training. We ran every model three times under different random initializations and report the averaged results.",
"cite_spans": [
{
"start": 122,
"end": 155,
"text": "(Hochreiter and Schmidhuber, 1997",
"ref_id": "BIBREF10"
},
{
"start": 387,
"end": 414,
"text": "(Tieleman and Hinton, 2012)",
"ref_id": "BIBREF23"
},
{
"start": 497,
"end": 518,
"text": "(Franke et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 536,
"end": 557,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 604,
"end": 629,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 734,
"end": 753,
"text": "(Saha et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
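The training setup above, gathered into a single configuration dict for reference (a sketch; the key names are ours, the values are taken from this section):

```python
# Hyperparameters listed in this section, gathered in one place.
config = {
    "controller": "LSTM",            # one layer
    "hidden_size": 256,
    "batch_size": 32,
    "learning_rate": 1e-4,
    "context_memory": (256, 64),     # N x W
    "knowledge_memory": (512, 128),  # N x W
    "read_heads": 4,
    "write_heads": 1,
    "optimizer": "RMSProp",
    "momentum": 0.9,
    "rsdnc_dropout": 0.10,
    "embedding_dim": 200,
    "runs": 3,
}
```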
{
"text": "The (6) dialog bAbI tasks are a set of six dialogue tasks within the goal-oriented context of restaurant reservation. Among them, we focus on Task 5, which combines Tasks 1-4 to generate full dialogs. We also removed sentences starting with the special token \"api call\" in our work. The training, validation and test sets each contain 1,000 examples. There is also an Out-Of-Vocabulary (OOV) test set of 1,000 examples that includes entities unseen in the training and validation sets. The KB contains 8,400 facts in the restaurant domain such as \"resto seoul cheap korean 1stars, R cuisine, korean\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog bAbI tasks",
"sec_num": "4.2"
},
{
"text": "The number of entities is 3,635, the number of relations is 7, and the vocabulary size of the dialog is 2,043.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog bAbI tasks",
"sec_num": "4.2"
},
{
"text": "In Table 1, we present the mean per-response accuracy over three different runs for all models. For clarity, we use the following notation: DNC is the original DNC model, rsDNC refers to the robust and scalable DNC model proposed by Franke et al. (2018), DNC-DMS (Csord\u00e1s and Schmidhuber, 2019) has three modifications (i.e. de-allocation mechanisms, masked content-based addressing, and sharpness enhancement), and DNC-MD (a variant of the DNC-DMS model) has only the masking and de-allocation modifications. Models with the \"+KM\" notation are our proposed DNC models with the added knowledge memory unit. DNC+KM outperformed the original DNC on both Full dialogs and OOV tasks. Table 2 shows the results in detail, separating tasks that can be answered without KB facts from tasks that require KB facts to answer questions. The w/o KB facts task has 12,351 sentences and the w/ KB facts task contains 3,912 sentences in the test set. Though DNC and DNC+KM were the same on the w/o KB facts task (99.9%), there was a significant improvement of 14.45% on the w/ KB facts task, where the DNC and DNC+KM models obtained accuracy scores of 29.53% and 43.98%, respectively. rsDNC+KM achieved the best performance overall and also improved the results on the w/ KB facts task by 19.52%, where the rsDNC and rsDNC+KM models obtained accuracy scores of 49.14% and 68.66%, respectively. Focusing on the rsDNC+KM model, we visualized the read/write attention weights from/to the memories at every time-step to investigate what the model wrote to the memories and what it read from them. Figure 3, Figure 4, and Figure 5 show the attention weight visualizations on a successful example where the outputs of the model are all correct, as presented in Table 3. In Figure 3, the horizontal axis represents locations in the context memory and the vertical axis represents the inputs, i.e. the user's utterances, and the outputs of the model at every time-step. While the model takes input sequences, it returns nothing, and while there is no input, it generates responses; in other words, when the input sequence is \"can, you, make, a, restaurant, reservation, in, paris, -, -, -\", the output sequence is \"-, -, -, -, -, -, -, -, i'm, on, it\" ('-' is a padding word). Figure 3 shows the write attention weights to the context memory corresponding to turns 2 to 6 of the example shown in Table 3. There is strong attention on KB words such as \"paris\", and it is interesting that there is also attention on words used to change the conditions of a restaurant, such as \"actually\" and \"instead\". Figure 4 shows the read attention weights from the context memory corresponding to turns 8 to 14 of the example shown in Table 3. The model's memory uses four read heads and we show one of them. The slot to which \"indian\" was strongly written receives attention when the model outputs restaurant information. The correct answer is an Indian restaurant, and the information read for \"indian\" is therefore thought to be useful. Figure 5 presents the read attention weights from the knowledge memory. There is attention before the model answers with restaurant information, and no distinctive features are found compared to the context memory.",
"cite_spans": [
{
"start": 233,
"end": 253,
"text": "Franke et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 677,
"end": 684,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1580,
"end": 1598,
"text": "Figure 3, Figure 4",
"ref_id": null
},
{
"start": 1605,
"end": 1613,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 1744,
"end": 1751,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1757,
"end": 1765,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2253,
"end": 2261,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2372,
"end": 2379,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2580,
"end": 2588,
"text": "Figure 4",
"ref_id": null
},
{
"start": 2696,
"end": 2703,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 2994,
"end": 3002,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "We also examined a poor-performance example where the model made mistakes in all w/ KB facts tasks, as shown in Table 4. Figure 6 shows a part of the write attention weights to the context memory between turns 2 and 8 in Table 4. KB words such as \"moderate\" and \"british\" receive strong attention. Figure 7 presents a part of the read attention weights from the context memory between turns 10 and 14 in Table 4. There is attention on a slot to which \"british\" was written and on another slot to which \"with\" was written when the model outputs restaurant information. The unnecessary information of \"with\" may negatively influence the outputs of the model. Figure 8 shows the read attention weights from the knowledge memory. There is attention before the model answers with restaurant information, showing a similar behavior to the read attention weights from the knowledge memory in the successful example. Considering the visualized results, the contribution of the knowledge memory appears blurry (it might give the model some information about when to answer with KB entities); however, the performance of rsDNC improves by adding the knowledge memory architecture, and it is therefore our future work to investigate the knowledge memory behavior in more detail. (Figure 4: Visualized result of read attention weights from the context memory in the rsDNC+KM's successful example on the dialog bAbI tasks.)",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 122,
"end": 130,
"text": "Figure 6",
"ref_id": null
},
{
"start": 307,
"end": 315,
"text": "Figure 7",
"ref_id": null
},
{
"start": 408,
"end": 415,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 666,
"end": 674,
"text": "Figure 8",
"ref_id": "FIGREF2"
},
{
"start": 1146,
"end": 1154,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Regarding the DNC-DMS models, the scores of DNC-DMS+KM were lower than those of DNC-DMS. We also conducted experiments on the DNC-MD models, since Csord\u00e1s and Schmidhuber (2019) reported that DNC-MD performed better than DNC-DMS on QA tasks, and we found that the performance of the DNC-MD+KM model was higher than that of DNC-MD. We hypothesize this is due to the nature of the knowledge memory, which does not use the temporal memory linkage. The order of the KB triples when building the knowledge memory and the order of the words in the dialogue are unrelated, and therefore the sharpness enhancement does not work well.",
"cite_spans": [
{
"start": 141,
"end": 171,
"text": "Csord\u00e1s and Schmidhuber (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Turn 1: User: \"good morning\" | Output of rsDNC+KM: \"hello what can i help you with today\" | Target: \"hello what can i help you with today\". Turn 2: User: \"i'd like to book a table in a moderate price range\" | Output: \"i'm on it\" | Target: \"i'm on it\". Turn 3: User: \"<silence>\" | Output: \"any preference on a type of cuisine\" | Target: \"any preference on a type of cuisine\". Turn 4: User: \"with british food\" | Output: \"where should it be\" | Target: \"where should it be\". Turn 5: User: \"madrid please\" | Output: \"how many people would be in your party\" | Target: \"how many people would be in your party\". Turn 6: User: \"for four people please\" | Output: \"ok let me look into some options for you\" | Target: \"ok let me look into some options for you\". Turn 7: User: \"instead could it be in a expensive price range\" | Output: \"sure is there anything else to update\" | Target: \"sure is there anything else to update\". Turn 8: User: \"actually i would prefer for six\" | Output: \"sure is there anything else to update\" | Target: \"sure is there anything else to update\". Turn 9: User: \"no\" | Output: \"ok let me look into some options for you\" | Target: \"ok let me look into some options for",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 308,
"text": "Target 1 good morning hello what can i help you with today hello what can i help you with today 2 i'd like to book a table in a moderate price range i'm on it i'm on it 3 <silence> any preference on a type of cuisine any preference on a type of cuisine 4",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "The Movie Dialog dataset is a set of four dialogue tasks on the topic of movies. We used Task 3 (QA+Recommendation Dialog), which combines the question answering and recommendation tasks. The dialogues consist of three turns: the first turn requires a recommendation, e.g. \"I'm looking for a Brian De Palma movie. Response: Blow Out\"; in the second turn, the user asks a factoid question regarding the model's previous response, e.g. \"Who does that star? Response: John Travolta, John Lithgow, Nancy Allen, Dennis Franz\"; and in the third turn, the user asks for another recommendation and gives extra information about their tastes, e.g. \"I prefer Robert De Niro movies. Can you suggest an alternative? Response: Hi Mom!\". The dataset contains 1M examples of dialogues for training and 10k each for development and test. Among them, we used 100k for training, 4,907 for development, and 4,766 for test due to the limitation of computational resources. The Movie Dialog dataset's KB is built from the Open Movie Database (OMDb) 2 and the MovieLens dataset 3. We extracted from the original KB only triples sharing their entities with entities that appeared in our reduced dialogue data, and obtained 126,999 triples. The number of entities is 37,055, the number of relations is 10, and the vocabulary size of the dialogues is 26,314. (Figure 6: Visualized result of write attention weights to the context memory in the rsDNC+KM's poor-performance example on the dialog bAbI tasks.)",
"cite_spans": [],
"ref_spans": [
{
"start": 897,
"end": 905,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Movie Dialog tasks",
"sec_num": "4.3"
},
{
"text": "Results Table 5 shows the mean hits@1 and hits@10 over three different runs of multiple models and our proposed models. (Table 5: Mean hits@k of different models and our proposed models in the Movie Dialog dataset Task 3 (QA+Recommendation Dialog); \"k=1\" and \"k=10\" mean hits@1 and hits@10, respectively. Table 6: Detailed results in the Movie Dialog dataset Task 3 (QA+Recommendation Dialog); \"k=1\" and \"k=10\" mean hits@1 and hits@10, respectively. Figure 7: Visualized result of read attention weights from the context memory in the rsDNC+KM's poor-performance example on the dialog bAbI tasks.) Though our DNC+KM and DNC-MD+KM were worse than their corresponding original models, our rsDNC+KM and DNC-DMS+KM performed better than their original models on both hits@1 and hits@10. In Table 6, we provide detailed results for a more specific analysis. Each task's outputs are a list of KB entities. Response 1 (Recs) is the first turn in the dialogue and requires a recommendation; its test set involves 5,421 entities. Response 2 (QA) denotes the second turn, where the model needs to answer factoid questions considering context from the previous turn; its test set contains 9,867 entities, since the model is often asked to answer with more than one entity. Response 3 (Similar) is the third turn, where the model provides another recommendation given the user's extra information about their tastes; its test set contains 4,939 entities. On the response 3 tasks, the hits@1 score of the variant with the knowledge memory was higher for every DNC model. Among the rsDNC models, both hits@1 and hits@10 of rsDNC+KM improved on all three tasks. Although rsDNC+KM outperformed the other models on the response 2 and response 3 tasks, its results on the response 1 tasks were rather low.
Response 1 tasks require the model to deal with long input sentences such as \"Gentlemen of fortune, Revanche, Eternal sunshine of the spotless mind, Prometheus, Fanny and Alexander, The hurt locker, and 127 hours are films I really like. I'm looking for a Brian De Palma movie.\", and we therefore think that rsDNC models have difficulty processing long sequences. Rae et al. (2016) introduced the sparse DNC (SDNC) with a sparse memory access scheme called Sparse Access Memory (SAM). SAM restricts memory modifications to a sparse subset of locations and uses efficient data structures for content-based addressing, so memory consumption does not grow with the memory size. Ben-Ari and Bekker (2017) proposed a differentiable allocation mechanism to replace the non-differentiable sorting function of DNC and reduced training time. Other approaches to neural networks with memories include the dynamic memory network (DMN) (Kumar et al., 2015), the DMN+ (Xiong et al., 2016), and End-to-end memory networks (Sukhbaatar et al., 2015). These models store sentences in a memory and look up related sentences to answer queries using the attention mechanism. The relation memory network (RMN) (Moon et al., 2018) uses an MLP and takes a multi-hop approach to an external memory to find relevant information. In contrast to the above models, our model explicitly incorporates a memory architecture to store structured knowledge. Key-Value Memory Networks (Miller et al., 2016) are based on End-to-end memory networks (Sukhbaatar et al., 2015) and operate on a memory with a key-value structure. This structure makes the model flexible in encoding knowledge sources and bridges the gap between reading documents and using the KB. Saha et al. (2018) created the Complex Sequential Question Answering (CSQA) dataset, which consists of coherently linked questions that can be answered from a large-scale KB.
They combined the hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2015) and the key-value memory network model to solve their CSQA dataset. Unlike these models, our proposed model does not need KB facts related to queries to be extracted beforehand and can learn which KB facts it should extract in a differentiable way.",
"cite_spans": [
{
"start": 2185,
"end": 2202,
"text": "Rae et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 2497,
"end": 2522,
"text": "Ben-Ari and Bekker (2017)",
"ref_id": "BIBREF1"
},
{
"start": 2746,
"end": 2766,
"text": "(Kumar et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 2780,
"end": 2800,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 2834,
"end": 2859,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 3028,
"end": 3047,
"text": "(Moon et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 3330,
"end": 3355,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 3538,
"end": 3556,
"text": "Saha et al. (2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 8,
"end": 15,
"text": "Table 5",
"ref_id": null
},
{
"start": 213,
"end": 220,
"text": "Table 5",
"ref_id": null
},
{
"start": 397,
"end": 404,
"text": "Table 6",
"ref_id": null
},
{
"start": 542,
"end": 550,
"text": "Figure 7",
"ref_id": null
},
{
"start": 782,
"end": 789,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Movie Dialog tasks",
"sec_num": "4.3"
},
{
"text": "We added a knowledge memory architecture to three DNC models, vanilla DNC, rsDNC, and DNC-DMS, and experimentally analyzed the effect of our addition on dialogue tasks that require background knowledge. Our proposed models, DNC+KM, rsDNC+KM, and DNC-MD+KM, outperformed their original models on the full dialog tasks in the dialog bAbI tasks dataset. In particular, these models obtained improvements of approximately 14%, 20%, and 7%, respectively, on tasks that require KB facts. In the Movie Dialog dataset, our rsDNC+KM and DNC-DMS+KM performed better than their original models. In future work, we will investigate the behavior of the knowledge memory in detail and study how to build and use the knowledge memory more effectively in the whole architecture. We will also conduct experiments with models that differ from DNC models, such as Key-Value Memory Networks, to compare with our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "2 http://beforethecode.com/projects/omdb/download.aspx 3 http://grouplens.org/datasets/movielens/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by Grant-in-Aid for JSPS Fellows Grant Number 20J23182.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Differentiable memory allocation mechanism for neural computing",
"authors": [
{
"first": "Itamar",
"middle": [],
"last": "Ben-Ari",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"Joseph"
],
"last": "Bekker",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itamar Ben-Ari and Alan Joseph Bekker. 2017. Differentiable memory allocation mechanism for neural computing.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning long-term dependencies with gradient descent is difficult",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Frasconi",
"suffix": ""
}
],
"year": 1994,
"venue": "Trans. Neur. Netw",
"volume": "5",
"issue": "2",
"pages": "157--166",
"other_ids": {
"DOI": [
"10.1109/72.279181"
]
},
"num": null,
"urls": [],
"raw_text": "Y. Bengio, P. Simard, and P. Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Trans. Neur. Netw., 5(2):157-166.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning end-to-end goal-oriented dialog",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. CoRR, abs/1605.07683.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control",
"authors": [
{
"first": "R\u00f3bert",
"middle": [],
"last": "Csord\u00e1s",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00f3bert Csord\u00e1s and J\u00fcrgen Schmidhuber. 2019. Improving differentiable neural computers through memory masking, de-allocation, and link distribution sharpness control. CoRR, abs/1904.10278.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Evaluating prerequisite qualities for learning end-to-end dialog systems",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Gane",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust and scalable differentiable neural computer for question answering",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Franke",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Franke, Jan Niehues, and Alex Waibel. 2018. Robust and scalable differentiable neural computer for question answering. CoRR, abs/1807.02658.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR, abs/1410.5401.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hybrid computing using a neural network with dynamic external memory",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Malcolm",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Harley",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Grabska-Barwi\u0144ska",
"suffix": ""
},
{
"first": "Sergio",
"middle": [
"G\u00f3mez"
],
"last": "Colmenarejo",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Ramalho",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Agapiou",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [
"Puigdom\u00e8nech"
],
"last": "Badia",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Yori",
"middle": [],
"last": "Zwols",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Ostrovski",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "538",
"issue": "",
"pages": "471--476",
"other_ids": {
"DOI": [
"10.1038/nature20101"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwi\u0144ska, Sergio G\u00f3mez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri\u00e0 Puigdom\u00e8nech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "English",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Key-value memory networks for directly reading documents",
"authors": [
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Amir-Hossein",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Finding remo (related memory object): A simple neural architecture for text based reasoning",
"authors": [
{
"first": "Jihyung",
"middle": [],
"last": "Moon",
"suffix": ""
},
{
"first": "Hyochang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sungzoon",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihyung Moon, Hyochang Yang, and Sungzoon Cho. 2018. Finding remo (related memory object): A simple neural architecture for text based reasoning. CoRR, abs/1801.08459.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "First draft of a report on the EDVAC",
"authors": [
{
"first": "John",
"middle": [],
"last": "von Neumann",
"suffix": ""
}
],
"year": 1945,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John von Neumann. 1945. First draft of a report on the EDVAC. Technical report.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "14",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532- 1543.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Scaling memoryaugmented neural networks with sparse reads and writes",
"authors": [
{
"first": "Jack",
"middle": [
"W"
],
"last": "Rae",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"J"
],
"last": "Hunt",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Harley",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"W"
],
"last": "Senior",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"P"
],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew W. Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. 2016. Scaling memory-augmented neural networks with sparse reads and writes. CoRR, abs/1610.09027.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph",
"authors": [
{
"first": "Amrita",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Vardaan",
"middle": [],
"last": "Pahuja",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical neural network generative models for movie dialogues",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. CoRR, abs/1507.04808.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the computational power of neural nets",
"authors": [
{
"first": "H",
"middle": [
"T"
],
"last": "Siegelmann",
"suffix": ""
},
{
"first": "E",
"middle": [
"D"
],
"last": "Sontag",
"suffix": ""
}
],
"year": 1995,
"venue": "J. Comput. Syst. Sci",
"volume": "50",
"issue": "1",
"pages": "132--150",
"other_ids": {
"DOI": [
"10.1006/jcss.1995.1013"
]
},
"num": null,
"urls": [],
"raw_text": "H.T. Siegelmann and E.D. Sontag. 1995. On the computational power of neural nets. J. Comput. Syst. Sci., 50(1):132-150.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440-2448. Curran Associates, Inc.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude",
"authors": [
{
"first": "T",
"middle": [],
"last": "Tieleman",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Tieleman and G Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dynamic memory networks for visual and textual question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Overview of DNC. A read vector is obtained by applying a read weighting w^r over the memory M:"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Visualized result of read attention weights from the knowledge memory in the rsDNC+KM's successful example on the dialog bAbI tasks"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Visualized result of read attention weights from the knowledge memory in the rsDNC+KM's poor-performance example on the dialog bAbI tasks"
},
"TABREF1": {
"content": "<table><tr><td>Task</td><td colspan=\"2\">DNC DNC+KM</td><td colspan=\"6\">rsDNC rsDNC+KM DNC-DMS DNC-DMS DNC-MD DNC-MD</td></tr><tr><td/><td/><td/><td/><td/><td/><td>+KM</td><td/><td>+KM</td></tr><tr><td>w/o KB facts</td><td>99.99%</td><td colspan=\"2\">99.99% 100.00%</td><td>100.00%</td><td>100.00%</td><td>100.00%</td><td>99.99%</td><td>100.00%</td></tr><tr><td>w/ KB facts</td><td>29.53%</td><td>43.98%</td><td>49.14%</td><td>68.66%</td><td>35.49%</td><td>32.78%</td><td>27.28%</td><td>34.73%</td></tr><tr><td colspan=\"2\">w/o KB facts (OOV) 95.98%</td><td>98.01%</td><td>99.99%</td><td>99.72%</td><td>96.65%</td><td>94.37%</td><td>95.03%</td><td>95.27%</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Mean per-response accuracy of different models and our proposed models in the dialog bAbI tasks (Task 5)."
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Detailed results in the dialog bAbI tasks (Task 5). w/o KB facts denotes tasks that can be answered without KB facts and w/ KB facts denotes tasks that need KB facts to answer questions. The results of w/ KB facts (OOV) are omitted since they are all 0.00%."
},
"TABREF4": {
"content": "<table><tr><td>Figure 3: Visualized result of write attention weights</td></tr><tr><td>to the context memory in a rsDNC+KM's successful</td></tr><tr><td>example on the dialog bAbI tasks. The horizontal axis</td></tr><tr><td>represents locations in the context memory and the ver-</td></tr><tr><td>tical axis represents inputs of data and outputs of the</td></tr><tr><td>model at every time-step.</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Outputs of rsDNC+KM on a successful example on the dialog bAbI tasks"
},
"TABREF6": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Outputs of rsDNC+KM on a poor-performance example on the dialog bAbI tasks"
}
}
}
}