| { |
| "paper_id": "P19-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:30:47.025315Z" |
| }, |
| "title": "Incremental Transformer with Deliberation Decoder for Document Grounded Conversations", |
| "authors": [ |
| { |
| "first": "Zekang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Dian Group", |
| "institution": "Huazhong University of Science and Technology \u2021 Pattern Recognition Center", |
| "location": { |
| "addrLine": "WeChat AI" |
| } |
| }, |
| "email": "zekangli97@gmail.com" |
| }, |
| { |
| "first": "Cheng", |
| "middle": [], |
| "last": "Niu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "chengniu@tencent.com" |
| }, |
| { |
| "first": "Fandong", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "fandongmeng@tencent.com" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Northeastern University", |
| "location": { |
| "country": "China" |
| } |
| }, |
| "email": "fengyang@ict.ac.cn" |
| }, |
| { |
| "first": "Qian", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "qianli@stumail.neu.edu.cn" |
| }, |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jiezhou@tencent.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformer-based architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset proves that responses generated by our model significantly outperform competitive baselines on both context coherence and knowledge relevance.",
| "pdf_parse": { |
| "paper_id": "P19-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we propose a novel Transformer-based architecture for multi-turn document grounded conversations. In particular, we devise an Incremental Transformer to encode multi-turn utterances along with knowledge in related documents. Motivated by the human cognitive process, we design a two-pass decoder (Deliberation Decoder) to improve context coherence and knowledge correctness. Our empirical study on a real-world Document Grounded Dataset proves that responses generated by our model significantly outperform competitive baselines on both context coherence and knowledge relevance.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "The past few years have witnessed the rapid development of dialogue systems. Based on the sequence-to-sequence framework (Sutskever et al., 2014) , most models are trained in an end-to-end manner with large corpora of human-to-human dialogues and have obtained impressive success (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016; Serban et al., 2016) . However, there is still a long way to go before reaching the ultimate goal of dialogue systems, which is to talk like humans. One of the essential capabilities needed to achieve this goal is the ability to make use of knowledge.",
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 140, |
| "text": "(Sutskever et al., 2014)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 275, |
| "end": 295, |
| "text": "(Shang et al., 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 296, |
| "end": 317, |
| "text": "Vinyals and Le, 2015;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 318, |
| "end": 334, |
| "text": "Li et al., 2016;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 335, |
| "end": 355, |
| "text": "Serban et al., 2016)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "There have been several works on dialogue systems that exploit knowledge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Mem2Seq (Madotto et al., 2018) incorporates structured knowledge into end-to-end task-oriented dialogue. Another line of work introduces fact-matching and knowledge-diffusion to generate meaningful, diverse and natural responses using structured knowledge triplets. Ghazvininejad et al. (2018) , Parthasarathi and Pineau (2018) , Yavuz et al. (2018) , Dinan et al. (2018) and Lo and Chen (2019) apply unstructured text facts in open-domain dialogue systems. These works mainly focus on integrating factoid knowledge into dialogue systems; however, factoid knowledge requires substantial effort to build up and is limited to expressing precise facts. Documents as a knowledge source provide a wide spectrum of knowledge, including but not limited to factoid knowledge, event updates, and subjective opinions. Recently, intensive research has been conducted on using documents as knowledge sources for Question Answering (Chen et al., 2017; Yu et al., 2018; Rajpurkar et al., 2018; Reddy et al., 2018) .",
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 34, |
| "text": "(Madotto et al., 2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 252, |
| "end": 279, |
| "text": "Ghazvininejad et al. (2018)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 282, |
| "end": 313, |
| "text": "Parthasarathi and Pineau (2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 316, |
| "end": 335, |
| "text": "Yavuz et al. (2018)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 338, |
| "end": 357, |
| "text": "Dinan et al. (2018)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 362, |
| "end": 380, |
| "text": "Lo and Chen (2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 891, |
| "end": 910, |
| "text": "(Chen et al., 2017;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 911, |
| "end": 927, |
| "text": "Yu et al., 2018;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 928, |
| "end": 951, |
| "text": "Rajpurkar et al., 2018;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 952, |
| "end": 971, |
| "text": "Reddy et al., 2018)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The Document Grounded Conversation is a task to generate natural dialogue responses when chatting about the content of a specific document. This task requires integrating document knowledge with the multi-turn dialogue history. Different from previous knowledge-grounded dialogue systems, Document Grounded Conversations utilize documents as the knowledge source, and hence are able to employ a wide spectrum of knowledge. Document Grounded Conversations also differ from document QA, since responses must be generated consistently with the conversational context. To address the Document Grounded Conversation task, it is important to: 1) exploit document knowledge that is relevant to the conversation; and 2) develop a unified representation combining multi-turn utterances with the relevant document knowledge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we propose a novel and effective Transformer-based (Vaswani et al., 2017) architecture for Document Grounded Conversations, named Incremental Transformer with Deliberation Decoder. The encoder employs a Transformer architecture to incrementally encode multi-turn history utterances, and incorporates document knowledge into the multi-turn context encoding process. The decoder is a two-pass decoder similar to the Deliberation Network in Neural Machine Translation (Xia et al., 2017) , which is designed to improve the context coherence and knowledge correctness of the responses. The first-pass decoder focuses on contextual coherence, while the second-pass decoder refines the result of the first-pass decoder by consulting the relevant document knowledge, and hence increases knowledge relevance and correctness. This is motivated by the human cognitive process: in real-world conversations, people usually first draft a response to the previous utterance, and then polish the answer, or even raise questions, by consulting background knowledge.",
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 88, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 483, |
| "end": 501, |
| "text": "(Xia et al., 2017)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We test the effectiveness of our proposed model on the Document Grounded Conversations Dataset (Zhou et al., 2018) . Experimental results show that our model is capable of generating responses with better context coherence and knowledge relevance. In some cases, the document knowledge is even used to guide the following conversation. Both automatic and manual evaluations show that our model substantially outperforms the competitive baselines.",
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 110, |
| "text": "(Zhou et al., 2018)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our contributions are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "\u2022 We build a novel Incremental Transformer that incrementally encodes multi-turn utterances together with document knowledge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We are the first to apply a two-pass decoder to generate responses for document grounded conversations. Two decoders focus on context coherence and knowledge correctness respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2 Approach", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our goal is to incorporate the relevant document knowledge into multi-turn conversations. Formally, let U = u^{(1)}, ..., u^{(k)}, ..., u^{(K)} be a whole conversation composed of K utterances. We use",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "u^{(k)} = u^{(k)}_1, ..., u^{(k)}_i, ..., u^{(k)}_I",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "to denote the k-th utterance containing I words, where u^{(k)}_i denotes the i-th word in the k-th utterance. For each utterance u^{(k)} there is, likewise, a specified relevant document s^{(k)}",
"cite_spans": [],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "= s^{(k)}_1, ..., s^{(k)}_j, ..., s^{(k)}_J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Statement",
"sec_num": "2.1"
},
{
"text": "which represents the document related to the k-th utterance, containing J words. We define the document grounded conversation task as generating a response u^{(k+1)} given its related document s^{(k+1)} and the previous k utterances U_{\u2264k} with their related documents S_{\u2264k}, where U_{\u2264k} = u^{(1)}, ..., u^{(k)} and S_{\u2264k} = s^{(1)}, ..., s^{(k)}. Note that s^{(k)}, s^{(k+1)}, ..., s^{(k+n)} may be the same.",
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 294, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 321, |
| "end": 324, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 339, |
| "end": 342, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Therefore, the probability of generating the response u^{(k+1)} is computed as in Eq. (1). Figure 1 shows the framework of the proposed Incremental Transformer with Deliberation Decoder. Please refer to Figure 2 for more details. It consists of three components: 1) Self-Attentive Encoder (SA) (in orange) is a Transformer encoder as described in (Vaswani et al., 2017) , which encodes the document knowledge and the current utterance independently.",
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 364, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 76, |
| "end": 84, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 194, |
| "end": 202, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "P(u^{(k+1)} | U_{\u2264k}, S_{\u2264k+1}; \u03b8) = \u220f_{i=1}^{I} P(u^{(k+1)}_i | U_{\u2264k}, S_{\u2264k+1}, u^{(k+1)}_{<i}; \u03b8) (1) where u^{(k+1)}_{<i} = u^{(k+1)}_1, ..., u^{(k+1)}_{i-1}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Statement", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "2) Incremental Transformer Encoder (ITE) (on the top) is a unified Transformer encoder which encodes multi-turn utterances together with the knowledge representation using an incremental encoding scheme. This module takes the SA representations of the previous utterances u^{(i)} and their documents s^{(i)} as input, and uses an attention mechanism to incrementally build up the representation of the relevant context and document knowledge.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "3) Deliberation Decoder (DD) (on the bottom) is a two-pass unified Transformer decoder for better generating the next response. The first-pass decoder takes the current utterance u^{(k)}'s SA representation and the ITE output as input, and mainly relies on the conversation context for response generation. The second-pass decoder takes the SA representation of the first-pass result and the relevant document s^{(k+1)}'s SA representation as input, and uses the document knowledge to further refine the response.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "As document knowledge often includes several sentences, it's important to capture long-range dependencies and identify relevant information. We use multi-head self-attention (Vaswani et al., 2017) to compute the representation of document knowledge.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 196, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "As shown in Figure 2 (a), we use a self-attentive encoder to compute the representation of the related document knowledge s^{(k)}. The input In^{(k)}_s of the encoder is the sequence of document word embeddings with positional encoding added (Vaswani et al., 2017) :",
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 126, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 143, |
| "end": 146, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 240, |
| "end": 262, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "In^{(k)}_s = [s^{(k)}_1, ..., s^{(k)}_J] (2) s^{(k)}_j = e^s_j + PE(j)",
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where e^s_j is the word embedding of s^{(k)}_j and PE(\u2022) denotes the positional encoding function.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "The Self-Attentive Encoder contains a stack of N_s identical layers. Each layer has two sub-layers. The first sub-layer is multi-head self-attention (MultiHead) (Vaswani et al., 2017) . MultiHead(Q, K, V) is a multi-head attention function that takes a query matrix Q, a key matrix K, and a value matrix V as input. In the current case, Q = K = V, which is why it is called self-attention. The second sub-layer is a simple, position-wise fully connected feed-forward network (FFN), which consists of two linear transformations with a ReLU activation in between (Vaswani et al., 2017) .",
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 183, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 562, |
| "end": 584, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "A^{(1)} = MultiHead(In^{(k)}_s, In^{(k)}_s, In^{(k)}_s) (4) D^{(1)} = FFN(A^{(1)})",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "FFN(x) = max(0, xW_1 + b_1)W_2 + b_2",
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where A^{(1)} is the hidden state computed by multi-head attention at the first layer, and D^{(1)} denotes the representation of s^{(k)} after the first layer. Note that residual connections and layer normalization are used in each sub-layer; these are omitted from the presentation for simplicity. Please refer to (Vaswani et al., 2017) for more details. This process is repeated at each layer:",
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 125, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 301, |
| "end": 323, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "A^{(n)} = MultiHead(D^{(n-1)}, D^{(n-1)}, D^{(n-1)}) (7) D^{(n)} = FFN(A^{(n)})",
| "eq_num": "(8)" |
| } |
| ], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where n = 1, ..., N_s and D^{(0)} = In^{(k)}_s. We use SA_s(\u2022) to denote this whole process:",
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 40, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "d^{(k)} = D^{(N_s)} = SA_s(s^{(k)})",
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where d^{(k)} is the final representation of the document knowledge s^{(k)}. Similarly, for each utterance u^{(k)}, we use",
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 72, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "In^{(k)}_u = [u^{(k)}_1, ..., u^{(k)}_I]",
"eq_num": "(k)"
}
],
"section": "Self-Attentive Encoder",
"sec_num": null
},
{
"text": "to represent the sequence of position-aware word embeddings. Then the same Self-Attentive Encoder is used to compute the representation of the current utterance u^{(k)}, and we use SA_u(u^{(k)}) to denote this encoding result. The Self-Attentive Encoder is also used to encode the document s^{(k+1)} and the first-pass decoding results in the second pass of the decoder. Note that SA_s and SA_u have the same architecture but different parameters. More details will be given in the following sections.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-Attentive Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "To encode multi-turn document grounded utterances effectively, we design an Incremental Transformer Encoder. Incremental Transformer uses multi-head attention to incorporate document knowledge and context into the current utterance's encoding process. This process can be stated recursively as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "c^{(k)} = ITE(c^{(k-1)}, d^{(k)}, In^{(k)}_u)",
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where ITE(\u2022) denotes the encoding function, c^{(k)} denotes the context state after encoding utterance u^{(k)}, c^{(k-1)} is the context state after encoding the previous utterance u^{(k-1)}, d^{(k)} is the representation of document s^{(k)}, and In^{(k)}_u is the embedding of the current utterance u^{(k)}.",
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 49, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 103, |
| "end": 106, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 180, |
| "end": 183, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 220, |
| "end": 223, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 231, |
| "end": 234, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 277, |
| "end": 280, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "As shown in Figure 2 (b), we use a stack of N_u identical layers to encode u^{(k)}. Each layer consists of four sub-layers. The first sub-layer is a multi-head self-attention:",
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 80, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "B^{(n)} = MultiHead(C^{(n-1)}, C^{(n-1)}, C^{(n-1)})",
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where n = 1, ..., N_u, C^{(n-1)} is the output of the previous layer, and C^{(0)} = In^{(k)}_u. The second sub-layer is a multi-head knowledge attention:",
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 82, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "E^{(n)} = MultiHead(B^{(n)}, d^{(k)}, d^{(k)})",
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "The third sub-layer is a multi-head context attention:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "F^{(n)} = MultiHead(E^{(n)}, c^{(k-1)}, c^{(k-1)}) (13)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "where c^{(k-1)} is the representation of the previous utterances; this is why we call the encoder the \"Incremental Transformer\". The fourth sub-layer is a position-wise fully connected feed-forward network:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "C^{(n)} = FFN(F^{(n)})",
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "We use c^{(k)} to denote the final representation at the N_u-th layer:",
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 12, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "c^{(k)} = C^{(N_u)}",
| "eq_num": "(15)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "Deliberation Decoder", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "Motivated by the real-world human cognitive process, we design a Deliberation Decoder containing two decoding passes to improve the knowledge relevance and context coherence. The first-pass decoder takes the representation of the current utterance SA_u(u^{(k)}) and the context c^{(k)} as input and focuses on generating contextually coherent responses. The second-pass decoder takes the representation of the first-pass decoding results and the related document s^{(k+1)} as input and focuses on increasing knowledge usage and guiding the following conversations within the scope of the given document.",
| "cite_spans": [ |
| { |
| "start": 252, |
| "end": 255, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "When generating the i-th response word u^{(k+1)}_i, we have the already generated words u^{(k+1)}_{<i} as input (Vaswani et al., 2017) . We use In^{(k+1)}_r to denote the matrix representation of u^{(k+1)}_{<i} as follows:",
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 121, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 134, |
| "end": 139, |
| "text": "(k+1)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "In^{(k+1)}_r = [u^{(k+1)}_0, u^{(k+1)}_1, ..., u^{(k+1)}_{i-1}]",
| "eq_num": "(16)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "u (k+1) 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "is the vector representation of sentence-start token.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
"text": "As shown in Figure 2 (c) , the Deliberation Decoder consists of a first-pass decoder and a second-pass decoder. These two decoders have the same architecture but take different inputs for their sub-layers. Both decoders are composed of a stack of N_y identical layers, and each layer has four sub-layers. For the first-pass decoder, the first sub-layer is a multi-head self-attention:",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 24, |
| "text": "Figure 2 (c)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "G (n) 1 = MultiHead(R (n\u22121) 1 , R (n\u22121) 1 , R (n\u22121) 1 ) (17) where n = 1, ..., N y , R (n\u22121) 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "is the output of the previous layer, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "R (0) 1 = In (k+1) r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "The second sub-layer is a multi-head context attention:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "H^{(n)}_1 = MultiHead(G^{(n)}_1, c^{(k)}, c^{(k)})",
| "eq_num": "(18)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "where c (k) is the representation of context u \u2264k . The third sub-layer is a multi-head utterance attention:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "M (n) 1 = MultiHead(H (n) 1 , SA u (u (k) ), SA u (u (k) ))", |
| "eq_num": "(19)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "where SA u (\u2022) is a Self-Attentive Encoder which encodes latest utterance u (k) . Eq. (18) mainly encodes the context and document knowledge relevant to the latest utterance, while Eq. (19) encodes the latest utterance directly. We hope optimal performance can be achieved by combining both.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 79, |
| "text": "(k)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "The fourth sub-layer is a position-wise fully connected feed-forward network:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "R (n) 1 = FFN(M (n) 1 )", |
| "eq_num": "(20)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "After N y layers, we use softmax to get the words probabilities decoded by first-pass decoder:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (\u00fb (k+1) (1) ) = softmax(R (Ny) 1 ) (21) where\u00fb (k+1)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "is the response decoded by the firstpass decoder. For second-pass decoder:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "G (n) 2 = MultiHead(R (n\u22121) 2 , R (n\u22121) 2 , R (n\u22121) 2 ) (22) H (n) 2 = MultiHead(G (n) 2 , d (k+1) , d (k+1) ) (23) M (n) 2 = MultiHead(H (n) 2 , SA u (\u00fb (k+1) (1) ), SA u (\u00fb (k+1) (1) )) (24) R (n) 2 = FFN(M (n) 2 ) (25) P (\u00fb (k+1) (2) ) = softmax(R (Ny) 2 )", |
| "eq_num": "(26)" |
| } |
| ], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
| { |
| "text": "where R_2^{(n-1)} is the counterpart to R_1^{(n-1)} in the second-pass decoder, referring to the output of the previous layer; d^{(k+1)} is the representation of document s^{(k+1)} obtained with the Self-Attentive Encoder; and \\hat{u}^{(k+1)}_{(2)} is the response produced by the second-pass decoder.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incremental Transformer Encoder", |
| "sec_num": null |
| }, |
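The four sub-layers of each decoding pass can be sketched structurally. The following is a minimal NumPy illustration under simplifying assumptions: single-head scaled dot-product attention stands in for MultiHead, a toy function replaces the position-wise feed-forward network, and the residual connections and layer normalization of a standard Transformer layer are omitted; all names are ours, not from the paper's code.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention (single head, no mask)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def deliberation_decoder_layer(r_prev, memory, utterance, ffn):
    """One layer of either decoding pass (Eqs. 17-20 / 22-25):
    memory is c^(k) for the first pass and d^(k+1) for the second;
    utterance is the self-attentively encoded u^(k) or first-pass draft."""
    g = attention(r_prev, r_prev, r_prev)   # self-attention (Eq. 17 / 22)
    h = attention(g, memory, memory)        # context/knowledge attention (Eq. 18 / 23)
    m = attention(h, utterance, utterance)  # utterance attention (Eq. 19 / 24)
    return ffn(m)                           # position-wise FFN (Eq. 20 / 25)

rng = np.random.default_rng(0)
d = 8
ffn = lambda x: np.maximum(x @ rng.normal(size=(d, d)), 0.0)  # toy stand-in FFN
out = deliberation_decoder_layer(
    rng.normal(size=(7, d)),   # R^(n-1): 7 target positions
    rng.normal(size=(11, d)),  # encoded context c^(k) (or document d^(k+1))
    rng.normal(size=(5, d)),   # self-attentively encoded utterance
    ffn,
)
print(out.shape)  # (7, 8)
```

Stacking N_y such layers and applying softmax over a vocabulary projection of R^(N_y) then yields the word distributions of Eqs. (21) and (26).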
| { |
| "text": "In contrast to the original Deliberation Network (Xia et al., 2017) , where they propose a complex joint learning framework using Monte Carlo Method, we minimize the following loss as Xiong et al. (2018) do:", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 67, |
| "text": "(Xia et al., 2017)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 184, |
| "end": 203, |
| "text": "Xiong et al. (2018)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L mle = L mle1 + L mle2", |
| "eq_num": "(27)" |
| } |
| ], |
| "section": "Training", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L mle1 = \u2212 K k=1 I i=1 log P (\u00fb (k+1) (1)i )", |
| "eq_num": "(28)" |
| } |
| ], |
| "section": "Training", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "L mle2 = \u2212 K k=1 I i=1 log P (\u00fb (k+1) (2)i )", |
| "eq_num": "(29)" |
| } |
| ], |
| "section": "Training", |
| "sec_num": null |
| }, |
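In other words, both passes are trained with token-level negative log-likelihood against the same gold response. A minimal pure-Python sketch (the nested-list probability format is our own illustration, not the paper's implementation):

```python
import math

def two_pass_nll(probs_pass1, probs_pass2):
    """L_mle = L_mle1 + L_mle2 (Eqs. 27-29): sum the negative log-likelihood
    of each gold token over all K turns and I tokens, for both passes.
    probs_pass*[k][i] is the probability the decoder assigns to gold token i
    of response k+1."""
    nll = lambda probs: -sum(math.log(p) for turn in probs for p in turn)
    return nll(probs_pass1) + nll(probs_pass2)

# toy example: one turn (K=1) with two tokens (I=2) per pass
loss = two_pass_nll([[0.5, 0.25]], [[0.8, 0.4]])
print(round(loss, 3))  # 3.219
```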
| { |
| "text": "3 Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": null |
| }, |
| { |
| "text": "We evaluate our model using the Document Grounded Conversations Dataset (Zhou et al., 2018) . There are 72922 utterances for training, 3626 utterances for validation and 11577 utterances for testing. The utterances can be either casual chats or document grounded. Note that we consider consequent utterances of the same person as one utterance. For example, we consider A: Hello! B: Hi! B: How's it going? as A: Hello! B: Hi! How's it going?. And there is a related document given for every several consequent utterances, which may contain movie name, casts, introduction, ratings, and some scenes. The average length of documents is about 200. Please refer to (Zhou et al., 2018) for more details.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 91, |
| "text": "(Zhou et al., 2018)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 661, |
| "end": 680, |
| "text": "(Zhou et al., 2018)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3.1" |
| }, |
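The preprocessing step of collapsing consecutive turns from the same speaker can be sketched as follows (the `(speaker, text)` pair representation is our assumption, not the dataset's actual format):

```python
def merge_consecutive(turns):
    """Merge consecutive turns by the same speaker into one utterance,
    e.g. 'B: Hi!' followed by 'B: How's it going?' becomes one B turn."""
    merged = []
    for speaker, text in turns:
        if merged and merged[-1][0] == speaker:
            # same speaker as the previous turn: append to it
            merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    return merged

print(merge_consecutive([("A", "Hello!"), ("B", "Hi!"), ("B", "How's it going?")]))
# [('A', 'Hello!'), ('B', "Hi! How's it going?")]
```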
| { |
| "text": "We compare our proposed model with the following state-of-the-art baselines: Models not using document knowledge:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Seq2Seq: A simple encoder-decoder model (Shang et al., 2015; Vinyals and Le, 2015) with global attention (Luong et al., 2015) . We concatenate utterances context to a long sentence as input.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 60, |
| "text": "(Shang et al., 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 61, |
| "end": 82, |
| "text": "Vinyals and Le, 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 105, |
| "end": 125, |
| "text": "(Luong et al., 2015)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "HRED: A hierarchical encoder-decoder model (Serban et al., 2016), which is composed of a word-level LSTM for each sentence and a sentence-level LSTM connecting utterances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Transformer: The state-of-the-art NMT model based on multi-head attention (Vaswani et al., 2017) . We concatenate utterances context to a long sentence as its input. Models using document knowledge: Seq2Seq (+knowledge) and HRED (+knowledge) are based on Seq2Seq and HRED respectively. They both concatenate document knowledge representation and last decoding output embedding as input when decoding. Please refer to (Zhou et al., 2018) for more details.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 96, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 417, |
| "end": 436, |
| "text": "(Zhou et al., 2018)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Wizard Transformer: A Transformer-based model for multi-turn open-domain dialogue with unstructured text facts (Dinan et al., 2018 sequence as input. We replace the text facts with document knowledge. Here, we also conduct an ablation study to illustrate the validity of our proposed Incremental Transformer Encoder and Deliberation Decoder.", |
| "cite_spans": [ |
| { |
| "start": 111, |
| "end": 130, |
| "text": "(Dinan et al., 2018", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "ITE+CKAD: It uses Incremental Transformer Encoder (ITE) as encoder and Context-Knowledge-Attention Decoder (CKAD) as shown in Figure 2 (e). This setup is to test the validity of the deliberation decoder.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 126, |
| "end": 134, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Knowledge-Attention Transformer (KAT): As shown in Figure 2 (d) , the encoder of this model is a simplified version of Incremental Transformer Encoder (ITE), which doesn't have context-attention sub-layer. We concatenate utterances context to a long sentence as its input. The decoder of the model is a simplified Context-Knowledge-Attention Decoder (CKAD). It doesn't have context-attention sub-layer either. This setup is to test how effective the context has been exploited in the full model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 51, |
| "end": 63, |
| "text": "Figure 2 (d)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use OpenNMT-py 1 (Klein et al., 2017) as the code framework 2 . For all models, the hidden size is set to 512. For rnn-based models (Seq2Seq, HRED), 3-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) and 1-layer LSTM is applied for encoder and decoder respectively. For transformer-based models, the layers of both encoder and decoder are set to 3. The number of attention heads in multi-head attention is 8 and the filter size is 2048. The word embedding is shared by utterances, knowledge and generated responses. The dimension of word embedding is set to 512 empirically. We use Adam (Kingma and Ba, 2014) for optimization. When decoding, beam size is set to 5. We use the previous three utterances and its related documents as input.", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 40, |
| "text": "(Klein et al., 2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 179, |
| "end": 213, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "3.3" |
| }, |
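For quick reference, the hyperparameters above can be gathered in one place; the key names below are descriptive and are not the actual OpenNMT-py command-line flags:

```python
# Hyperparameters of Section 3.3, collected as an illustrative config dict.
ITDD_CONFIG = {
    "hidden_size": 512,
    "word_vec_size": 512,   # embeddings shared by utterances, knowledge, responses
    "rnn": {"enc_layers": 3, "bidirectional": True, "dec_layers": 1},
    "transformer": {"enc_layers": 3, "dec_layers": 3, "heads": 8, "ff_size": 2048},
    "optimizer": "adam",
    "beam_size": 5,
    "context_turns": 3,     # previous three utterances plus related documents
}
assert ITDD_CONFIG["transformer"]["heads"] == 8
```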
| { |
| "text": "Automatic Evaluation: We adopt perplexity (PPL) and BLEU (Papineni et al., 2002) to automatically evaluate the response generation performance. Models are evaluated using perplexity of the gold response as described in (Dinan et al., 2018) . Lower perplexity indicates better performance. BLEU measures n-gram overlap between a generated response and a gold response. However, since there is only one reference for each response and there may exist multiple feasible responses, BLEU scores are extremely low. We compute BLEU score by the multi-bleu.perl 3 Manual Evaluation: Manual evaluations are essential for dialogue generation. We randomly sampled 30 conversations containing 606 utterances from the test set and obtained 5454 utterances from the nine models. We have annotators score these utterances given its previous utterances and related documents. We defined three metrics fluency, knowledge relevance and context coherence for manual evaluation. All these metrics are scored 0/1/2. fluency: Whether the response is natural and Document moviename despicable me ... cast: steve carell as gru ... rotten tomatoes: 81% ... it is the debut film of illumination entertainment. ... gru, a supervillain, is disheartened when an unknown supervillain steals the great pyramid of giza. gru, with the assistance of his colleague dr. Nefario and his minions, resolves to one-up this mystery rival by shrinking and stealing the moon. as it would be costly to steal the moon, gru seeks a loan from the bank of evil. ... upon learning that an up-and-coming villain named vector was responsible for the pyramid theft ... Utterance 1 what do you think about despicable me? ITE+DD i think it is an awesome movie, what about you? Wizard i think it's a great movie! Utterance 2 i rather liked it. it wasn't what i was expecting but it was still good.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 80, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 219, |
| "end": 239, |
| "text": "(Dinan et al., 2018)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "3.4" |
| }, |
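As a reminder of how the reported PPL numbers relate to the training loss, perplexity is the exponential of the average per-token negative log-likelihood of the gold response; the numbers below are purely illustrative:

```python
import math

def perplexity(total_nll, num_tokens):
    """PPL = exp(sum of token NLLs / number of tokens); lower is better."""
    return math.exp(total_nll / num_tokens)

# an average of ~2.71 nats per token corresponds to a PPL near 15
print(round(perplexity(271.0, 100), 2))  # 15.03
```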
| { |
| "text": "ITE+DD yes, 81% is a great score! Wizard i also liked the villain of the movie. fluent. Score 0 represents not fluent and incomprehensible; 1 represents partially fluent but still comprehensible; 2 represents totally fluent. knowledge relevance: Whether the response uses relevant and correct knowledge. Score 0 represents no relevant knowledge; 1 represents containing relevant knowledge but not correct; 2 represents containing relevant knowledge and correct.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "context coherence: Whether the response is coherent with the context and guides the following utterances. Score 0 represents not coherent or leading the dialogue to an end; 1 represents coherent with the utterance history but not guiding the following utterances; 2 represents coherent with utterance history and guiding the next utterance. Table 1 shows the automatic and manual evaluation results for both the baseline and our models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 341, |
| "end": 348, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In manual evaluation, among baselines, Wizard Transformer and RNN without knowledge have the highest fluency of 1.62 and Wizard obtains the highest knowledge relevance of 0.47 while Transformer without knowledge gets the highest context coherence of 0.67. For all models, ITE+CKAD obtains the highest fluency of 1.68 and ITE+DD has the highest Knowledge Relevance of 0.56 and highest Context Coherence of 0.90.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "In automatic evaluation, our proposed model has lower perplexity and higher BLEU scores than baselines. For BLEU, HRED with knowledge obtains the highest BLEU score of 0.77 among the baselines. And ITE+DD gets 0.95 BLEU score, which is the highest among all the models. For perplexity, Wizard Transformer obtains the lowest perplexity of 70.30 among baseline models and ITE+DD has remarkably lower perplexity of 15.11 than all the other models. A detailed analysis is in Section 3.6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "To our surprise, ITE+DD reaches an extremely low ground truth perplexity. We find that the ground truth perplexity after the first-pass decoding is only similar to the ITE+CKAD. It shows that the second-pass decoder utilizes the document knowledge well, and dramatically reduced the ground truth perplexity. Context Coherence than ITE+CKAD. This result also demonstrates that Deliberation Decoder can improve the knowledge correctness and guide the following conversations better. Although the perplexity of ITE+CKAD is only slightly better than KAT, the BLEU score, Fluency, Knowledge Relevance and Context Coherence of ITE+CKAD all significantly outperform those of KAT model, which indicates that Incremental Transformer can deal with multi-turn document grounded conversations better.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "3.6" |
| }, |
| { |
| "text": "Wizard Transformer has a great performance on Knowledge Relevance only second to our proposed Incremental Transformer. However, its score on Context Coherence is lower than some other baselines. As shown in Table 2 , Wizard Transformer has Knowledge Relevance score 1 results twice more than score 2 results, which indicates that the model tends to generate responses with related knowledge but not correct. And the poor performance on Context Coherence also shows Wizard Transformer does not respond to the previous utterance well. This shows the limitation of representing context and document knowledge by simple concatenation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 214, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "As shown in", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we list some examples to show the effectiveness of our proposed model. Table 3 lists some responses generated by our proposed Incremental Transformer with Deliberation Decoder (ITE+DD) and Wizard Transformer (which achieves overall best performance among baseline models). Our proposed model can generate better responses than Wizard Transformer on knowledge relevance and context coherence.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 88, |
| "end": 95, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Case Study", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "To demonstrate the effectiveness of the twopass decoder, we compare the results from the first-pass decoding and the second-pass decoding. Table 4 shows the improvement after the secondpass decoding. For Case 1, the second-pass decoding result revises the knowledge error in the first-pass decoding result. For Case 2, the secondpass decoder uses more detailed knowledge than the first-pass one. For Case 3, the second-pass decoder cannot only respond to the previous utterance but also guide the following conversations by asking some knowledge related questions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 146, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Case Study", |
| "sec_num": "3.7" |
| }, |
| { |
| "text": "The closest work to ours lies in the area of opendomain dialogue system incorporating unstructured knowledge. Ghazvininejad et al. (2018) uses an extended Encoder-Decoder where the decoder is provided with an encoding of both the context and the external knowledge. Parthasarathi and Pineau (2018) uses an architecture containing a Bag-of-Words Memory Network fact encoder and an RNN decoder. Dinan et al. (2018) combines Memory Network architectures to retrieve, read and condition on knowledge, and Transformer architectures to provide text representation and generate outputs. Different from these works, we greatly enhance the Transformer architectures to handle the document knowledge in multi-turn dialogue from two aspects: 1) using attention mechanism to combine document knowledge and context utterances; and 2) exploiting incremental encoding scheme to encode multi-turn knowledge aware conversations.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 137, |
| "text": "Ghazvininejad et al. (2018)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 266, |
| "end": 297, |
| "text": "Parthasarathi and Pineau (2018)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 393, |
| "end": 412, |
| "text": "Dinan et al. (2018)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our work is also inspired by several works in other areas. Zhang et al. (2018) introduces document context into Transformer on document-level Neural Machine Translation (NMT) task. devises the incremental encoding scheme based on rnn for story ending generation task. In our work, we design an Incremental Transformer to achieve a knowledge-aware context representation using an incremental encoding scheme. Xia et al. (2017) first proposes Deliberation Network based on rnn on NMT task. Our Deliberation Decoder is different in two aspects: 1) We clearly devise the two decoders targeting context and knowledge respectively; 2) Our sec-ond pass decoder directly fine tunes the first pass result, while theirs uses both the hidden states and results from the first pass.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 78, |
| "text": "Zhang et al. (2018)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 408, |
| "end": 425, |
| "text": "Xia et al. (2017)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this paper, we propose an Incremental Transformer with Deliberation Decoder for the task of Document Grounded Conversations. Through an incremental encoding scheme, the model achieves a knowledge-aware and context-aware conversation representation. By imitating the real-world human cognitive process, we propose a Deliberation Decoder to optimize knowledge relevance and context coherence. Empirical results show that the proposed model can generate responses with much more relevance, correctness, and coherence compared with the state-of-the-art baselines. In the future, we plan to apply reinforcement learning to further improve the performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://github.com/OpenNMT/OpenNMT-py 2 The code and models are available at https:// github.com/lizekang/ITDD", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/google/seq2seq/ blob/master/bin/tools/multi-bleu.perl", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work is supported by 2018 Tencent Rhino-Bird Elite Training Program, National Natural Science Foundation of China (NO. 61662077, NO.61876174) and National Key R&D Program of China (NO.YS2017YFGH001428). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "6" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Reading wikipedia to answer opendomain questions", |
| "authors": [ |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Fisch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1870--1879", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1870-1879.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Wizard of wikipedia: Knowledge-powered conversational agents", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Dinan", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Shuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Angela", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1811.01241" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A knowledge-grounded neural conversation model", |
| "authors": [ |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yih", |
| "middle": [], |
| "last": "Wen-Tau", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Story ending generation with incremental encoding and commonsense knowledge", |
| "authors": [ |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Guan", |
| "suffix": "" |
| }, |
| { |
| "first": "Yansen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Minlie", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1808.10113" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jian Guan, Yansen Wang, and Minlie Huang. 2018. Story ending generation with incremental encod- ing and commonsense knowledge. arXiv preprint arXiv:1808.10113.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Flowqa: Grasping flow in history for conversational machine comprehension", |
| "authors": [ |
| { |
| "first": "Hsin-Yuan", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Eunsol", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1810.06683" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2018. Flowqa: Grasping flow in history for con- versational machine comprehension. arXiv preprint arXiv:1810.06683.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "OpenNMT: Open-source toolkit for neural machine translation", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuntian", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-4012" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A diversity-promoting objective function for neural conversation models", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Michel", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "110--119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Knowledge diffusion for neural dialogue generation", |
| "authors": [ |
| { |
| "first": "Shuman", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongshen", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaochun", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dawei", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1489--1498", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1489-1498.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Knowledge-grounded response generation with deep attentional latent-variable model", |
| "authors": [ |
| { |
| "first": "Hao-Tong", |
| "middle": [], |
| "last": "Ye", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Ling", |
| "middle": [], |
| "last": "Lo", |
| "suffix": "" |
| }, |
| { |
| "first": "Shang-Yu", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Yun-Nung", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Thirty-Third AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao-Tong Ye, Kai-Ling Lo, Shang-Yu Su, and Yun-Nung Chen. 2019. Knowledge-grounded response generation with deep attentional latent-variable model. In Thirty-Third AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Madotto", |
| "suffix": "" |
| }, |
| { |
| "first": "Chien-Sheng", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascale", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1468--1478", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1468-1478.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Extending neural generative conversational model using external knowledge sources", |
| "authors": [ |
| { |
| "first": "Prasanna", |
| "middle": [], |
| "last": "Parthasarathi", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "690--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Prasanna Parthasarathi and Joelle Pineau. 2018. Extending neural generative conversational model using external knowledge sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 690-695.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Know what you don't know: Unanswerable questions for SQuAD", |
| "authors": [ |
| { |
| "first": "Pranav", |
| "middle": [], |
| "last": "Rajpurkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Robin", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "784--789", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 784-789.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Coqa: A conversational question answering challenge", |
| "authors": [ |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1808.07042" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", |
| "authors": [ |
| { |
| "first": "Iulian", |
| "middle": [ |
| "V" |
| ], |
| "last": "Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sordoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Thirtieth AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Neural responding machine for short-text conversation", |
| "authors": [ |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1577--1586", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1577-1586.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Sequence to sequence learning with neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc V", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A neural conversational model", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1506.05869" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Deliberation networks: Sequence generation beyond one-pass decoding", |
| "authors": [ |
| { |
| "first": "Yingce", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Lijun", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianxin", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| }, |
| { |
| "first": "Nenghai", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tie-Yan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1784--1794", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Advances in Neural Information Processing Systems, pages 1784-1794.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Modeling coherence for discourse neural machine translation", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhongjun", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Hua", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1811.05683" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2018. Modeling coherence for discourse neural machine translation. arXiv preprint arXiv:1811.05683.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Deepcopy: Grounded response generation with hierarchical pointer networks", |
| "authors": [ |
| { |
| "first": "Semih", |
| "middle": [], |
| "last": "Yavuz", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhinav", |
| "middle": [], |
| "last": "Rastogi", |
| "suffix": "" |
| }, |
| { |
| "first": "Guan-Lin", |
| "middle": [], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "Dilek", |
| "middle": [], |
| "last": "Hakkani-T\u00fcr", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-T\u00fcr. 2018. Deepcopy: Grounded response generation with hierarchical pointer networks. Advances in Neural Information Processing Systems.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Qanet: Combining local convolution with global self-attention for reading comprehension", |
| "authors": [ |
| { |
| "first": "Adams", |
| "middle": [ |
| "Wei" |
| ], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Dohan", |
| "suffix": "" |
| }, |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Norouzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc V", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1804.09541" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Improving the transformer translation model with document-level context", |
| "authors": [ |
| { |
| "first": "Jiacheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Huanbo", |
| "middle": [], |
| "last": "Luan", |
| "suffix": "" |
| }, |
| { |
| "first": "Maosong", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Feifei", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingfang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "533--542", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533-542.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A dataset for document grounded conversations", |
| "authors": [ |
| { |
| "first": "Kangyan", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Shrimai", |
| "middle": [], |
| "last": "Prabhumoye", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "708--713", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708-713.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "(1) Detailed architecture of model components. (a) The Self-Attentive Encoder (SA). (b) Incremental Transformer (ITE). (c) Deliberation Decoder (DD). (2) Simplified version of our proposed model used to verify the validity of our proposed Incremental Transformer Encoder and Deliberation Decoder. (d) Knowledge-Attention Transformer (KAT). (e) Context-Knowledge-Attention Decoder (CKAD).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "text": "Automatic evaluation and manual evaluation results for baselines and our proposed models." |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>: The percent(%) of score (0/1/2) of Knowledge</td></tr><tr><td>Relevance and Context Coherence for Wizard Trans-</td></tr><tr><td>former, ITE+CKAD and ITE+DD.</td></tr></table>", |
| "num": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "text": "Responses generated by Incremental Transformer with Deliberation Decoder (ITE+DD) and the Wizard Transformer (Wizard). These utterances are from continuous dialogues. ITE+DD and Wizard generate responses using context utterances and a given document. Note that Utterance i + 1 is the gold response of Utterance i.", |
| }, |
| "TABREF5": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">ID Utterance</td><td>Two-pass Responses</td></tr><tr><td>1</td><td>I think rachel mcadams</td><td>i'm not a fan of</td></tr><tr><td/><td>had an even better role</td><td>kristen bell, but i think</td></tr><tr><td/><td>as regina george</td><td>she did a great job.</td></tr><tr><td/><td>however! would you</td><td>i'm not a huge fan of</td></tr><tr><td/><td>agree?</td><td>rachel mcadams, but</td></tr><tr><td/><td/><td>he did a great job.</td></tr><tr><td>2</td><td>yeah, I guess that's</td><td>yeah, not only does</td></tr><tr><td/><td>always worth it, and a</td><td>she reconcile with the</td></tr><tr><td/><td>truce was made as well.</td><td>plastics.</td></tr><tr><td/><td/><td>yeah, she reconciles</td></tr><tr><td/><td/><td>with janis , damien and</td></tr><tr><td/><td/><td>aaron.</td></tr><tr><td>3</td><td>i liked the scene where</td><td>i think that's one of</td></tr><tr><td/><td>buzz thinks he's a big</td><td>the best scenes in the</td></tr><tr><td/><td>shot hero but then the</td><td>movie.</td></tr><tr><td/><td>camera reveals him to</td><td>oh, i think that is</td></tr><tr><td/><td>be a tiny toy.</td><td>what makes the movie</td></tr><tr><td/><td/><td>unique as well. have</td></tr><tr><td/><td/><td>you seen any of the</td></tr><tr><td/><td/><td>other pixar movies?</td></tr></table>", |
| "num": null, |
| "text": "ITE+DD has a higher percent of score 2 both on Knowledge Relevance and" |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "num": null, |
| "text": "Examples of the two-pass decoding. Underlined texts are the differences between the two results. For each case, the first-pass response is on the top.", |
| } |
| } |
| } |
| } |