{
"paper_id": "J19-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:58:18.086711Z"
},
"title": "A Sequential Matching Framework for Multi-Turn Response Selection in Retrieval-Based Chatbots",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Software Development Environment",
"institution": "Beihang University",
"location": {}
},
"email": "wuyu@buaa.edu.cn"
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "Research and AI Group",
"institution": "Microsoft Corporation",
"location": {}
},
"email": "wuwei@microsoft.com"
},
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nankai University",
"location": {}
},
"email": "xingchen1113@gmail.com"
},
{
"first": "Can",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "Research and AI Group",
"institution": "Microsoft Corporation",
"location": {}
},
"email": "can.xu@microsoft.com"
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Software Development Environment",
"institution": "Beihang University",
"location": {}
},
"email": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Computing Group",
"institution": "Microsoft Research",
"location": {}
},
"email": "mingzhou@microsoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Table 1: An example of multi-turn conversation.\nContext:\nTurn-1 Human: How are you doing?\nTurn-2 ChatBot: I am going to hold a drum class in Shanghai. Anyone wants to join? The location is near Lujiazui.\nTurn-3 Human: Interesting! Do you have coaches who can help me practice drum?\nTurn-4 ChatBot: Of course.\nTurn-5 Human: Can I have a free first lesson?\nResponse Candidates:\nResponse 1: Sure. Have you ever played drum before?\nResponse 2: What lessons do you want?",
"pdf_parse": {
"paper_id": "J19-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Table 1: An example of multi-turn conversation.\nContext:\nTurn-1 Human: How are you doing?\nTurn-2 ChatBot: I am going to hold a drum class in Shanghai. Anyone wants to join? The location is near Lujiazui.\nTurn-3 Human: Interesting! Do you have coaches who can help me practice drum?\nTurn-4 ChatBot: Of course.\nTurn-5 Human: Can I have a free first lesson?\nResponse Candidates:\nResponse 1: Sure. Have you ever played drum before?\nResponse 2: What lessons do you want?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task involves matching a response candidate with a conversation context, the challenges for which include how to recognize important parts of the context and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts, as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. This motivates us to propose a new matching framework that can sufficiently carry important information in contexts to matching and model relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) that models relationships among utterances. Context-response matching is then calculated with the hidden states of the RNN. Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experiment results show that both models can significantly outperform state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us with insights into how they capture and leverage important information in contexts for matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Recent years have witnessed a surge of interest in building conversational agents in both industry and academia. Existing conversational agents can be categorized into task-oriented dialog systems and non-task-oriented chatbots. Dialog systems focus on helping people complete specific tasks in vertical domains (Young et al. 2010), such as flight booking, bus route enquiry, restaurant recommendation, and so forth; chatbots aim to naturally and meaningfully converse with humans on open domain topics (Ritter, Cherry, and Dolan 2011). Building an open domain chatbot is challenging, because it requires the conversational engine to be capable of responding to any input from humans that covers a wide range of topics. To address the problem, researchers have considered leveraging the large amount of conversation data available on the Internet, and proposed generation-based methods (Shang, Lu, and Li 2015; Vinyals and Le 2015; Li et al. 2016b; Mou et al. 2016; Serban et al. 2016; Xing et al. 2017) and retrieval-based methods (Wang et al. 2013; Hu et al. 2014; Ji, Lu, and Li 2014; Wang et al. 2015; Yan, Song, and Wu 2016; Zhou et al. 2016; Wu et al. 2018a). Generation-based methods generate responses with natural language generation models learned from conversation data, while retrieval-based methods re-use existing responses by selecting proper ones from an index of the conversation data. In this work, we study the problem of response selection in retrieval-based chatbots, because retrieval-based chatbots have the advantage of returning informative and fluent responses. Although most existing work on retrieval-based chatbots studies response selection for single-turn conversation (Wang et al. 2013), in which conversation history is ignored, we study the problem in a multi-turn scenario. In a chatbot, multi-turn response selection takes a message and the utterances in its previous turns as input and selects a response that is natural and relevant to the entire context.",
"cite_spans": [
{
"start": 311,
"end": 330,
"text": "(Young et al. 2010)",
"ref_id": "BIBREF58"
},
{
"start": 503,
"end": 535,
"text": "(Ritter, Cherry, and Dolan 2011)",
"ref_id": "BIBREF28"
},
{
"start": 887,
"end": 911,
"text": "(Shang, Lu, and Li 2015;",
"ref_id": "BIBREF35"
},
{
"start": 912,
"end": 932,
"text": "Vinyals and Le 2015;",
"ref_id": null
},
{
"start": 933,
"end": 949,
"text": "Li et al. 2016b;",
"ref_id": "BIBREF16"
},
{
"start": 950,
"end": 966,
"text": "Mou et al. 2016;",
"ref_id": "BIBREF24"
},
{
"start": 967,
"end": 986,
"text": "Serban et al. 2016;",
"ref_id": "BIBREF32"
},
{
"start": 987,
"end": 1003,
"text": "Xing et al. 2017",
"ref_id": "BIBREF52"
},
{
"start": 1034,
"end": 1052,
"text": "(Wang et al. 2013;",
"ref_id": "BIBREF45"
},
{
"start": 1053,
"end": 1068,
"text": "Hu et al. 2014;",
"ref_id": "BIBREF9"
},
{
"start": 1069,
"end": 1089,
"text": "Ji, Lu, and Li 2014;",
"ref_id": "BIBREF11"
},
{
"start": 1090,
"end": 1107,
"text": "Wang et al. 2015;",
"ref_id": "BIBREF46"
},
{
"start": 1108,
"end": 1131,
"text": "Yan, Song, and Wu 2016;",
"ref_id": "BIBREF55"
},
{
"start": 1132,
"end": 1149,
"text": "Zhou et al. 2016;",
"ref_id": "BIBREF60"
},
{
"start": 1150,
"end": 1166,
"text": "Wu et al. 2018a)",
"ref_id": "BIBREF49"
},
{
"start": 1707,
"end": 1725,
"text": "(Wang et al. 2013)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A key step in response selection is measuring the matching degree between an input and response candidates. Different from single-turn conversation, in which the input is a single utterance (i.e., the message), multi-turn conversation requires context-response matching, where both the current message and the utterances in its previous turns should be taken into consideration. The challenges of the task include (1) how to extract important information (words, phrases, and sentences) from the context and leverage the information in matching; and (2) how to model relationships and dependencies among the utterances in the context. Table 1 uses an example to illustrate the challenges. First, to find a proper response for the context, the chatbot must know that \"hold a drum class\" and \"drum\" are important points. Without them, it may return a response relevant to the message (i.e., Turn-5 in the context) but nonsensical in the context (e.g., \"what lessons do you want?\"). On the other hand, words like \"Shanghai\" and \"Lujiazui\" are less useful and even noisy for response selection. The responses from the chatbot may drift to the topic of \"Shanghai\" if the chatbot pays significant attention to these words. Therefore, it is crucial yet non-trivial to let the chatbot understand the important points in the context, leverage them in matching, and at the same time circumvent noise. Second, there is a clear dependency between Turn-5 and Turn-2 in the context, and the order of utterances matters in response selection, because there would be different proper responses if we exchanged Turn-3 and Turn-5.",
"cite_spans": [],
"ref_spans": [
{
"start": 630,
"end": 637,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Existing work, including the recurrent neural network architectures proposed by Lowe et al. (2015), the deep learning to respond architecture proposed by Yan, Song, and Wu (2016), and the multi-view architecture proposed by Zhou et al. (2016), may lose important information in context-response matching because they follow the same paradigm to perform matching, which suffers from clear drawbacks. In fact, although these models have different structures, they can be interpreted with a unified framework: A context and a response are first individually represented as vectors, and then their matching score is computed with the vectors. The context representation includes two layers. The first layer represents utterances in the context, and the second layer takes the output of the first layer as an input and represents the entire context. The existing models differ in how they design the context representation and the response representation and how they calculate the matching score with the two representations. The framework view unifies the existing models and indicates the common drawbacks they share: everything in the context is compressed to one or more fixed-length vectors before matching is conducted; and there is no interaction between the context and the response in the formation of their representations. The context is represented without enough supervision from the response, and so is the response.",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "Lowe et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 155,
"end": 179,
"text": "Yan, Song, and Wu (2016)",
"ref_id": "BIBREF55"
},
{
"start": 226,
"end": 244,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To overcome the drawbacks, we propose a sequential matching network (SMN) for context-response matching in our early work (Wu et al. 2017), where we construct a matching vector for each utterance-response pair through convolution and pooling on their similarity matrices, and then aggregate the sequence of matching vectors as a matching score of the context and the response. In this work, we take it one step further and generalize the SMN model to a sequential matching framework (SMF). The framework view allows us to tackle the challenges of context-response matching from a high level. Specifically, SMF matches each utterance in the context with the response at the first step and forms a sequence of matching vectors. It then accumulates the matching vectors of utterance-response pairs in the chronological order of the utterances. The final context-response matching score is calculated with the accumulation of pair matching. Different from the existing framework, SMF allows utterances in the context and the response to interact with each other at the very beginning, and thus important matching information in each utterance-response pair can be sufficiently preserved and carried to the final matching score. Moreover, relationships and dependencies among utterances are modeled in a matching fashion, so the order of utterances can supervise the aggregation of the utterance-response matching. Generally speaking, SMF consists of three layers. The first layer extracts important matching information from each utterance-response pair and transforms the information into a matching vector. The matching vectors are then fed to the second layer, where a recurrent neural network with gated recurrent units (GRU) (Chung et al. 2014) is used to model the relationships and dependencies among utterances and accumulate the matching vectors into its hidden states. The final layer takes the hidden states of the GRU as input and calculates a matching score for the context and the response. The key to the success of SMF lies in how to design the utterance-response matching layer, which requires identification of important parts in each utterance. We first show that the point-wise similarity calculation followed by convolution and pooling in SMN is one implementation of the utterance-response matching layer of SMF, making the SMN model a special case of the framework. Then, we propose a new model named sequential attention network (SAN), which implements the utterance-response matching layer of SMF with an attention mechanism. Specifically, for an utterance-response pair, SAN lets the response attend to important parts (either words or segments) in the utterance by weighting the parts using each part of the response. Each weight reflects how important the part in the utterance is with respect to the corresponding part in the response. Then for each part in the response, parts in the utterance are linearly combined with the weights, and the combination interacts with the part of the response by a Hadamard product to form a representation of the utterance. Such utterance representations are computed on both a word level and a segment level. The two levels of representations are finally concatenated and processed by a GRU to form a matching vector. SMN and SAN are two different implementations of the utterance-response matching layer, and we give a comprehensive comparison between SAN and SMN. Theoretically, SMN is faster and easier to parallelize than SAN, whereas SAN can better utilize the sequential relationship and dependency. The empirical results are consistent with the theoretical analysis.",
"cite_spans": [
{
"start": 122,
"end": 138,
"text": "(Wu et al. 2017)",
"ref_id": "BIBREF50"
},
{
"start": 1729,
"end": 1747,
"text": "(Chung et al. 2014",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We empirically compare SMN and SAN on two public data sets: the Ubuntu Dialogue Corpus (Lowe et al. 2015) and the Douban Conversation Corpus (Wu et al. 2017). The Ubuntu corpus is a large-scale English data set in which negative instances are randomly sampled and dialogues are collected from a specific domain; the Douban corpus is a newly published Chinese data set in which conversations are crawled from an open domain forum, with response candidates collected following the procedure of retrieval-based chatbots and their appropriateness judged by human annotators. Experimental results show that on both data sets, both SMN and SAN can significantly outperform the existing methods. Particularly, on the Ubuntu corpus, SMN and SAN yield 6 and 7 percentage point improvements, respectively, on R_10@1 over the best-performing baseline method, and on the Douban corpus, the improvements on mean average precision from SMN and SAN over the best baseline are 2.6 and 3.6 percentage points, respectively. The empirical results indicate that SAN can achieve better performance than SMN in practice. In addition to the quantitative evaluation, we also visualize the two models with examples from the Ubuntu corpus. The visualization reveals how the two models understand conversation contexts and provides us with insights into why they can achieve big improvements over state-of-the-art methods.",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Lowe et al. 2015)",
"ref_id": "BIBREF21"
},
{
"start": 141,
"end": 157,
"text": "(Wu et al. 2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This work is a substantial extension of our previous work reported at ACL 2017. The extension in this article includes a unified framework for the existing methods, a proposal of a new framework for context-response matching, and a new model under the framework. Specifically, the contributions of this work include the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We unify existing context-response matching models with a framework and disclose their intercorrelations with detailed mathematical derivations, which reveals their common drawbacks and sheds light on our new direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022",
"sec_num": null
},
{
"text": "We propose a new framework for multi-turn response selection, namely, the sequential matching framework, which is capable of overcoming the drawbacks suffered by the existing models and addressing the challenges of context-response matching in an end-to-end way. The framework indicates that the key to context-response matching is not the 2D convolution and pooling operations in SMN, but a general utterance-response matching function that can capture the important matching information in utterance-response pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022",
"sec_num": null
},
{
"text": "We propose a new architecture, the sequential attention network, under the new framework. Moreover, we compare SAN with SMN on both efficiency and effectiveness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022",
"sec_num": null
},
{
"text": "We conduct extensive experiments on public data sets and verify that SAN achieves new state-of-the-art performance on context-response matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022",
"sec_num": null
},
{
"text": "The rest of the paper is organized as follows: In Section 2 we summarize the related work. We formalize the learning problem in Section 3. In Section 4, we interpret the existing models with a framework. Section 5 elaborates our new framework and gives two models as special cases of the framework. Section 6 gives the learning objective and some training details. In Section 7 we give details of the experiments. In Section 8, we outline our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022",
"sec_num": null
},
{
"text": "We briefly review the history and recent progress of chatbots, and the application of text matching techniques in other tasks. Together with the review of existing work, we clarify the connections and differences between these works and our work in this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Research on chatbots goes back to the 1960s, when ELIZA (Weizenbaum 1966), an early chatbot, was designed with a large number of handcrafted templates and heuristic rules.",
"cite_spans": [
{
"start": 55,
"end": 72,
"text": "(Weizenbaum 1966)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbots",
"sec_num": "2.1"
},
{
"text": "ELIZA required extensive human effort but could only return limited responses. To remedy this, researchers have developed data-driven approaches (Higashinaka et al. 2014). The idea behind data-driven approaches is to build a chatbot with the large amount of conversation data available on social media such as forums and microblogging services. Methods along this line can be categorized into retrieval-based and generation-based ones.",
"cite_spans": [
{
"start": 135,
"end": 160,
"text": "(Higashinaka et al. 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbots",
"sec_num": "2.1"
},
{
"text": "Generation-based chatbots reply to a message with natural language generation techniques. Early work (Ritter, Cherry, and Dolan 2011) regards messages and responses as source language and target language, respectively, and learns a phrase-based statistical machine translation model to translate a message to a response. Recently, together with the success of deep learning approaches, the sequence-to-sequence framework has become the mainstream approach, because it can implicitly capture compositionality and long-span dependencies in languages. Under this framework, many models have been proposed for both single-turn conversation and multi-turn conversation. For example, in single-turn conversation, sequence-to-sequence with an attention mechanism (Shang, Lu, and Li 2015; Vinyals and Le 2015) has been applied to response generation; Li et al. (2016a) proposed a maximum mutual information objective to improve diversity of generated responses; Xing et al. (2017) and Mou et al. (2016) introduced external knowledge into the sequence-to-sequence model; Wu et al. (2018b) proposed decoding a response from a dynamic vocabulary; Li et al. (2016b) incorporated persona information into the sequence-to-sequence model to enhance response consistency with speakers; and Zhou et al. (2018) explored how to generate emotional responses with a memory-augmented sequence-to-sequence model. In multi-turn conversation, Sordoni et al. (2015) compressed a context to a vector with a multi-layer perceptron in response generation; Serban et al. (2016) extended the sequence-to-sequence model to a hierarchical encoder-decoder structure; and under this structure, they further proposed two variants, VHRED (Serban et al. 2017b) and MrRNN (Serban et al. 2017a), to introduce latent and explicit variables into the generation process. Xing et al. (2018) exploited a hierarchical attention mechanism to highlight the effect of important words and utterances in generation. On top of these methods, a reinforcement learning technique (Li et al. 2016c) and an adversarial learning technique (Li et al. 2017) have also been applied to response generation.",
"cite_spans": [
{
"start": 101,
"end": 133,
"text": "(Ritter, Cherry, and Dolan 2011)",
"ref_id": "BIBREF28"
},
{
"start": 755,
"end": 779,
"text": "(Shang, Lu, and Li 2015;",
"ref_id": "BIBREF35"
},
{
"start": 780,
"end": 800,
"text": "Vinyals and Le 2015)",
"ref_id": null
},
{
"start": 842,
"end": 859,
"text": "Li et al. (2016a)",
"ref_id": "BIBREF15"
},
{
"start": 953,
"end": 971,
"text": "Xing et al. (2017)",
"ref_id": "BIBREF52"
},
{
"start": 976,
"end": 993,
"text": "Mou et al. (2016)",
"ref_id": "BIBREF24"
},
{
"start": 1061,
"end": 1078,
"text": "Wu et al. (2018b)",
"ref_id": "BIBREF51"
},
{
"start": 1135,
"end": 1152,
"text": "Li et al. (2016b)",
"ref_id": "BIBREF16"
},
{
"start": 1417,
"end": 1438,
"text": "Sordoni et al. (2015)",
"ref_id": "BIBREF38"
},
{
"start": 1526,
"end": 1546,
"text": "Serban et al. (2016)",
"ref_id": "BIBREF32"
},
{
"start": 1709,
"end": 1730,
"text": "(Serban et al. 2017b)",
"ref_id": "BIBREF33"
},
{
"start": 1741,
"end": 1762,
"text": "(Serban et al. 2017a)",
"ref_id": null
},
{
"start": 1835,
"end": 1853,
"text": "Xing et al. (2018)",
"ref_id": "BIBREF53"
},
{
"start": 2025,
"end": 2042,
"text": "(Li et al. 2016c)",
"ref_id": "BIBREF17"
},
{
"start": 2081,
"end": 2097,
"text": "(Li et al. 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbots",
"sec_num": "2.1"
},
{
"text": "Different from the generation-based systems, retrieval-based chatbots select a proper response from an index and re-use it to reply to a new input. The key to response selection is how to match the input with a response. In a single-turn scenario, matching is conducted between a message and a response. For example, Hu et al. (2014) proposed message-response matching with convolutional neural networks; Wang et al. (2015) incorporated syntax information into matching; Ji, Lu, and Li (2014) combined several matching features, such as cosine, topic similarity, and translation score, to rank response candidates. In multi-turn conversation, matching requires taking the entire context into consideration. In this scenario, Lowe et al. (2015) used a dual long short-term memory (LSTM) model to match a response with the literal concatenation of utterances in a context; Yan, Song, and Wu (2016) reformulated the input message with the utterances in its previous turns and performed matching with a deep neural network architecture; Zhou et al. (2016) adopted an utterance view and a word view in matching to model relationships among utterances; and Wu et al. (2017) proposed a sequential matching network that can capture important information in contexts and model relationships among utterances in a unified form.",
"cite_spans": [
{
"start": 322,
"end": 338,
"text": "Hu et al. (2014)",
"ref_id": "BIBREF9"
},
{
"start": 730,
"end": 748,
"text": "Lowe et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 876,
"end": 900,
"text": "Yan, Song, and Wu (2016)",
"ref_id": "BIBREF55"
},
{
"start": 1038,
"end": 1056,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF60"
},
{
"start": 1156,
"end": 1172,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbots",
"sec_num": "2.1"
},
{
"text": "Our work is a retrieval-based method. It is an extension of the work by Wu et al. (2017) reported at the ACL conference. In this work, we analyze the existing models from a framework view, generalize the model in Wu et al. (2017) to a framework, give another implementation with better performance under the framework, and compare the new model with the model in the conference paper on various aspects.",
"cite_spans": [
{
"start": 72,
"end": 88,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF50"
},
{
"start": 213,
"end": 229,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chatbots",
"sec_num": "2.1"
},
{
"text": "In addition to response selection in chatbots, neural network-based text matching techniques have proven effective in capturing semantic relations between text pairs in a variety of NLP tasks. For example, in question answering, convolutional neural networks (Qiu and Huang 2015; Severyn and Moschitti 2015) can effectively capture compositions of n-grams and their relations in questions and answers. Inner-Attention (Wang, Liu, and Zhao 2016) and multiple view (MV)-LSTM (Wan et al. 2016a) have also been applied to question-answer matching (see also Yin and Sch\u00fctze [2015]). In Web search, Shen et al. (2014) and Huang et al. (2013) built a neural network with tri-letters to alleviate mismatching of queries and documents due to spelling errors. In textual entailment, the model in Rockt\u00e4schel et al. (2015) utilized a word-by-word attention mechanism to distinguish the relationship between two sentences. Wang and Jiang (2016b) introduced another way to adopt an attention mechanism for textual entailment. Besides those two works, Chen et al. (2016), Parikh et al. (2016), and Wang and Jiang (2016a) also investigated the textual entailment problem with neural network models.",
"cite_spans": [
{
"start": 258,
"end": 278,
"text": "(Qiu and Huang 2015;",
"ref_id": "BIBREF27"
},
{
"start": 279,
"end": 306,
"text": "Severyn and Moschitti 2015)",
"ref_id": "BIBREF34"
},
{
"start": 417,
"end": 443,
"text": "(Wang, Liu, and Zhao 2016)",
"ref_id": "BIBREF44"
},
{
"start": 472,
"end": 490,
"text": "(Wan et al. 2016a)",
"ref_id": "BIBREF42"
},
{
"start": 491,
"end": 513,
"text": "Yin and Sch\u00fctze [2015]",
"ref_id": "BIBREF56"
},
{
"start": 532,
"end": 550,
"text": "Shen et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 555,
"end": 574,
"text": "Huang et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 725,
"end": 750,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF29"
},
{
"start": 850,
"end": 872,
"text": "Wang and Jiang (2016b)",
"ref_id": "BIBREF48"
},
{
"start": 995,
"end": 1015,
"text": "Parikh et al. (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Matching",
"sec_num": "2.2"
},
{
"text": "In this work, we study text matching for response selection in multi-turn conversation, in which matching is conducted between a piece of text and a context that consists of multiple pieces of text dependent on each other. We propose a new matching framework that is able to extract important information in the context and model dependencies among utterances in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Matching",
"sec_num": "2.2"
},
{
"text": "Suppose that we have a data set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "D = {(y_i, s_i, r_i)}_{i=1}^{N}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": ", where s_i is a conversation context, r_i is a response candidate, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "y_i \u2208 {0, 1} is a label. s_i = {u_{i,1}, . . . , u_{i,n_i}}, where {u_{i,k}}_{k=1}^{n_i} are utterances. \u2200k, u_{i,k} = (w_{u_{i,k},1}, . . . , w_{u_{i,k},j}, . . . , w_{u_{i,k},n_{u_{i,k}}})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "where w_{u_{i,k},j} is the j-th word in u_{i,k} and n_{u_{i,k}} is the length of u_{i,k}. Similarly,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "r_i = (w_{r_i,1}, . . . , w_{r_i,j}, . . . , w_{r_i,n_{r_i}}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": ") where w_{r_i,j} is the j-th word in r_i and n_{r_i} is the length of the response. y_i = 1 if r_i is a proper response to s_i, otherwise y_i = 0. Our goal is to learn a matching model g(\u2022, \u2022) with D, and thus for any new context-response pair (s, r), g(s, r) measures their matching degree. According to g(s, r), we can rank candidates for s and select a proper one as its response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "In the following sections, we first review how the existing work defines g(\u2022, \u2022) from a framework view. The framework view discloses the common drawbacks of the existing work. Then, based on this analysis, we propose a new matching framework and give two models under the framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3."
},
{
"text": "Before our work, a few studies on context-response matching for response selection in multi-turn conversation have been conducted. For example, Lowe et al. (2015) match a response with the literal concatenation of utterances in a context using a dual LSTM; Yan, Song, and Wu (2016) propose a deep learning to respond architecture for multi-turn response selection; and Zhou et al. (2016) perform context-response matching from both a word view and an utterance view. Although these models are proposed from different backgrounds, we find that they can be interpreted with a unified framework, given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(s, r) = m h f (u 1 ), . . . , f (u n ) , f (r)",
"eq_num": "(1)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "The existing models are special cases under the framework with different definitions of f(·), h(·), f'(·), and m(·, ·). For example, in the RNN-based models (Lowe et al. 2015), the matching function is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "m rnn (s, r) = \u03c3 h rnn f rnn (u 1 ), . . . , f rnn (u n ) \u2022 M \u2022 f rnn (r) + b (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where M is a linear transformation, b is a bias, and \u03c3(\u2022) is a sigmoid function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "\u2200u i = {w u i ,1 , . . . , w u i ,n i }, f rnn (u i ) is defined by f rnn (u i ) = w u i ,1 , . . . , w u i ,k , . . . , w u i ,n i (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where w u i ,k is the embedding of the k-th word w u i ,k , and [\u2022] denotes a horizontal concatenation operator on vectors or matrices. 1 Suppose that the dimension of the word embedding is d, then the output of f rnn (u i ) is a d \u00d7 n i matrix with each column an embedding vector. Suppose that r = (w r,1 , . . . , w r,n r ), then f rnn (r) is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f rnn (r) = RNN( w r,1 , . . . , w r,k , . . . , w r,n r )",
"eq_num": "(4)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where w r,k is the embedding of the k-th word in r, and RNN(\u2022) is either a vanilla RNN (Elman 1990) or an RNN with LSTM units (Hochreiter and Schmidhuber 1997) . RNN(\u2022) takes a sequence of vectors as an input, and outputs the last hidden state of the network. Finally, the context representation h rnn (\u2022) is defined by",
"cite_spans": [
{
"start": 87,
"end": 99,
"text": "(Elman 1990)",
"ref_id": "BIBREF5"
},
{
"start": 126,
"end": 159,
"text": "(Hochreiter and Schmidhuber 1997)",
"ref_id": "BIBREF8"
},
{
"start": 162,
"end": 168,
"text": "RNN(\u2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h rnn f rnn (u 1 ), . . . , f rnn (u n ) = RNN [f rnn (u 1 ), . . . , f rnn (u n )]",
"eq_num": "(5)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
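Equations (2)-(5) can be sketched end to end in a few lines of numpy. This is an illustrative transcription, not the trained model: dimensions are toy-sized, the RNN is a vanilla Elman cell, and all parameters (`W_x`, `W_h`, `M`, `b`) are randomly initialized stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding size = hidden size, for simplicity

W_x = rng.normal(scale=0.1, size=(d, d))  # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(d, d))  # hidden-to-hidden weights
M = rng.normal(scale=0.1, size=(d, d))    # bilinear matching matrix of Eq (2)
b = 0.0

def rnn_last_state(embeddings):
    """Vanilla (Elman) RNN over the columns of a d x n embedding matrix;
    returns the last hidden state, as RNN(.) does in Eqs (4)-(5)."""
    h = np.zeros(d)
    for t in range(embeddings.shape[1]):
        h = np.tanh(W_x @ embeddings[:, t] + W_h @ h)
    return h

def m_rnn(utterances, response):
    """Eq (2): sigmoid( h_rnn(context)^T M f'_rnn(response) + b )."""
    context = np.concatenate(utterances, axis=1)  # horizontal concat, Eq (5)
    h_ctx = rnn_last_state(context)
    h_res = rnn_last_state(response)              # Eq (4)
    return 1.0 / (1.0 + np.exp(-(h_ctx @ M @ h_res + b)))

utts = [rng.normal(size=(d, 3)), rng.normal(size=(d, 2))]  # two utterances
resp = rng.normal(size=(d, 4))
score = m_rnn(utts, resp)
```

Note how the context enters the score only through the single vector `h_ctx`, which is exactly the compression the framework view criticizes below.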
{
"text": "In the deep learning to respond (DL2R) architecture (Yan, Song, and Wu 2016) , the authors first transform the context s to an s = {v 1 , . . . , v o } with heuristics including \"no context,\" \"whole context,\" \"add-one,\" \"drop-out,\" and \"combined.\" These heuristics differ on how utterances before the last input in the context are incorporated into matching. In \"no context,\" s = {u n }, and thus no previous utterances are considered; in \"whole context,\"",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Yan, Song, and Wu 2016)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "s = {u 1 \u2022 \u2022 \u2022 u n , u n }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where operator glues vectors together and forms a long vector. Therefore, in \"whole context,\" the conversation context is represented as a concatenation of all its utterances; in \"add-one,\" s = {u 1 u n , . . . , u n\u22121 u n , u n }. \"add-one\" leverages the conversation context by concatenating each of its utterances (except the last one) with the last input; in \"drop-out,\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "s = {(c\\u 1 ) u n , . . . , (c\\u n\u22121 ) u n , u n } where c = u 1 \u2022 \u2022 \u2022 u n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "and c\\u i means excluding u i from c. \"drop-out\" also utilizes each utterance before the last one individually, but concatenates the complement of each utterance with the last input; and in \"combined,\" s is the union of the other heuristics. Let v o = u n in all heuristics, then the matching model of DL2R can be reformulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m dl2r (s, r) = o i=1 MLP( f dl2r (v i ) f dl2r (v o )) \u2022 MLP( f dl2r (v i ) f dl2r (r))",
"eq_num": "(6)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MLP(\u2022) is a multi-layer perceptron. \u2200v \u2208 {v 1 , . . . , v o }, suppose that { w v,1 , . . . , w v,n v } represent embedding vectors of the words in v, then f dl2r (v) is given by f dl2r (v) = CNN Bi-LSTM( w v,1 , . . . , w v,n v )",
"eq_num": "(7)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where CNN(·) is a convolutional neural network with max pooling, and Bi-LSTM(·) is a bidirectional recurrent neural network with LSTM units. Under the framework, h_dl2r(·) is an identity mapping that keeps {f_dl2r(v_1), . . . , f_dl2r(v_o)}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
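The five DL2R reformulation heuristics are easy to make precise with utterances as word lists and ⊕ as list concatenation. This is our own illustrative transcription of the definitions above; the heuristic names are the paper's, everything else (function name, data layout) is ours.

```python
def reformulate(context):
    """Build the reformulated context s' for each DL2R heuristic.
    `context` is a list of utterances; each utterance is a list of words."""
    u_n = context[-1]
    whole = [w for u in context for w in u]  # c = u_1 + ... + u_n
    no_context = [u_n]
    whole_context = [whole, u_n]
    add_one = [u + u_n for u in context[:-1]] + [u_n]
    # drop-out: for each earlier utterance, use the complement c \ u_i, glued to u_n
    drop_out = [[w for v in context if v is not u for w in v] + u_n
                for u in context[:-1]] + [u_n]
    combined = no_context + whole_context + add_one + drop_out
    return {"no context": no_context, "whole context": whole_context,
            "add-one": add_one, "drop-out": drop_out, "combined": combined}

ctx = [["hello"], ["hi", "there"], ["bye"]]
views = reformulate(ctx)
```

Each element v_i of the returned lists would then be encoded by f_dl2r(·) and matched against v_o and the response as in Equation (6).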
{
"text": "Note that in the paper of Yan, Song, and Wu (2016) , the authors also assume that each response candidate is associated with an antecedent posting p. This assumption does not always hold in multi-turn response selection. For example, in the Ubuntu Dialog Corpus (Lowe et al. 2015) , there are no antecedent postings. To make the framework compatible with their assumption, we can simply extend",
"cite_spans": [
{
"start": 26,
"end": 50,
"text": "Yan, Song, and Wu (2016)",
"ref_id": "BIBREF55"
},
{
"start": 262,
"end": 280,
"text": "(Lowe et al. 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "f dl2r (r) to [f dl2r (p), f dl2r (r)], and define m dl2r (s, r) as o i=1 \uf8eb \uf8ed MLP( f dl2r (v i ) f dl2r (v o )) \u2022 \uf8eb \uf8ed p MLP( f dl2r (v i ) f dl2r (p)) \u2022 MLP( f dl2r (v i ) f dl2r (r)) \uf8f6 \uf8f8 \uf8f6 \uf8f8 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "Finally, in Zhou et al. (2016) , the multi-view matching model can be rewritten as",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "m mv (s, r) = \u03c3 h mv ( f mv (u 1 ), . . . , f mv (u n )) M 1 M 2 f mv (r) + b 1 b 2 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where M 1 and M 2 are linear transformations, b 1 and b 2 are biases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "\u2200u i = {w u i ,1 , . . . , w u i ,n i }, f mv (u i ) is defined as f mv (u i ) = {f w (u i ), f u (u i )} (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where f w (u i ) and f u (u i ) are utterance representations from a word view and an utterance view, respectively. The formulation of f w (u i ) and f u (u i ) are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f w (u i ) = w u i ,1 , . . . , w u i ,n i f u (u i ) = CNN( w u i ,1 , . . . , w u i ,n i ) Suppose that r = (w r,1 , . . . , w r,n r ), then f mv (r) is defined as f mv (r) = [f w (r) , f u (r) ]",
"eq_num": "(11)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where the word view representation f w (r) and the utterance view representation f u (r) are formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "f w (r) = GRU( w r,1 , . . . , w u r,nr ) f u (r) = CNN( w r,1 , . . . , w u r,nr )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where GRU(\u2022) is a recurrent neural network with GRUs (Cho et al. 2014) . The output of f w (r) is the last hidden state of the GRU model. The context representation",
"cite_spans": [
{
"start": 53,
"end": 70,
"text": "(Cho et al. 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h mv ( f mv (u 1 ), . . . , f mv (u n )) is defined as h mv ( f mv (u 1 ), . . . , f mv (u n )) = [h w ( f w (u 1 ), . . . , f w (u n )) , h u ( f u (u 1 ), . . . , f u (u n )) ]",
"eq_num": "(12)"
}
],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "where the word view h w (\u2022) and the utterance view h u (\u2022) are defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "h w ( f w (u 1 ), . . . , f w (u n )) = GRU [f w (u 1 ), . . . , f w (u n )] h u ( f u (u 1 ), . . . , f u (u n )) = GRU f u (u 1 ), . . . , f u (u n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
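The data flow of the multi-view context representation in Equation (12), two independent view-level summaries concatenated into one vector, can be sketched as follows. The recurrent summarizer is replaced here by a mean-pooling stand-in of our own, purely to show the structure; the actual model uses GRUs.

```python
import numpy as np

def reduce_seq(vectors):
    """Stand-in for GRU(...): summarize a sequence of vectors (not the real unit)."""
    return np.mean(vectors, axis=0)

def h_mv(word_view_reprs, utterance_view_reprs):
    """Concatenate the word-view and utterance-view summaries, as in Eq (12)."""
    h_w = reduce_seq(word_view_reprs)       # summary over f_w(u_1), ..., f_w(u_n)
    h_u = reduce_seq(utterance_view_reprs)  # summary over f_u(u_1), ..., f_u(u_n)
    return np.concatenate([h_w, h_u])

d = 3
ctx_word = [np.ones(d), 3 * np.ones(d)]  # toy f_w(u_1), f_w(u_2)
ctx_utt = [np.zeros(d), 2 * np.ones(d)]  # toy f_u(u_1), f_u(u_2)
rep = h_mv(ctx_word, ctx_utt)
```

The concatenated vector `rep` then plays the role of h_mv(·) in the matching score of Equation (9).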
{
"text": "There are several advantages when applying the framework view to the existing context-response matching models. First, it unifies the existing models and reveals the instinct connections among them. These models are nothing but similarity functions of a context representation and a response representation. Their difference on performance comes from how well the two representations capture the semantics and the structures of the context and the response and how accurate the similarity calculation is. For example, in empirical studies, the multi-view model performs much better than the RNN models. This is because the multi-view model captures the sequential relationship among words, the composition of n-grams, and the sequential relationship of utterances by h w (\u2022) and h u (\u2022); whereas in RNN models, only the sequential relationship among words are modeled by h rnn (\u2022). Second, it is easy to make an extension of the existing models by replacing f (\u2022), f (\u2022), h(\u2022), and m(\u2022, \u2022). For example, we can replace the h rnn (\u2022) in RNN models with a composition of CNN and RNN to model both composition of n-grams and their sequential relationship, and we can replace the m rnn (\u2022) with a more powerful neural tensor network (Socher et al. 2013) . Third, the framework unveils the limitations the existing models and their possible extensions suffer: Everything in the context are compressed to one or more fixed-length vectors before matching; and there is no interaction between the context and the response in the formation of their representations. The context is represented without enough supervision from the response, and so is the response. As a result, these models may lose important information of contexts in matching, and more seriously, no matter how we improve them, as long as the improvement is under the framework, we cannot overcome the limitations. 
The framework view motivates us to propose a new framework that can essentially change the existing matching paradigm.",
"cite_spans": [
{
"start": 1229,
"end": 1249,
"text": "(Socher et al. 2013)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for the Existing Models",
"sec_num": "4."
},
{
"text": "We propose a sequential matching framework (SMF) that can simultaneously capture important information in a context and model relationships among utterances in the context. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "2 u ( ) h ( ) m Matching accumulation ( , ) f ( , ) f ( , ) f Utterance-response matching \uf0d7 \uf0d7 \uf0d7 Figure 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "Our new framework for multi-turn response selection, which is called the Sequential Matching Framework. It first computes a matching vector between an utterance and a response, then the matching vectors are accumulated by a GRU. Finally, the matching score is obtained with the hidden states in the second layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "components are organized in a three-layer architecture. Given a context s = {u 1 , . . . , u n } and a response candidate r, the first layer matches each u i in s with r through f (\u2022, \u2022) and forms a sequence of matching vectors {f (u 1 , r), . . . , f (u n , r)}. Here, we require f (\u2022, \u2022) to be capable of differentiating important parts from unimportant parts in u i and carry the important information into f (u i , r). Details of how to design such a f (\u2022, \u2022) will be described later. The matching vectors {f (u 1 , r), . . . , f (u n , r)} are then uploaded to the second layer where h(\u2022) models relationships and dependencies among the utterances {u 1 , . . . u n }. Here, we define h(\u2022) as a recurrent neural network whose output is a sequence of hidden states {h 1 , . . . , h n }. \u2200k \u2208 {1, . . . , n}, h k is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h k = h h k\u22121 , f (u k , r)",
"eq_num": "(13)"
}
],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "h (\u2022, \u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "is a non-linear transformation, and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "h 0 = 0. h(\u2022) accumulates matching vectors {f (u 1 , r), . . . , f (u n , r)} in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(s, r) = m h f (u 1 , r), f (u 2 , r), . . . , f (u n i r)",
"eq_num": "(14)"
}
],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "SMF has two major differences over the existing framework: first, SMF lets each utterance in the context and the response \"meet\" at the very beginning, and therefore utterances and the response can sufficiently interact with each other. Through the interaction, the response will help recognize important information in each utterance. The information is preserved in the matching vectors and carried into the final matching score with minimal loss; second, matching and utterance relationships are coupled rather than separately modeled as in the existing framework. Hence, the utterance relationships (e.g., the order of the utterances), as a kind of knowledge, can supervise the formation of the matching score. Because of the differences, SMF can overcome the drawbacks the existing models suffer and tackle the two challenges of context-response matching simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
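The three-layer data flow of SMF (Equations (13)-(14)) can be written out directly: per-utterance matching first, recurrent accumulation second, score last. The functions `f`, `h_step`, and the sigmoid readout below are simple stand-ins of our own with random toy parameters; only the architecture they are wired into follows the framework.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

W_in = rng.normal(scale=0.1, size=(d, d))   # accumulator input weights
W_rec = rng.normal(scale=0.1, size=(d, d))  # accumulator recurrent weights
w_out = rng.normal(scale=0.1, size=d)       # readout weights for m(.)

def f(utterance, response):
    """Stand-in utterance-response matching vector f(u, r)."""
    return np.tanh(utterance + response)

def h_step(h_prev, match_vec):
    """Eq (13): h_k = h'(h_{k-1}, f(u_k, r)), here a simple tanh recurrence."""
    return np.tanh(W_in @ match_vec + W_rec @ h_prev)

def g(context, response):
    """Eq (14): accumulate the matching vectors, then score the last state."""
    h = np.zeros(d)
    for u in context:                 # matching happens before accumulation
        h = h_step(h, f(u, response))
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))  # m(.) as a sigmoid readout

ctx = [rng.normal(size=d) for _ in range(3)]  # three toy utterance vectors
r = rng.normal(size=d)
score = g(ctx, r)
```

Unlike the earlier RNN sketch, the response enters every `f(u, response)` call, so no utterance is compressed without supervision from the response.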
{
"text": "It is obvious that the success of SMF lies in how to design f (\u2022, \u2022), because f (\u2022, \u2022) plays a key role in capturing important information in a context. In the following sections, we will first specify the design of f (\u2022, \u2022), and then discuss how to define h(\u2022) and m(\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Matching Framework",
"sec_num": "5."
},
{
"text": "We design the utterance-response matching function f (\u2022, \u2022) in SMF as neural networks to benefit from their powerful representation abilities. To guarantee that f (\u2022, \u2022) can capture important information in utterances with the help of the response, we implement f (\u2022, \u2022) using a convolution-pooling technique and an attention technique, which results in a sequential convolutional network (SCN) and a sequential attention network (SAN). Moreover, in both SCN and SAN, we consider matching on multiple levels of granularity of text. Note that in our ACL paper (Wu et al. 2017) , the sequential convolutional network is named \"SMN.\" Here, we rename it to SCN in order to distinguish it from the framework.",
"cite_spans": [
{
"start": 559,
"end": 575,
"text": "(Wu et al. 2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "5.1"
},
{
"text": "The architecture of SCN. The first layer extracts matching information from interactions between utterances and a response on a word level and a segment level by a CNN. The second layer accumulates the matching information from the first layer by a GRU. The third layer takes the hidden states of the second layer as an input and calculates a matching score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "5.1.1 Sequential Convolutional Network. Figure 3 gives the architecture of SCN. Given an utterance u in a context s and a response candidate r, SCN looks up an embedding table and represents u and r as U = e u,1 , . . . , e u,n u and R = e r,1 , . . . , e r,n r , respectively, where e u,i , e r,i \u2208 R d are the embeddings of the i-th word of u and r, respectively. With U and R, SCN constructs a word-word similarity matrix M 1 \u2208 R n u \u00d7n r and a sequence-sequence similarity matrix M 2 \u2208 R n u \u00d7n r as two input channels of a convolutional neural network (CNN). The CNN then extracts important matching information from the two matrices and encodes the information into a matching vector v.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Specifically, \u2200i, j, the (i, j)-th element of M 1 is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e 1,i,j = e u,i \u2022 e r,j",
"eq_num": "(15)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "M 1 models the interaction between u and r on a word level. To get M 2 , we first transform U and R to sequences of hidden vectors with a GRU. Suppose that H u = h u,1 , . . . , h u,n u are the hidden vectors of U, then \u2200i, h u,i \u2208 R m is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z i = \u03c3(W z e u,i + U z h u,i\u22121 ) r i = \u03c3(W r e u,i + U r h u,i\u22121 ) h u,i = tanh(W h e u,i + U h (r i h u,i\u22121 )) h u,i = z i h u,i + (1 \u2212 z i ) h u,i\u22121",
"eq_num": "(16)"
}
],
"section": "Figure 3",
"sec_num": null
},
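The GRU update in Equation (16) transcribes almost line for line into numpy: update gate, reset gate, candidate state, and the gated interpolation. The parameters below are randomly initialized and the dimensions are toy-sized, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 3, 5  # embedding size and hidden size

W_z, W_r, W_h = (rng.normal(scale=0.1, size=(m, d)) for _ in range(3))
U_z, U_r, U_h = (rng.normal(scale=0.1, size=(m, m)) for _ in range(3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e, h_prev):
    """One step of Eq (16) for input embedding e and previous state h_prev."""
    z = sigmoid(W_z @ e + U_z @ h_prev)              # update gate z_i
    r = sigmoid(W_r @ e + U_r @ h_prev)              # reset gate r_i
    h_tilde = np.tanh(W_h @ e + U_h @ (r * h_prev))  # candidate state
    return z * h_tilde + (1.0 - z) * h_prev          # gated interpolation

def gru(embeddings):
    """Run the cell over a sequence of embeddings; return all hidden states (H_u)."""
    h = np.zeros(m)
    states = []
    for e in embeddings:
        h = gru_step(e, h)
        states.append(h)
    return np.stack(states)

H_u = gru(rng.normal(size=(4, d)))  # hidden states for four word embeddings
```

Because each state interpolates between a tanh candidate and the previous state, every hidden coordinate stays strictly inside (-1, 1) when starting from h_0 = 0.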
{
"text": "where h u,0 = 0, z i and r i are an update gate and a reset gate respectively, \u03c3(\u2022) is a sigmoid function, and W z , W h , W r , U z , U r ,U h are parameters. Similarly, we have H r = h r,1 , . . . , h r,n r as the hidden vectors of R. Then, \u2200i, j, the (i, j)-th element of M 2 is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e 2,i,j = h u,i Ah r,j",
"eq_num": "(17)"
}
],
"section": "Figure 3",
"sec_num": null
},
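The two input channels of SCN are just two matrix products: M_1 from Equation (15) over word embeddings and M_2 from Equation (17) over hidden states. In this sketch the hidden states are random stand-ins rather than GRU outputs, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_u, n_r, d, m = 5, 4, 6, 3

U = rng.normal(size=(n_u, d))    # word embeddings of the utterance (rows)
R = rng.normal(size=(n_r, d))    # word embeddings of the response (rows)
H_u = rng.normal(size=(n_u, m))  # utterance hidden states (stand-ins for GRU output)
H_r = rng.normal(size=(n_r, m))  # response hidden states (stand-ins)
A = rng.normal(size=(m, m))      # bilinear transformation of Eq (17)

M1 = U @ R.T          # M1[i, j] = e_{u,i} . e_{r,j}, word-level channel
M2 = H_u @ A @ H_r.T  # M2[i, j] = h_{u,i}^T A h_{r,j}, segment-level channel
```

The two n_u x n_r matrices are then stacked as channels and fed to the CNN described next.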
{
"text": "where A \u2208 R m\u00d7m is a linear transformation. \u2200i, GRU encodes the sequential information and the dependency among words until position i in u into the i-th hidden state. As a consequence, M 2 models the interaction between u and r on a segment level. M 1 and M 2 are then processed by a CNN to compute the matching vector v. \u2200f = 1, 2, CNN regards M f as an input channel, and alternates convolution and max-pooling operations. If we denote the k-th feature map at the l-th layer as z k , whose filters are determined by a tensor W k and a bias b k , then the feature map z k is obtained as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z k i,j = \u03c3((W k * z ) i,j + b k ) (18) z k i,j = \u03c3 u W u k * z u i,j + b k",
"eq_num": "(19)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where \u03c3(\u2022) is a ReLU, W u k is the weight of the u-th feature map, z = (z 1 . . . z u . . . z U ) is feature maps on the (l \u2212 1)-th layer, and U is the number of feature maps. Notably, * is a 2D convolutional operation, sliding a window on feature maps at that layer, that is formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(W * o) m,n = width i=0 height j=0 W i,j \u2022 o m+i,n+j",
"eq_num": "(20)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where width and height are the hyper-parameters of the convolutional window, and o = z u . A max pooling operation follows a convolution operation and picks the maximal values within a window sliding on the output of the convolution operation, and carries out a linear transformation on the feature values within the window. The max pooling operation can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z k i,j = max z (i:i+p w ,j:j+p h )",
"eq_num": "(21)"
}
],
"section": "Figure 3",
"sec_num": null
},
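Equations (20)-(21) can be checked with a loop-based transcription that mirrors the summations and the window maximum literally (a deliberately naive sketch, not an efficient implementation; the kernel, input, and pooling sizes below are ours).

```python
import numpy as np

def conv2d(W, o):
    """(W * o)[m, n] = sum_{i,j} W[i, j] * o[m + i, n + j], valid mode (Eq 20)."""
    kh, kw = W.shape
    oh, ow = o.shape[0] - kh + 1, o.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for mi in range(oh):
        for ni in range(ow):
            out[mi, ni] = np.sum(W * o[mi:mi + kh, ni:ni + kw])
    return out

def max_pool(z, p_w, p_h):
    """Pick the maximum inside each non-overlapping p_w x p_h window (Eq 21)."""
    oh, ow = z.shape[0] // p_w, z.shape[1] // p_h
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = z[i * p_w:(i + 1) * p_w, j * p_h:(j + 1) * p_h].max()
    return out

feature = conv2d(np.ones((2, 2)), np.arange(16.0).reshape(4, 4))
pooled = max_pool(feature, 2, 2)
```

On the 4 x 4 input 0..15, the 2 x 2 all-ones kernel yields a 3 x 3 feature map whose top-left entry is 0 + 1 + 4 + 5 = 10, and pooling keeps the largest value in each window.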
{
"text": "where p w and p h are the width and the height of the 2D pooling, respectively. The matching vector v is defined by concatenating outputs of the last feature maps and transforming it to a low dimensional space:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v = W c [z 0 , z 1 . . . , z f ] + b c",
"eq_num": "(22)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where f denotes the number of feature maps, W c and b c are parameters, and z k is the concatenation of elements at the k-th feature map, meaning z k = [z k 0,0 , z k 0,1 . . . z k I,J ] where I and J are the maximum indices of the feature map.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "SCN distills important information in each utterance in the context from multiple levels of granularity through convolution and pooling operations on similarity matrices. From Equations (15), (17), (18), and (21), we can see that by learning word embeddings and parameters of GRU from training data, important words or segments in the utterance may have high similarity with some words or segments in the response and result in high value areas in the similarity matrices. These areas will be transformed and extracted to the matching vector by convolutions and poolings. We will further explore the mechanism of SCN by visualizing M 1 and M 2 of an example in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The architecture of SAN. The first layer highlights important words and segments in context, and computes a matching vector from both word level and segment level. Similar to SCN, the second layer uses a GRU to accumulate the matching information, and the third layer predicts the final matching score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "With word embeddings U and R and hidden vectors H u and H r , SAN also performs utterance-response matching on a word level and a segment level. Figure 4 gives the architecture of SAN. In each level of matching, SAN exploits every part of the response (either a word or a hidden state) to weight the parts of the utterance and obtain a weighted representation of the utterance. The utterance representation then interacts with the part of the response. The interactions are finally aggregated following the order of the parts in the response as a matching vector. Specifically, \u2200e r,i \u2208 R, the weight of e u,j \u2208 U is given by",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 i,j = tanh(e u,j W att1 e r,i + b att1 ) (23) \u03b1 i,j = e \u03c9 i,j n u j=1 e \u03c9 i,j",
"eq_num": "(24)"
}
],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "where W att1 \u2208 R d\u00d7d , and b att1 \u2208 R are parameters. \u03c9 i,j \u2208 R represents the importance of e u,j in the utterance corresponding to e r,i in the response. \u03b1 i,j is normalized importance. The interaction between u and e r,i is then defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 1,i = \uf8eb \uf8ed n u j=1 \u03b1 i,j e u,j \uf8f6 \uf8f8 e r,i",
"eq_num": "(25)"
}
],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "where ( n u j=1 \u03b1 i,j e u,j ) is the representation of u with weights {\u03b1 i,j } n u j=1 , and is the Hadamard product. Similarly, \u2200h r,i \u2208 H r , the weight of h u,j \u2208 H u can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 i,j = v tanh(h u,j W att2 h r,i + b att2 ) (26) \u03b1 i,j = e \u03c9 i,j n u j=1 e \u03c9 i,j",
"eq_num": "(27)"
}
],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "W att2 \u2208 R d\u00d7d , v \u2208 R d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": ", and b att2 \u2208 R d are parameters. The interaction between u and h r,i then can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 2,i = n u j=1 \u03b1 i,j h u,j h r,i",
"eq_num": "(28)"
}
],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "We denote the attention weights {\u03b1 i,j } and {\u03b1 i,j } as A 1 and A 2 , respectively. With the word-level interaction T 1 = [t 1,1 , . . . , t 1,n r ] and the segment level interaction",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "T 2 = [t 2,1 , . . . , t 2,n r ], we form a T = [t 1 , . . . , t n r ] by defining t i as [t 1,i , t 2,i ] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "The matching vector v of SAN is then obtained by processing T with a GRU:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v = GRU(T)",
"eq_num": "(29)"
}
],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "where the specific parameterization of GRU(\u2022) is similar to Equation 16, and we take the last hidden state of the GRU as v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
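The word-level attention of Equations (23)-(25) can be vectorized over all response positions at once: score every utterance word against every response word, softmax-normalize over the utterance, and take the Hadamard product of each weighted utterance summary with its response embedding. Parameters and sizes below are random stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_u, n_r, d = 5, 3, 4

U = rng.normal(size=(n_u, d))  # utterance word embeddings (rows e_{u,j})
R = rng.normal(size=(n_r, d))  # response word embeddings (rows e_{r,i})
W_att = rng.normal(scale=0.1, size=(d, d))
b_att = 0.1

def word_level_interaction(U, R):
    # Eq (23): importance scores, one column per response word
    omega = np.tanh(U @ W_att @ R.T + b_att)            # n_u x n_r
    # Eq (24): softmax over the utterance words (axis 0)
    alpha = np.exp(omega) / np.exp(omega).sum(axis=0)   # columns sum to 1
    # Eq (25): weighted utterance summary, Hadamard product with each e_{r,i}
    T1 = (alpha.T @ U) * R                              # rows are t_{1,i}
    return alpha, T1

alpha, T1 = word_level_interaction(U, R)
```

The rows of `T1` are the t_{1,i} vectors; stacked with their segment-level counterparts, they form the sequence T that the GRU of Equation (29) reduces to the matching vector v.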
{
"text": "From Equations 23 and 26, we can see that SAN identifies important information in utterances in a context through an attention mechanism. Words or segments in utterances that are useful for recognizing the appropriateness of a response to the context receive high weights from the response. The information conveyed by these words and segments is highlighted in the interaction between the utterances and the response, and is carried into the matching vector through an RNN that aggregates information in the utterances under the supervision of the response. Similar to SCN, we further investigate the effect of the attention mechanism in SAN by visualizing the attention weights in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "5.1.3 SAN vs. SCN. Because SCN and SAN exploit different mechanisms to identify important parts of contexts, an interesting question arises: what are the advantages and disadvantages of the two models in practice? Here, we leave the empirical comparison of their performance to the experiments and first compare SCN with SAN on the following aspects: (1) the amount of parallelizable computation, measured by the minimum number of sequential operations required; and (2) the total time complexity. Table 2 summarizes the comparison between the two models. In terms of parallelizability, SAN uses two RNNs to learn the representations, which requires 2n sequential operations, whereas SCN needs only n sequentially executed operations in the construction of M 2 . Hence, SCN is easier to parallelize than SAN. In terms of time complexity, the complexity of SCN is O(k \u2022 n \u2022 d^2 + n \u2022 d^2 + n^2 \u2022 d), where k is the number of feature maps in the convolutions, n is max(n u , n r ), and d is the embedding size. More specifically, in SCN, the cost of constructing M 1 and M 2 is O(n \u2022 d^2 + n^2 \u2022 d), and the cost of convolution and pooling is O(k \u2022 n \u2022 d^2). The complexity of SAN is O(n^2 \u2022 d + n^2 \u2022 d^2), where O(n^2 \u2022 d) is the cost of calculating H u and H r , and O(n^2 \u2022 d^2) is the cost of the subsequent attention-based GRU. In practice, k is usually much smaller than the maximum sentence length n.",
"cite_spans": [],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sequential Attention Network.",
"sec_num": "5.1.2"
},
{
"text": "Comparison between SCN and SAN. k is the number of feature maps in the convolutions, n is max(n u , n r ), and d is the embedding size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "Model | time complexity | number of sequential operations
SCN | O(k \u2022 n \u2022 d^2 + n \u2022 d^2 + n^2 \u2022 d) | n
SAN | O(n^2 \u2022 d^2 + n^2 \u2022 d) | 2n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
{
"text": "Therefore, SCN could be faster than SAN. The conclusion is also verified by empirical results in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},
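{
"text": "As an illustration of the comparison above, the following sketch plugs the asymptotic terms from Table 2 into rough operation counts. Constants and lower-order terms are ignored, so this is an illustrative estimate, not a benchmark; the function names and the value of n are our assumptions.

```python
# Rough operation-count comparison between SCN and SAN, using only the
# asymptotic terms from Table 2 (constants ignored).

def scn_ops(k, n, d):
    # O(k*n*d^2) convolution/pooling + O(n*d^2 + n^2*d) similarity matrices
    return k * n * d**2 + n * d**2 + n**2 * d

def san_ops(n, d):
    # O(n^2*d^2) attention-based GRU + O(n^2*d) interaction terms
    return n**2 * d**2 + n**2 * d

# k = 8 feature maps and d = 200 match the settings in Section 7.4;
# n = 50 is the maximum utterance length.
k, n, d = 8, 50, 200
print(scn_ops(k, n, d) < san_ops(n, d))  # SCN needs fewer operations when k << n
```

With these values the dominant n^2 * d^2 term makes SAN roughly five times more expensive than SCN, consistent with the observation that k is usually much smaller than n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 2",
"sec_num": null
},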
{
"text": "The function of matching accumulation h(\u2022) in SMF can be implemented with any recurrent neural networks such as LSTM and GRU. In this work, we fix h(\u2022) as GRU in both SCN and SAN. Given {f (u 1 , r), . . . , f (u n , r)} as the output of the first layer of SMF, the non-linear transformation h (\u2022, \u2022) in Equation 13is formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Accumulation",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z i = \u03c3(W z f (u i , r) + U z h i\u22121 ) r i = \u03c3(W r f (u i , r) + U r h i\u22121 ) h i = tanh(W h f (u i , r) + U h (r i h i\u22121 )) h i = z i h i + (1 \u2212 z i ) h i\u22121",
"eq_num": "(30)"
}
],
"section": "Matching Accumulation",
"sec_num": "5.2"
},
{
"text": "where W z , W h , W r , U z , U r ,U h are parameters, and z i and r i are an update gate and a reset gate, respectively. Here, h i is a hidden state, which encodes the matching information in its previous turns. From Equation (30) we can see that the reset gate (i.e., r i ) and the update gate (i.e., z i ) control how much information from the current matching vector f (u i , r) flows into the accumulation vector h i . Ideally, the two gates should let matching vectors that correspond to important utterances make a great impact to the accumulation vectors (i.e., the hidden states) while blocking the information from the unimportant utterances. In practice, we find that we can achieve this by learning SCN and SAN from large-scale conversation data. The details will be given in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Accumulation",
"sec_num": "5.2"
},
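{
"text": "The update in Equation (30) can be sketched in a few lines of NumPy. The weight matrices below are random stand-ins for the learned parameters, and the dimensions are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of the matching-accumulation GRU cell in Equation (30).
# Weights are random placeholders; in the model they are learned.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(f_i, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    z = sigmoid(W_z @ f_i + U_z @ h_prev)               # update gate z_i
    r = sigmoid(W_r @ f_i + U_r @ h_prev)               # reset gate r_i
    h_tilde = np.tanh(W_h @ f_i + U_h @ (r * h_prev))   # candidate state
    return z * h_tilde + (1 - z) * h_prev               # interpolate old/new

rng = np.random.default_rng(0)
d_in = d_h = 50
Ws = [rng.normal(scale=0.1, size=(d_h, d_in)) for _ in range(3)]
Us = [rng.normal(scale=0.1, size=(d_h, d_h)) for _ in range(3)]

h = np.zeros(d_h)
for f_i in rng.normal(size=(3, d_in)):  # three matching vectors f(u_i, r)
    h = gru_step(f_i, h, Ws[0], Us[0], Ws[1], Us[1], Ws[2], Us[2])
print(h.shape)  # (50,)
```

The final state h plays the role of h_n, which the prediction module in Section 5.3 consumes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Accumulation",
"sec_num": "5.2"
},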
{
"text": "m(\u2022) takes {h 1 , . . . , h n } from h(\u2022) as input and predicts a matching score for (s, r). We consider three approaches to implementing m(\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction",
"sec_num": "5.3"
},
{
"text": "The first approach is that we only use the last hidden state h n to calculate a matching score. The underlying assumption is that important information in the context, after selection by the gates of the GRU, has been encoded into the vector h n . Then m(\u2022) is formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Last State.",
"sec_num": "5.3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m last (h 1 , . . . , h n ) = softmax(W l h n + b l )",
"eq_num": "(31)"
}
],
"section": "Last State.",
"sec_num": "5.3.1"
},
{
"text": "where W l and b l are parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Last State.",
"sec_num": "5.3.1"
},
{
"text": "Average. The second approach is combining all hidden states with weights determined by their positions. In this approach, m(\u2022) can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static",
"sec_num": "5.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m static (h 1 , . . . , h n ) = softmax(W s ( n i=1 w i h i ) + b s )",
"eq_num": "(32)"
}
],
"section": "Static",
"sec_num": "5.3.2"
},
{
"text": "where W s and b s are parameters, and w i is the weight of the i-th hidden state and is learned from data. Note that in m static (\u2022), once {w i } n i=1 are learned, they are fixed for any (s, r) pairs, and that is why we call the approach \"static average.\" Compared with last state, the static average can leverage more information in the early parts of {h 1 , . . . , h n }, and thus can avoid information loss from the process of the GRU in h(\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Static",
"sec_num": "5.3.2"
},
{
"text": "Average. Similar to static average, we also combine all hidden states to calculate a matching score, but the difference is that the combination weights are dynamically computed by the hidden states and the utterance vectors through an attention mechanism as in Bahdanau, Cho, and Bengio (2014) . The weights will change according to the content of the utterances in different contexts, and that is why we call the approach \"dynamic average.\" In this approach, m(\u2022) is defined as",
"cite_spans": [
{
"start": 261,
"end": 293,
"text": "Bahdanau, Cho, and Bengio (2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t i = t s tanh(W d1 h u,n u + W d2 h i + b d1 ) \u03b1 i = exp(t i ) i exp(t i ) m(h 1 , . . . , h n ) = softmax(W d ( n i=1 \u03b1 i h i ) + b d2 )",
"eq_num": "(33)"
}
],
"section": "Dynamic",
"sec_num": "5.3.3"
},
{
"text": "where W_{d1} \u2208 R^{q\u00d7m}, W_{d2} \u2208 R^{q\u00d7q}, b_{d1} \u2208 R^q, W_d \u2208 R^{q\u00d7q}, and b_{d2} \u2208 R^q are parameters. t_s is a virtual context vector that is learned in training. h_i and h_{u,n_u} are the i-th hidden state of h(\u2022) and the final hidden state of the utterance, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic",
"sec_num": "5.3.3"
},
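{
"text": "The three heads can be sketched together as follows. All weights are random placeholders for learned parameters, a two-class softmax stands in for the matching score, one output layer (W, b) is shared across heads for brevity, and the dimension of the utterance vector is assumed equal to the hidden size q; these are our simplifications, not the trained model.

```python
import numpy as np

# Sketch of the three matching-prediction heads in Section 5.3: last state
# (Eq. 31), static average (Eq. 32), and dynamic average (Eq. 33).

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n, q = 5, 50                       # number of turns, hidden size
H = rng.normal(size=(n, q))        # {h_1, ..., h_n} from h(.)
W = rng.normal(scale=0.1, size=(2, q))
b = np.zeros(2)

# 5.3.1 Last state: use only h_n.
m_last = softmax(W @ H[-1] + b)

# 5.3.2 Static average: position weights w_i are learned once, then fixed.
w = softmax(rng.normal(size=n))    # stand-in for learned position weights
m_static = softmax(W @ (w @ H) + b)

# 5.3.3 Dynamic average: weights computed from h_i and the utterance vector.
t_s = rng.normal(size=q)           # virtual context vector
W_d1, W_d2 = (rng.normal(scale=0.1, size=(q, q)) for _ in range(2))
b_d1 = np.zeros(q)
h_u = rng.normal(size=q)           # final hidden state of the utterance
t = np.array([t_s @ np.tanh(W_d1 @ h_u + W_d2 @ h_i + b_d1) for h_i in H])
alpha = softmax(t)                 # attention weights over turns
m_dynamic = softmax(W @ (alpha @ H) + b)

print([round(float(m.sum()), 6) for m in (m_last, m_static, m_dynamic)])  # [1.0, 1.0, 1.0]
```

Each head yields a valid two-class distribution; only the way the hidden states are pooled differs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic",
"sec_num": "5.3.3"
},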
{
"text": "We choose cross entropy as the loss function. Let \u0398 denote the parameters of f ( ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "6."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(D, \u0398) = \u2212 N i=1 y i log(g(s i , r i )) + (1 \u2212 y i )log(1 \u2212 g(s i , r i ))",
"eq_num": "(34)"
}
],
"section": "Model Training",
"sec_num": "6."
},
{
"text": "where N in the number of instances in D. We optimize the objective function using backpropagation and the parameters are updated by stochastic gradient descent with the Adam algorithm (Kingma and Ba 2014) on a single Tesla K80 GPU. The initial learning rate is 0.001, and the parameters of Adam, \u03b2 1 and \u03b2 2 , are 0.9 and 0.999, respectively. We use early-stopping as a regularization strategy. Models are trained in mini-batches with a batch size of 200.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "6."
},
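{
"text": "The objective in Equation (34) reduces to a standard binary cross entropy. A minimal sketch, with toy labels and scores in place of a trained model's outputs:

```python
import numpy as np

# Minimal sketch of the cross-entropy objective in Equation (34) for binary
# labels y_i and matching scores g(s_i, r_i) in (0, 1). The scores below are
# toy values, not outputs of a trained SMF model.

def cross_entropy(y, g, eps=1e-12):
    y = np.asarray(y, dtype=float)
    g = np.clip(np.asarray(g, dtype=float), eps, 1 - eps)  # avoid log(0)
    return -np.sum(y * np.log(g) + (1 - y) * np.log(1 - g))

y = [1, 0, 1, 0]          # positive / negative responses
g = [0.9, 0.2, 0.8, 0.1]  # matching scores g(s_i, r_i)
print(round(float(cross_entropy(y, g)), 4))  # 0.657
```

The loss approaches 0 as the scores of positive responses approach 1 and those of negative responses approach 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "6."
},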
{
"text": "We test SAN and SCN on two public data sets with both quantitative metrics and qualitative analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7."
},
{
"text": "The first data set we exploited to test the performance of our models is the Ubuntu Dialogue Corpus v1 (Lowe et al. 2015) . The corpus contains large-scale two-way conversations collected from the chat logs of the Ubuntu forum. The conversations are multiturn discussions about Ubuntu-related technical issues. We used the copy shared by Xu et al. (Xu et al. 2017 ), 2 in which numbers, URLs, and paths are replaced by special placeholders. The data set consists of 1 million context-response pairs for training, 0.5 million pairs for validation, and 0.5 million pairs for testing. In each conversation, a human reply is selected as a positive response to the context, and negative responses are randomly sampled. The ratio of positive responses and negative responses is 1:1 in the training set, and 1:9 in both the validation and test sets. In addition to the Ubuntu Dialogue Corpus, we selected the Douban Conversation Corpus (Wu et al. 2017) as another data set. The data set is a recently released largescale open-domain conversation corpus in which conversations are crawled from a popular Chinese forum Douban Group. 3 The training set contains 1 million contextresponse pairs, and the validation set contains 50, 000 pairs. In both sets, a context has a human reply as a positive response and a randomly sampled reply as a negative response. Therefore, the ratio of positive instances and negative instances in both training and validation is 1:1. Different from the Ubuntu Dialogue Corpus, the test set of the Douban Conversation Corpus contains 1, 000 contexts with each one having 10 responses retrieved from a pre-built index. Each response receives three labels from human annotators that indicate its appropriateness as a reply to the context and the majority of the labels are taken as the final decision. An appropriate response means that the response can naturally reply to the conversation history by satisfying logic consistency, fluency, and semantic relevance. 
Otherwise, if a response does not meet any of the three conditions, it is an inappropriate response. The Fleiss kappa (Fleiss 1971 ) of the labeling is 0.41, which means that the labelers reached a moderate agreement in their work. Note that in our experiments, we removed contexts whose responses are all labeled as positive or negative. After this step, there are 6, 670 context-response pairs left in the test set. Table 3 summarizes the statistics of the two data sets.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Lowe et al. 2015)",
"ref_id": "BIBREF21"
},
{
"start": 338,
"end": 363,
"text": "Xu et al. (Xu et al. 2017",
"ref_id": "BIBREF54"
},
{
"start": 929,
"end": 945,
"text": "(Wu et al. 2017)",
"ref_id": "BIBREF50"
},
{
"start": 1124,
"end": 1125,
"text": "3",
"ref_id": null
},
{
"start": 2101,
"end": 2113,
"text": "(Fleiss 1971",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 2401,
"end": 2408,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "7.1"
},
{
"text": "We compared our methods with the following methods: TF-IDF: We followed Lowe et al. (2015) and computed TF-IDF-based cosine similarity between a context and a response. Utterances in the context are concatenated to form a document. IDF is computed on the training data.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "Lowe et al. (2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "7.2"
},
{
"text": "Basic deep learning models: We used models in Lowe et al. (2015) and Kadlec, Schmid, and Kleindienst (2015) , in which representations of a context are learned by Multi-View: The model proposed in Zhou et al. (2016) that utilizes a hierarchical recurrent neural network to model utterance relationships. It integrates information in a context from an utterance view and a word view. Details of the model can be found in Equation 9.",
"cite_spans": [
{
"start": 46,
"end": 64,
"text": "Lowe et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 69,
"end": 107,
"text": "Kadlec, Schmid, and Kleindienst (2015)",
"ref_id": "BIBREF12"
},
{
"start": 197,
"end": 215,
"text": "Zhou et al. (2016)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "7.2"
},
{
"text": "The authors in Yan, Song, and Wu (2016) proposed several approaches to reformulate a message with previous turns in a context. The response and the reformulated message are then represented by a composition of RNN and CNN. Finally, the matching score is computed with the concatenation of the representations. Details of the model can be found in Equation (6).",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Yan, Song, and Wu (2016)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep learning to respond (DL2R):",
"sec_num": null
},
{
"text": "Advanced single-turn matching models: Because BiLSTM does not represent the state-of-the-art matching model, we concatenated the utterances in a context and matched the long text with a response candidate using more powerful models, including MV-LSTM (Wan et al. 2016b ) (2D matching), Match-LSTM (Wang and Jiang 2016b), and Attentive-LSTM (Tan et al. 2016) (two attention based models). To demonstrate the importance of modeling utterance relationships, we also calculated a matching score for the concatenation of utterances and the response candidate using the methods in Section 5.1. The two models are simple versions of SCN and SAN, respectively, without considering utterance relationships. We denote them as SCN single and SAN single , respectively.",
"cite_spans": [
{
"start": 251,
"end": 268,
"text": "(Wan et al. 2016b",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep learning to respond (DL2R):",
"sec_num": null
},
{
"text": "In experiments on the Ubuntu corpus, we followed Lowe et al. (2015) and used recall at position k in n candidates (R n @k) as evaluation metrics. Here the matching models are required to return k most likely responses, and R n @k = 1 if the true response is among the k candidates. R n @k will become larger when k gets larger or n gets smaller.",
"cite_spans": [
{
"start": 49,
"end": 67,
"text": "Lowe et al. (2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},
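{
"text": "R n @k can be sketched as follows, where each context contributes a ranked 0/1 relevance list; this is an illustrative simplification with toy rankings, not the evaluation code used in the paper.

```python
# Sketch of R_n@k (recall at position k among n candidates): for each context
# the model ranks n candidates, and the metric is 1 if the true response is
# among the top k, averaged over contexts.

def recall_at_k(ranked_labels, k):
    # ranked_labels: one 0/1 list per context, sorted by model score.
    return sum(max(labels[:k]) for labels in ranked_labels) / len(ranked_labels)

ranked = [[0, 1, 0, 0, 0],   # true response ranked 2nd
          [1, 0, 0, 0, 0]]   # true response ranked 1st
print(recall_at_k(ranked, 1), recall_at_k(ranked, 2))  # 0.5 1.0
```

As the example shows, the metric can only grow as k increases, matching the monotonicity noted above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},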
{
"text": "R n @k has bias when there are multiple true candidates for a context. Hence, on the Douban corpus, apart from R n @ks, we also followed the convention of information retrieval and used mean average precision (MAP) (Baeza-Yates, Ribeiro-Neto et al. 1999) , mean reciprocal rank (MRR) (Voorhees and Tice 2000) , and precision at position 1 (P@1) as evaluation metrics, which are defined as follows",
"cite_spans": [
{
"start": 229,
"end": 254,
"text": "Ribeiro-Neto et al. 1999)",
"ref_id": "BIBREF0"
},
{
"start": 284,
"end": 308,
"text": "(Voorhees and Tice 2000)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MAP = 1 |S| s i \u2208S AP(s i ) , where AP(s i ) = N r j=0 j k=0 rel(r k ,s i ) j \u2022 rel(r j , s i ) N r j=0 rel(r j , s i ) (35) MRR = 1 |S| s i \u2208S RR(s i ) , where RR(s i ) = 1 rank i (36) P@1 = 1 |S| s i \u2208S rel(r top1 , s i )",
"eq_num": "(37)"
}
],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},
{
"text": "where rank i refers to the position of the first relevant response to context s i in the ranking list; r j refers to the response ranked at the j-th position; rel(r j , s i ) = 1 if r j is an appropriate response to context s i , otherwise rel(r j , s i ) = 0; r top1 is the response ranked at the top position; S is the universal set of contexts; and N r denotes the number of retrieved responses. We did not calculate R 2 @1 for the test data in the Douban corpus because one context could have more than one correct response, and we have to randomly sample one for R 2 @1, which may bring bias to the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},
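{
"text": "Minimal reference implementations of the three metrics in Equations (35)-(37), operating on ranked 0/1 relevance labels; the toy rankings below are ours, for illustration only.

```python
# Each context is represented by its ranked list of 0/1 relevance labels
# (1 = appropriate response), as in the Douban evaluation setting.

def average_precision(labels):
    hits, score = 0, 0.0
    for j, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            score += hits / j   # precision at each relevant position
    return score / max(hits, 1)

def reciprocal_rank(labels):
    for j, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / j      # 1 / rank of the first relevant response
    return 0.0

def evaluate(contexts):
    n = len(contexts)
    map_score = sum(average_precision(c) for c in contexts) / n
    mrr = sum(reciprocal_rank(c) for c in contexts) / n
    p_at_1 = sum(c[0] for c in contexts) / n
    return map_score, mrr, p_at_1

# Two toy ranked lists of 10 candidates each.
contexts = [[1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
            [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]]
print(evaluate(contexts))  # (0.625, 0.75, 0.5)
```

Unlike R n @k, MAP and MRR account for multiple correct candidates per context, which is why they are used on the Douban corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "7.3"
},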
{
"text": "For baseline models, we copied the numbers in the existing papers if their results on the Ubuntu corpus are reported in their original paper (TF-IDF, RNN, CNN, LSTM, BiLSTM, Multi-View); otherwise we implemented the models by tuning their parameters on the validation sets. All models were implemented using the Theano framework (Theano Development Team 2016). Word embeddings in neural networks were initialized by the results of word2vec (Mikolov et al. 2013 4 ) pre-trained on the training data. We did not use GloVe (Pennington, Socher, and Manning 2014) because the Ubuntu corpus contains many technical words that are not covered by Twitter or Wikipedia. The word embedding size was chosen as 200. The maximum utterance length was set as 50. The maximum context length (i.e., number of utterances per context) was varied from 1 to 20 and set as 10 at last. We padded zeros if the number of utterances in a context is less than 10; otherwise we kept the last 10 utterances. We will discuss how the performance of models changes in terms of different maximum context length later.",
"cite_spans": [
{
"start": 440,
"end": 462,
"text": "(Mikolov et al. 2013 4",
"ref_id": null
},
{
"start": 520,
"end": 558,
"text": "(Pennington, Socher, and Manning 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "7.4"
},
{
"text": "For SCN, the window size of convolution and pooling was tuned to {(2, 2), (3, 3)(4, 4)} and was set as (3, 3) finally. The number of feature maps is 8. The size of the hidden states in the construction of M 2 is the same with the word embedding size, and the size of the output vector v was set as 50. Furthermore, the size of the hidden states in the matching accumulation module is also 50. In SAN, the size of the hidden states in the segment level representation is 200, and the size of the hidden states in Equation (29) was set as 400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "7.4"
},
{
"text": "All tuning was done according to R 2 @1 on the validation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "7.4"
},
{
"text": "Tables 4 and 5 show the evaluation results on the Ubuntu Corpus and the Douban Corpus, respectively. SAN and SCN outperform baselines over all metrics on both data sets with large margins, and except for R 10 @5 of SCN on the Douban corpus, the improvements are statistically significant (t-test with p-value \u2264 0.01). Our models are better than state-of-the-art single turn matching models such as MV-LSTM, Match-LSTM, SCN single , and SAN single . The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. TF-IDF shows the worst performance, indicating that the multi-turn response selection problem cannot be addressed with shallow features. LSTM is the best model among the basic models. The reason might be that it models relationships among words. Multi-View is better than LSTM, demonstrating the effectiveness of the utterance-view in context modeling. Advanced models have better performance, because they are capable of capturing more complicated structures in contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.5"
},
{
"text": "SAN is better than SCN on both data sets, which might be attributed to three reasons. The first reason is that SAN uses vectors instead of scalars to represent interactions between words or text segments; therefore, the matching vectors in SAN can encode more information from the pairs than those in SCN. The second reason is that SAN uses a soft attention mechanism to emphasize important words or segments in utterances, whereas SCN uses a max pooling operation to select important information from similarity matrices. When multiple words or segments are important in an utterance-response pair, max pooling selects only the top one, whereas the attention mechanism can leverage all of them. The last reason is that SAN models the sequential relationship and dependency among words or segments in the interaction aggregation module, whereas SCN only considers n-grams. The three approaches to matching prediction do not differ much in either SCN or SAN, but dynamic average and static average are better than the last state on the Ubuntu corpus and worse than it on the Douban corpus. This is because contexts in the Ubuntu corpus are longer than those in the Douban corpus (average context length 10.1 vs. 6.7), and thus the last hidden state may lose information from the history on the Ubuntu data. In contrast, the Douban corpus has shorter contexts but longer utterances (average utterance length 18.5 vs. 12.4), and thus noise may be introduced into response selection if more hidden states are taken into consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.5"
},
{
"text": "Evaluation results on the Ubuntu corpus (columns: R 2 @1, R 10 @1, R 10 @2, and R 10 @5). Subscripts last, static, and dynamic indicate the three approaches to predicting a matching score described in Section 5.3. Numbers in bold mean that the improvement of the model over the best baseline method is statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 4",
"sec_num": null
},
{
"text": "Evaluation results on the Douban corpus (columns: MAP, MRR, P@1, R 10 @1, R 10 @2, and R 10 @5). Notations have the same meaning as in Table 4. On R 10 @5, only SAN significantly outperforms the baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 5",
"sec_num": null
},
{
"text": "There are two reasons that R n @ks on the Douban corpus are much smaller than those on the Ubuntu corpus. One is that response candidates in the Douban corpus are returned by a search engine rather than negative sampling. Therefore, some negative responses in the Douban corpus might be semantically closer to the true positive responses than those in the Ubuntu corpus, and thus more difficult to differentiate by a model. The other is that there are multiple correct candidates for a context, so the maximum R 10 @1 for some contexts are not 1. For example, if there are three correct responses, then the maximum R 10 @1 is 0.33. P@1 is about 40% on the Douban corpus, indicating the difficulty of the task in a real chatbot. 7.6 Further Analysis 7.6.1 Model Ablation. We first investigated how different parts of SCN and SAN affect their performance by ablating SCN last and SAN last . Table 6 reports the results of ablation on the test data. First, we replaced the utterance-response matching module in SCN and SAN with a neural tensor (Socher et al. 2013 ) (denoted as Replace M ), which matches an utterance and a response by feeding their representations to a neural tensor network (NTN). The result is that the performance of the two models dropped dramatically. This is because in NTN there is no interaction between the utterance and the response before their matching; and it is doubtful whether NTN can recognize important parts in the pair and encode the information into matching. As a result, the model loses important information in the pair. Therefore, we can conclude that a good utterance-response matching mechanism is crucial to the success of SMF. At least, one has to let an utterance and a response interact with each other and explicitly highlight important parts in their matching vector. Second, we replaced the GRU in the matching accumulation modules of SCN and SAN with a multi-layer perceptron (denoted as SCN Replace A and SAN Replace A , respectively). 
The change led to a slight performance drop. This indicates that utterance relationships are useful in context-response matching. Finally, we only left one level of granularity, either word level or segment level, in SCN and SAN, and denoted the models as SCN with words, SCN with segments, SAN with words, and SAN with segments, respectively. The results indicate that segment level matching on utterance-response pairs contributes more to the final context-response matching, and both segments and words are useful in response selection.",
"cite_spans": [
{
"start": 1041,
"end": 1060,
"text": "(Socher et al. 2013",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 889,
"end": 896,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "7.5"
},
{
"text": "7.6.2 Comparison with Respect to Context Length. We then studied how the performance of SCN last and SAN last changes across contexts of different lengths. Context-response pairs were bucketed into three bins according to the length of the contexts (i.e., the number of utterances in the contexts), and comparisons were made per bin on each metric. Figure 5 gives the results. Note that we performed this analysis only on the Douban corpus, because on the Ubuntu corpus many results were copied from the existing literature and bin-level results are not available. SAN and SCN consistently perform better than the baselines over all bins, and a general trend is that the gaps become larger as contexts become longer. For example, in (2, 5], SAN is 3 points higher than LSTM on R 10 @5, but the gap grows to 5 points in (5, 10]. The results demonstrate that our models can well capture dependencies, especially long-distance dependencies, among utterances in contexts. SAN and SCN have similar trends because both of them use a GRU in the second layer to model dependencies among utterances. The performance of all models drops when the length of contexts increases from (2, 5] to (5, 10], because the semantics of longer contexts is more difficult to capture than that of shorter contexts. On the other hand, the performance of all models improves when the length of contexts increases from (5, 10] to (10, ). This is because the (10, ) bin contains much less data than the other two bins (the data distribution is 53% for (2, 5], 38% for (5, 10], and 9% for (10, )), and thus the improvement does not mean much from a statistical perspective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Respect to Context Length",
"sec_num": "7.6.2"
},
{
"text": "Evaluation results of model ablation (columns: R 2 @1, R 10 @1, R 10 @2, and R 10 @5 on the Ubuntu corpus; MAP, MRR, P@1, R 10 @1, R 10 @2, and R 10 @5 on the Douban corpus).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 6",
"sec_num": null
},
{
"text": "Model performance across context length. We compared SAN and SCN with LSTM, MV-LSTM, and Multi-View on the Douban corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "7.6.3 Sensitivity to Hyper-Parameters. We checked how sensitive SCN and SAN are regarding the size of word embedding and the maximum context length. Table 7 reports evaluation results of SCN last and SAN last with embedding sizes varying in {50, 100, 200}.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "Evaluation results in terms of different word embedding sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 7",
"sec_num": null
},
{
"text": "We can see that SAN is more sensitive to the word embedding size than SCN. SCN becomes stable after the embedding size exceeds 100, whereas SAN keeps improving as the embedding size increases. Our explanation of this phenomenon is that SCN transforms word vectors and the hidden vectors of the GRU into scalars in the similarity matrices via dot products, so information in the extra dimensions (e.g., entries with indices larger than 100) might be lost; on the other hand, SAN leverages the whole d-dimensional vectors in matching, so the information in the embedding can be exploited more fully. Figure 6 shows the performance of SCN and SAN with respect to the maximum context length. We find that both models become significantly better as the maximum context length increases up to 5, and become stable after the maximum context length reaches 10. The results indicate that utterances from early history can provide useful information for response selection. Moreover, model performance is more sensitive to the maximum context length on the Ubuntu corpus than on the Douban corpus. This is because utterances in the Douban corpus are longer than those in the Ubuntu corpus (average length 18.5 vs. 12.4), which means that single utterances in the Douban corpus can carry more information than those in the Ubuntu corpus. In practice, we set the maximum context length to 10 to balance effectiveness and efficiency.",
"cite_spans": [],
"ref_spans": [
{
"start": 597,
"end": 605,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Ubuntu Corpus Douban Corpus",
"sec_num": null
},
{
"text": "7.6.4 Model Efficiency. In Section 5.1.3, we theoretically analyzed the efficiency of SCN and SAN. To verify the theoretical results, we further empirically compared their efficiency using the training data and the test data of the two data sets. The experiments were conducted using Theano on a Tesla K80 GPU with a Windows Server 2012 operation system. The parameters of the two models are described in Section 7.4. Figure 7 gives the training time and the test time of SAN and SCN. We can see that SCN is twice as fast as SAN in the training process (as a result of low time complexity and ease of parallelization), and saves 3 msec per batch in the test process. Moreover, different matching functions do not influence the running time as much, because the bottleneck is the utterance representation learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 426,
"text": "Figure 7",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Ubuntu Corpus Douban Corpus",
"sec_num": null
},
{
"text": "The empirical results are consistent with our theoretical results: SCN is faster than SAN. The results indicate that SCN is suitable for systems that care more about efficiency, whereas SAN can reach a higher accuracy with a little sacrifice of efficiency. 7.6.5 Visualization. We finally explained how SAN and SCN understand the semantics of conversation contexts by visualizing the similarity matrices of SCN, the attention weights of SAN, and the update gate and the reset gate of the accumulation GRU of the two models using an example from the Ubuntu corpus. Table 8 shows an example that is selected from the test set of the Ubuntu corpus and ranked at the top position by both SAN and SCN. Figure 8 (a) illustrates word-word similarity matrices M 1 in SCN. We can see that important words in u 1 such as \"unzip,\" \"rar,\" and \"files\" are recognized and highlighted by words like \"command,\" \"extract,\" and \"directory\" in r. On the other hand, the similarity matrix of r and u 3 is almost blank, as there is no important information conveyed by u 3 . Figure 8(b) shows the sequence-to-sequence similarity matrices M 2 in SCN. We find that important segments like \"unzip many rar\" are highlighted, and the matrices Table 8 An example for visualization from the Ubuntu corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 571,
"text": "Table 8",
"ref_id": null
},
{
"start": 697,
"end": 705,
"text": "Figure 8",
"ref_id": null
},
{
"start": 1054,
"end": 1065,
"text": "Figure 8(b)",
"ref_id": null
},
{
"start": 1217,
"end": 1224,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ubuntu Corpus Douban Corpus",
"sec_num": null
},
{
"text": "u 1 : how can unzip many rar files at once? u 2 : sure you can do that in bash u 3 : okay how? u 4 : are the files all in the same directory? u 5 : yes they all are;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Response: then the command glebihan should extract them all from/to that directory also provide complementary matching information to M 1 . Figure 8(c) visualizes the reset gate and the update gate of the accumulation GRU, respectively. Higher values in the update gate represent more information from the corresponding matching vector flowing into matching accumulation. From Figure 8 (c), we can see that u 1 is crucial to response selection and nearly all information from u 1 and r flows to the hidden state of GRU, whereas other utterances are less informative and the corresponding gates are almost \"closed\" to keep the information from u 1 and r until the final state.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 151,
"text": "Figure 8(c)",
"ref_id": null
},
{
"start": 377,
"end": 385,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Response",
"sec_num": null
},
{
"text": "Regarding SAN, Figure 9 (a) and Figure 9 (b) illustrate the word level attention weights A 1 and segment level attention weights A 2 , respectively. Similar to SCN, important words such as \"zip\" and \"file\" and important segments like \"unzip many rar\" get high weights, whereas function words like \"that\" and \"for\" are less attended. It should be noted that as the attention weights are normalized, the gaps between high and low values in A 1 and A 2 are not so large as those in M 1 and M 2 of SCN. Figure 9 (c) visualizes the gates of the accumulation GRU, from which we observed similar distributions as those of SCN.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Figure 9",
"ref_id": null
},
{
"start": 32,
"end": 40,
"text": "Figure 9",
"ref_id": null
},
{
"start": 499,
"end": 507,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Response",
"sec_num": null
},
{
"text": "Although models under SMF outperform baseline methods on the two data sets, there are still several problems that cannot yet be handled perfectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Future Work",
"sec_num": "7.7"
},
{
"text": "(1) Logical consistency. SMF models the context and response on a semantic level, but pays little attention to logical consistency. This leads to several bad cases in the Douban corpus. We give a typical example in Table 9 . In the conversation history, one of the speakers says that he thinks the item on Taobao is fake, and the response is expected to be why he dislikes the fake shoes. However, both SCN and SAN rank the response \"It is not a fake. I just worry about the date of manufacture.\" at the top position. The response is inconsistent with the context in terms of logic, as it claims that the jogging shoes are not fake, which is contradictory to the context.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis and Future Work",
"sec_num": "7.7"
},
{
"text": "The reason behind this is that SMF only models semantics of context-response pairs. Logic, attitude, and sentiment are not taken into account in response selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Future Work",
"sec_num": "7.7"
},
{
"text": "In the future, we shall explore the logic consistency problem in retrieval-based chatbots by leveraging more features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Future Work",
"sec_num": "7.7"
},
{
"text": "(2) No valid candidates. Another serious issue is the quality of candidates after retrieval. According to Wu et al. (2017) , the candidate retrieval method can be described as follows: given a message u n with {u 1 , . . . , u n\u22121 } utterances in its previous turns, the ",
"cite_spans": [
{
"start": 106,
"end": 122,
"text": "Wu et al. (2017)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Future Work",
"sec_num": "7.7"
},
{
"text": "Visualization of SAN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "An example in the Douban corpus. The response is ranked at the top position among candidates, but it is inconsistent on logic to the current context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 9",
"sec_num": null
},
{
"text": "u 1 : Does anyone know Newton jogging shoes? u 2 : 100 RMB on Taobao. u 3 : I know that. I do not want to buy it because that is a fake which is made in Qingdao, u 4 :Is it the only reason you do not want to buy it?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Response: It is not a fake. I just worry about the date of manufacture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Response",
"sec_num": null
},
{
"text": "top five keywords are extracted from {u 1 , . . . , u n\u22121 } based on their TF-IDF scores. 5 u n is then expanded with the keywords, and the expanded message is sent to the index to retrieve response candidates using the inline retrieval algorithm of the index. The performance of the heuristic message expansion method is not good enough. In the experiment, only 667 out of 1, 000 contexts have correct candidates after response candidate retrieval. This indicates that there is still much room to improve the retrieval component, and message expansion with several keywords from previous turns may not be enough for candidate retrieval. In the future, we will consider advanced methods for retrieving candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Response",
"sec_num": null
},
{
"text": "(3) Gap between training and test. The current method requires a huge amount of training data (i.e., context-response pairs) to learn a matching model. However, it is too expensive to obtain large-scale (e.g., millions of) human labeled pairs in practice. Therefore, we regard conversations with human replies as positive instances and conversations with randomly sampled replies as negative instances in model training. The negative sampling method, however, oversimplifies the learning of a matching model because most negative candidates are semantically far from human responses, and thus easy to recognize; and some negative candidates might be proper responses if they are judged by a human. Because of the gap in training and test, our matching models, although performing much better than the baseline models, are still far from perfect on the Douban corpus (see the low P@1 in Table 5 ). In the future, we may consider using small human labeled data sets but leveraging the large-scale unlabeled data to learn matching models.",
"cite_spans": [],
"ref_spans": [
{
"start": 886,
"end": 893,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Response",
"sec_num": null
},
{
"text": "In this paper we studied the problem of multi-turn response selection in which one has to model the relationships among utterances in a context and pay more attention to important parts of the context. We find that the existing models cannot address the two challenges at the same time when we summarize them into a general framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "Motivated by the analysis, we propose a sequential matching framework for contextresponse matching. The new framework is able to capture the important information in a context and model the utterance relationships simultaneously. Under the framework, we propose two specific models based on a convolution-pooling technique and an attention mechanism. We test the two models on two public data sets. The results indicate that both models can significantly outperform the state-of-the-art models. To further understand the models, we conduct ablation analysis and visualize key components of the two models. We also compare the two models in terms of their efficacy, efficiency, and sensitivity to hyper-parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8."
},
{
"text": "We borrow the operator from MATLAB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.dropbox.com/s/2fdn26rj6h9bpvl/ubuntudata.zip?dl=0. 3 https://www.douban.com/group/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://code.google.com/archive/p/word2vec/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Tf is word frequency in the context, and IDF is calculated using the entire index.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Yu Wu is supported by an AdeptMind Scholarship and a Microsoft Scholarship. This work was supported in part by the Natural Science Foundation of China (grants U1636211, 61672081, 61370126), the Beijing Advanced Innovation Center for Imaging Technology (grant BAICIT-2016001), and the National Key R&D Program of China (grant 2016QY04W0802).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modern Information Retrieval",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "Berthier",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "463",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baeza-Yates, Ricardo, Berthier Ribeiro-Neto, et al. 1999. Modern Information Retrieval, 463. ACM Press, New York.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enhancing and combining sequential and tree LSTM for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, Qian, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, and Hui Jiang. 2016. Enhancing and combining sequential and tree LSTM for natural language inference. CoRR, abs/1609.06038.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cho, Kyunghyun, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 1724-1734, Doha.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung, Junyoung, \u00c7 aglar G\u00fcl\u00e7ehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Finding structure in time",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1990,
"venue": "Cognitive Science",
"volume": "14",
"issue": "2",
"pages": "179--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elman, Jeffrey L. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pairwise word interaction modeling with deep neural networks for semantic similarity measurement",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Vancouver",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"J"
],
"last": "Hua",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lin ; Higashinaka",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Ryuichiro",
"suffix": ""
},
{
"first": "Toyomi",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Chiaki",
"middle": [],
"last": "Meguro",
"suffix": ""
},
{
"first": "Nozomi",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Toru",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "Toshiro",
"middle": [],
"last": "Hirano",
"suffix": ""
},
{
"first": "Yoshihiro",
"middle": [],
"last": "Makino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2013,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "928--939",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645-6649, Vancouver. He, Hua and Jimmy J. Lin. 2016. Pairwise word interaction modeling with deep neural networks for semantic similarity measurement. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 937-948, San Diego, CA. Higashinaka, Ryuichiro, Kenji Imamura, Toyomi Meguro, Chiaki Miyazaki, Nozomi Kobayashi, Hiroaki Sugiyama, Toru Hirano, Toshiro Makino, and Yoshihiro Matsuo. 2014. Towards an open-domain conversational system fully based on natural language processing. In COLING, pages 928-939, Dublin.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, Sepp and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional neural network architectures for matching natural language sentences",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2042--2050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu, Baotian, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042-2050, Montreal.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning deep structured semantic models for web search using clickthrough data",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM International Conference on Information & Knowledge Management",
"volume": "",
"issue": "",
"pages": "2333--2338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Po-Sen, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 2333-2338, San Francisco, CA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An information retrieval approach to short text conversation",
"authors": [
{
"first": "Zongcheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, Zongcheng, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. CoRR, abs/1408.6988.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improved deep learning baselines for Ubuntu corpus dialogs",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kadlec, Rudolf, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for Ubuntu corpus dialogs. CoRR, abs/1510.03753.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Yoon. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 1746-1751, Doha.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kingma, Diederik P. and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Jiwei, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, CA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016",
"volume": "",
"issue": "",
"pages": "994--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Jiwei, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, and William B. Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, pages 994-1003, Berlin.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep reinforcement learning for dialogue generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1192--1202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Jiwei, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 1192-1202, Austin, TX.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adversarial learning for neural dialogue generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Tianlin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2157--2169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Jiwei, Will Monroe, Tianlin Shi, S\u00e9bastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2157-2169, Copenhagen.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deep fusion LSTMs for text semantic matching",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Jifan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1034--1043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Pengfei, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016a. Deep fusion LSTMs for text semantic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics Volume 1: Long Papers, pages 1034-1043, Berlin.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modelling interaction of sentence pair with coupled-LSTMs",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yaqian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jifan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1703--1712",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Pengfei, Xipeng Qiu, Yaqian Zhou, Jifan Chen, and Xuanjing Huang. 2016b. Modelling interaction of sentence pair with coupled-LSTMs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 1703-1712, Austin, TX.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the SIGDIAL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lowe, Ryan, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the SIGDIAL 2015",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"authors": [
{
"first": "",
"middle": [],
"last": "Conference",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "285--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294, Prague.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119, Lake Tahoe, NV.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "3349--3358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mou, Lili, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, pages 3349-3358, Osaka.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2249--2255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parikh, Ankur P., Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 2249-2255, Austin, TX.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 1532-1543, Doha.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Convolutional neural tensor network architecture for community-based question answering",
"authors": [
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI)",
"volume": "",
"issue": "",
"pages": "1305--1311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiu, Xipeng and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community-based question answering. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pages 1305-1311, Buenos Aires.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Data-driven response generation in social media",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "583--593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritter, Alan, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 583-593, Edinburgh.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rockt\u00e4schel, Tim, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR, abs/1509.06664.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multiresolution recurrent neural networks: An application to dialogue response generation",
"authors": [
{
"first": "",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3288--3294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courville. 2017a. Multiresolution recurrent neural networks: An application to dialogue response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3288-3294, San Francisco, CA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Vlad",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serban, Iulian Vlad, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A hierarchical latent variable encoder-decoder model for generating dialogues",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Vlad",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Pineau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3295--3301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 3776-3784, Phoenix, AZ. Serban, Iulian Vlad, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295-3301, San Francisco, CA.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning to rank short text pairs with convolutional deep neural networks",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "373--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Severyn, Aliaksei and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373-382, Santiago.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL 2015",
"volume": "1",
"issue": "",
"pages": "1577--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shang, Lifeng, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In ACL 2015, Volume 1: Long Papers, pages 1577-1586, Beijing.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A latent semantic model with convolutional-pooling structure for information retrieval",
"authors": [
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen, Yelong, Xiaodong He, Jianfeng Gao, Li Deng, and Gr\u00e9goire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101-110, Shanghai.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Socher, Richard, Danqi Chen, Christopher D. Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926-934, Lake Tahoe, NV.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A neural network approach to context-sensitive generation of conversational responses",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Jian-Yun",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "464--473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sordoni, Alessandro, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205, Denver, CO. Tan, Ming, C\u00edcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2016. Improved representation learning for question answer matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Volume 1: Long Papers, pages 464-473, Berlin.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Theano: A python framework for fast computation of mathematical expressions",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theano Development Team. 2016. Theano: A python framework for fast computation of mathematical expressions. CoRR, abs/1605.02688.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The TREC-8 question answering track",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
},
{
"first": "Dawn",
"middle": [
"M"
],
"last": "Tice",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second International Conference on Language Resources and Evaluation, LREC 2000",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Voorhees, Ellen M. and Dawn M. Tice. 2000. The TREC-8 question answering track. In Proceedings of the Second International Conference on Language Resources and Evaluation, LREC 2000, pages 26-34, Athens.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A deep architecture for semantic matching with multiple positional sentence representations",
"authors": [
{
"first": "",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Shengxian",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2835--2841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan, Shengxian, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016a. A deep architecture for semantic matching with multiple positional sentence representations. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2835-2841, Phoenix, AZ.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Match-SRNN: Modeling the recursive matching structure with spatial RNN",
"authors": [
{
"first": "",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Shengxian",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016",
"volume": "",
"issue": "",
"pages": "2922--2928",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan, Shengxian, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016b. Match-SRNN: Modeling the recursive matching structure with spatial RNN. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, pages 2922-2928, New York, NY.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Inner attention based recurrent neural networks for answer selection",
"authors": [
{
"first": "Bingning",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1288--1297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Bingning, Kang Liu, and Jun Zhao. 2016. Inner attention based recurrent neural networks for answer selection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Volume 1: Long Papers, pages 1288-1297, Berlin.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A dataset for research on short-text conversations",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "935--945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Hao, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, pages 935-945, Seattle, WA.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Syntax-based deep matching of short texts",
"authors": [
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1354--1361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Mingxuan, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. In Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 1354-1361, Buenos Aires.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A compare-aggregate model for matching text sequences",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Shuohang and Jing Jiang. 2016a. A compare-aggregate model for matching text sequences. CoRR, abs/1611.01747.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "ELIZA: a computer program for the study of natural language communication between man and machine",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 1966,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "9",
"issue": "",
"pages": "36--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, Shuohang and Jing Jiang. 2016b. Learning natural language inference with LSTM. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1442-1451, San Diego, CA. Weizenbaum, Joseph. 1966. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36-45.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Learning matching models with weak supervision for response selection in retrieval-based chatbots",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "420--425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Yu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018a. Learning matching models with weak supervision for response selection in retrieval-based chatbots. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Volume 2: Short Papers, pages 420-425, Melbourne.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "496--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Yu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Volume 1: Long Papers, pages 496-505, Vancouver.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Neural response generation with dynamic vocabularies",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Dejian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Can",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5594--5601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Yu, Wei Wu, Dejian Yang, Can Xu, and Zhoujun Li. 2018b. Neural response generation with dynamic vocabularies. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), pages 5594-5601, New Orleans, LA.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Topic aware neural response generation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yalou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3351--3357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing, Chen, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3351-3357, San Francisco, CA.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Hierarchical recurrent attention network for response generation",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yalou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5610--5617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing, Chen, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), pages 5610-5617, New Orleans, LA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Incorporating loose-structured knowledge into conversation modeling via recall-gate LSTM",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baoxun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengjie",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Joint Conference on Neural Networks",
"volume": "",
"issue": "",
"pages": "3506--3513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, Zhen, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2017. Incorporating loose-structured knowledge into conversation modeling via recall-gate LSTM. In 2017 International Joint Conference on Neural Networks, IJCNN 2017, pages 3506-3513, Anchorage, AK.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Learning to respond with deep neural networks for retrieval-based human-computer conversation system",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2016",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan, Rui, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2016, pages 55-64, Pisa.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "MultigranCNN: An architecture for general matching of text chunks on multiple levels of granularity",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "63--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin, Wenpeng and Hinrich Sch\u00fctze. 2015. MultigranCNN: An architecture for general matching of text chunks on multiple levels of granularity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 63-73, Beijing.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "ABCNN: Attention-based convolutional neural network for modeling sentence pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "TACL",
"volume": "4",
"issue": "",
"pages": "259--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin, Wenpeng, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. TACL, 4:259-272.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "The hidden information state model: A practical framework for POMDP-based spoken dialogue management",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Speech & Language",
"volume": "24",
"issue": "2",
"pages": "150--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Young, Steve, Milica Ga\u0161i\u0107, Simon Keizer, Fran\u00e7ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150-174.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Emotional chatting machine: Emotional conversation generation with internal and external memory",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "730--739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Hao, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), pages 730-739, New Orleans, LA.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Multi-view response selection for human-computer conversation",
"authors": [
{
"first": "Xiangyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "372--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Xiangyang, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 372-381, Austin, TX.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "match a context and a response with recurrent neural networks (RNNs); Yan et al. (2016) present",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Existing models can be interpreted with a unified framework. f (\u2022), f (\u2022), h(\u2022), and m(\u2022, \u2022) are utterance representation function, response representation function, context representation function, and matching function, respectively.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 1. The framework consists of utterance representation f (\u2022), response representation f (\u2022), context representation h(\u2022), and matching calculation m(\u2022, \u2022). Given a context s = {u 1 , . . . , u n } and a response candidate r, f (\u2022) and f (\u2022) represent each u i in s and r as vectors or matrices by f (u i ) and f (r), respectively. {f (u i )} n i=1 are then uploaded to h(\u2022), which transforms the utterance representations into h f (u 1 ), . . . , f (u n ) as a representation of the context s. Finally, m(\u2022, \u2022) takes h f (u 1 ), . . . , f (u n ) and f (r) as input and calculates a matching score for s and r. To sum up, the framework performs context-response matching following a paradigm that context s and response r are first individually represented as vectors and then their matching degree is determined by the vectors. Under the framework, the matching model g(s, r) can be defined with f (\u2022), h(\u2022), f (\u2022), and m(\u2022, \u2022), as follows:",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "and m(\u2022, \u2022). Specifically, the RNN models in Lowe et al. (2015) can be defined as",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Figure 2gives the architecture of SMF. SMF consists of utterance-response matching f (\u2022, \u2022), matching accumulation h(\u2022), and matching prediction m(\u2022). The three",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "\u2022, \u2022), h(\u2022, \u2022), and m(\u2022), then the objective function L(D, \u0398) can be written as",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Performance of SAN across different context lengths.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "Performance with respect to different maximum context lengths.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "Efficiency of SCN and SAN. The left panel shows the training time per batch with 200 dimensional word embeddings, and the right panel shows the inference time per batch. One batch contains 200 instances.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"text": "of gates. Darker squares refer to higher values.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF10": {
"text": "Visualization of A 1 in SAN. Darker squares refer to higher values. Visualization of A 2 in SAN. Darker squares refer to higher values. of gates. Darker squares refer to higher values.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "its hidden states. Finally, in the third layer, m(\u2022) takes {h 1 , . . . , h n } as an input and predicts a matching score for (s, r). In brief, SMF matches s and r with a g(s, r) defined as",
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Ubuntu Corpus</td><td colspan=\"3\">Douban Corpus</td></tr><tr><td/><td>train</td><td>val</td><td>test</td><td>train</td><td>val</td><td>test</td></tr><tr><td># context-response pairs</td><td>1M</td><td>0.5M</td><td>0.5M</td><td>1M</td><td>50k</td><td>10k</td></tr><tr><td># candidates per context</td><td>2</td><td>10</td><td>10</td><td>2</td><td>2</td><td>10</td></tr><tr><td># positive candidates per context</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1.18</td></tr><tr><td>Min. # turns per context</td><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td><td>3</td></tr><tr><td>Max. # turns per context</td><td>19</td><td>19</td><td>19</td><td>98</td><td>91</td><td>45</td></tr><tr><td>Avg. # turns per context</td><td colspan=\"2\">10.10 10.10</td><td>10.11</td><td>6.69</td><td>6.75</td><td>6.45</td></tr><tr><td>Avg. # words per utterance</td><td colspan=\"2\">12.45 12.44</td><td>12.48</td><td colspan=\"3\">18.56 18.50 20.74</td></tr><tr><td colspan=\"7\">neural networks with the concatenation of utterances as inputs and the final matching</td></tr><tr><td colspan=\"7\">score is computed by a bilinear function of the context representation and the response</td></tr><tr><td colspan=\"7\">representation. Models including RNN, CNN, LSTM, and BiLSTM were selected as</td></tr><tr><td>baselines.</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "Statistics of the two data sets.",
"num": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td/><td>Value of M_1 (u_1 and r)</td><td/><td/><td>Value of M_1 (u_2 and r)</td><td/><td>Value of M_1 (u_3 and r)</td><td/></tr><tr><td>how can</td><td/><td/><td>sure</td><td/><td/><td/><td/></tr><tr><td>unzip many rar ( _number_ for example ) files</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>you can do that in</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>ok how</td><td>0.00 0.15 0.30 0.45 0.60 0.75 1.50 1.35 1.20 1.05 0.90 value</td></tr><tr><td>at once</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td>bash</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td></tr><tr><td/><td>Value of M_1 (u_4 and r)</td><td/><td/><td>Value of M_1 (u_5 and r)</td><td/><td/><td/></tr><tr><td>all the files all in the directory same</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>yes are they all</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 1.50 1.35 1.20 1.05 0.90 value</td><td/><td/></tr><tr><td/><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td/><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td/><td/></tr><tr><td/><td colspan=\"6\">(a) Visualization of M 1 in SCN. 
Darker squares refer to higher values.</td><td/></tr><tr><td/><td>Value of M_2 (u_1 and r)</td><td/><td/><td>Value of M_2 (u_2 and r)</td><td/><td>Value of M_2 (u_3 and r)</td><td/></tr><tr><td>how can</td><td/><td/><td>sure</td><td/><td/><td/><td/></tr><tr><td>unzip many rar ( _number_ for example ) files</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>you can do that in</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>ok how</td><td>0.00 0.15 0.30 0.45 0.60 0.75 1.50 1.35 1.20 1.05 0.90 value</td></tr><tr><td>at once</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td>bash</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td></tr><tr><td/><td>Value of M_2 (u_4 and r)</td><td/><td/><td>Value of M_2 (u_5 and r)</td><td/><td/><td/></tr><tr><td>all the files all in the directory same</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 0.90 1.05 1.20 1.35 1.50 value</td><td>yes are they all</td><td/><td>0.00 0.15 0.30 0.45 0.60 0.75 1.50 1.35 1.20 1.05 0.90 value</td><td/><td/></tr><tr><td/><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td/><td>th en th e co m m an d gl eb ih an sh ou ld ex tr ac t th em al l</td><td>fr om /t o th at di re ct or y</td><td/><td/></tr><tr><td/><td>(b)</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "Visualization of M 2 in SCN. Darker sqaures refer to higher values.",
"num": null
}
}
}
}