{
"paper_id": "P17-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:17:36.027356Z"
},
"title": "Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Lab of Software Development Environment",
"institution": "Beihang University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "wuyu@buaa.edu.cn"
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": "wuwei@microsoft.com"
},
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nankai University",
"location": {
"settlement": "Tianjin",
"country": "China"
}
},
"email": "v-chxing@microsoft.com"
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "State Key Lab of Software Development Environment",
"institution": "Beihang University",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "mingzhou@microsoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study response selection for multiturn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform stateof-the-art methods for response selection in multi-turn conversation.",
"pdf_parse": {
"paper_id": "P17-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "We study response selection for multiturn conversation in retrieval-based chatbots. Existing work either concatenates utterances in context or matches a response with a highly abstract context vector finally, which may lose relationships among utterances or important contextual information. We propose a sequential matching network (SMN) to address both problems. SMN first matches a response with each utterance in the context on multiple levels of granularity, and distills important matching information from each pair as a vector with convolution and pooling operations. The vectors are then accumulated in a chronological order through a recurrent neural network (RNN) which models relationships among utterances. The final matching score is calculated with the hidden states of the RNN. An empirical study on two public data sets shows that SMN can significantly outperform stateof-the-art methods for response selection in multi-turn conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conversational agents include task-oriented dialog systems and non-task-oriented chatbots. Dialog systems focus on helping people complete specific tasks in vertical domains (Young et al., 2010) , while chatbots aim to naturally and meaningfully converse with humans on open domain topics (Ritter et al., 2011) . Existing work on building chatbots includes generation -based methods and retrieval-based methods. Retrieval based chatbots enjoy the advantage of informative and fluent responses, because they select a proper response for Table 1 : An example of multi-turn conversation the current conversation from a repository with response selection algorithms. While most existing work on retrieval-based chatbots studies response selection for single-turn conversation (Wang et al., 2013) which only considers the last input message, we consider the problem in a multi-turn scenario. In a chatbot, multi-turn response selection takes a message and utterances in its previous turns as input and selects a response that is natural and relevant to the whole context.",
"cite_spans": [
{
"start": 174,
"end": 194,
"text": "(Young et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 289,
"end": 310,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 772,
"end": 791,
"text": "(Wang et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 536,
"end": 543,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The key to response selection lies in inputresponse matching. Different from single-turn conversation, multi-turn conversation requires matching between a response and a conversation context in which one needs to consider not only the matching between the response and the input message but also matching between responses and utterances in previous turns. The challenges of the task include (1) how to identify important information (words, phrases, and sentences) in context, which is crucial to selecting a proper response and leveraging relevant information in matching; and (2) how to model relationships among the utterances in the context. Table 1 illustrates the challenges with an example. First, \"hold a drum class\" and \"drum\" in context are very important. Without them, one may find responses relevant to the message (i.e., the fifth utterance of the context) but nonsense in the context (e.g., \"what lessons do you want?\"). Second, the message highly depends on the second utterance in the context, and the order of the utterances matters in response selection: exchanging the third utterance and the fifth utterance may lead to different responses. Existing work, however, either ignores relationships among utterances when concatenating them together (Lowe et al., 2015) , or loses important information in context in the process of converting the whole context to a vector without enough supervision from responses (e.g., by a hierarchical RNN ).",
"cite_spans": [
{
"start": 1266,
"end": 1285,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 647,
"end": 654,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a sequential matching network (SMN), a new context based matching model that can tackle both challenges in an end-to-end way. The reason that existing models lose important information in the context is that they first represent the whole context as a vector and then match the context vector with a response vector. Thus, responses in these models connect with the context until the final step in matching. To avoid information loss, SMN matches a response with each utterance in the context at the beginning and encodes important information in each pair into a matching vector. The matching vectors are then accumulated in the utterances' temporal order to model their relationships. The final matching degree is computed with the accumulation of the matching vectors. Specifically, for each utterance-response pair, the model constructs a word-word similarity matrix and a sequence-sequence similarity matrix by the word embeddings and the hidden states of a recurrent neural network with gated recurrent units (GRU) (Chung et al., 2014) respectively. The two matrices capture important matching information in the pair on a word level and a segment (word subsequence) level respectively, and the information is distilled and fused as a matching vector through an alternation of convolution and pooling operations on the matrices. By this means, important information from multiple levels of granularity in context is recognized under sufficient supervision from the response and carried into matching with minimal loss. The matching vectors are then uploaded to another GRU to form a matching score for the context and the response. The GRU accumulates the pair matching in its hidden states in the chronological order of the utterances in context. It models relationships and dependencies among the utterances in a matching fashion and has the utterance order supervise the accumulation of pair matching. 
The matching degree of the context and the response is computed by a logit model with the hidden states of the GRU. SMN extends the powerful \"2D\" matching paradigm in text pair matching for single-turn conversation to context based matching for multi-turn conversation, and enjoys the advantage of both important information in utterance-response pairs and relationships among utterances being sufficiently preserved and leveraged in matching.",
"cite_spans": [
{
"start": 1032,
"end": 1052,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test our model on the Ubuntu dialogue corpus (Lowe et al., 2015) which is a large scale publicly available English data set for research in multi-turn conversation. The results show that our model can significantly outperform state-ofthe-art methods, and improvement to the best baseline model on R 10 @1 is over 6%. In addition to the Ubuntu corpus, we create a human-labeled Chinese data set, namely the Douban Conversation Corpus, and test our model on it. In contrast to the Ubuntu corpus in which data is collected from a specific domain and negative candidates are randomly sampled, conversations in this data come from the open domain, and response candidates in this data set are collected from a retrieval engine and labeled by three human judges. On this data, our model improves the best baseline model by 3% on R 10 @1 and 4% on P@1. As far as we know, Douban Conversation Corpus is the first human-labeled data set for multi-turn response selection and could be a good complement to the Ubuntu corpus. We have released Douban Conversation Corups and our source code at https://github.com/MarkWuNLP/ MultiTurnResponseSelection Our contributions in this paper are three-folds: (1) the proposal of a new context based matching model for multi-turn response selection in retrieval-based chatbots; (2) the publication of a large human-labeled data set to research communities; (3) empirical verification of the effectiveness of the model on public data sets.",
"cite_spans": [
{
"start": 48,
"end": 67,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, building a chatbot with data driven approaches (Ritter et al., 2011; Ji et al., 2014) has drawn significant attention. Existing work along this line includes retrieval-based methods (Hu et al., 2014; Ji et al., 2014; Wu et al., 2016b; Wu et al., 2016a) and generation-based methods (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2015 Li et al., , 2016 Xing et al., ",
"cite_spans": [
{
"start": 57,
"end": 78,
"text": "(Ritter et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 79,
"end": 95,
"text": "Ji et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 192,
"end": 209,
"text": "(Hu et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 210,
"end": 226,
"text": "Ji et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 244,
"text": "Wu et al., 2016b;",
"ref_id": "BIBREF25"
},
{
"start": 245,
"end": 262,
"text": "Wu et al., 2016a)",
"ref_id": "BIBREF24"
},
{
"start": 292,
"end": 312,
"text": "(Shang et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 313,
"end": 334,
"text": "Vinyals and Le, 2015;",
"ref_id": "BIBREF18"
},
{
"start": 335,
"end": 350,
"text": "Li et al., 2015",
"ref_id": "BIBREF14"
},
{
"start": 351,
"end": 368,
"text": "Li et al., , 2016",
"ref_id": "BIBREF8"
},
{
"start": 369,
"end": 381,
"text": "Xing et al.,",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": ".... .... .... Score 1 2 , M M Convolution Pooling ( ) L .... .... .... 1 u 1 n u \uf02d n u r Word Embedding GRU1 GRU2 .... 1 v 1 n v \uf02d n v 1 ' n h \uf02d Utterance-Response Matching (First Layer) Matching Accumulation (Second Layer) Segment Pairs Word Pairs Matching Prediction (Third Layer) 1 ' h ' n h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Figure 1: Architecture of SMN 2016; Serban et al., 2016). Our work is a retrievalbased method, in which we study context-based response selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Early studies of retrieval-based chatbots focus on response selection for single-turn conversation (Wang et al., 2013; Ji et al., 2014; Wu et al., 2016b) . Recently, researchers have begun to pay attention to multi-turn conversation. For example, Lowe et al. (2015) match a response with the literal concatenation of context utterances. concatenate context utterances with the input message as reformulated queries and perform matching with a deep neural network architecture. improve multi-turn response selection with a multi-view model including an utterance view and a word view. Our model is different in that it matches a response with each utterance at first and accumulates matching information instead of sentences by a GRU, thus useful information for matching can be sufficiently retained.",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Wang et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 119,
"end": 135,
"text": "Ji et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 136,
"end": 153,
"text": "Wu et al., 2016b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Suppose that we have a data set D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3.1"
},
{
"text": "= {(y i , s i , r i )} N i=1 , where s i = {u i,1 , . . . , u i,n i } rep- resents a conversation context with {u i,k } n i k=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3.1"
},
{
"text": "as utterances. r i is a response candidate and y i \u2208 {0, 1} denotes a label. y i = 1 means r i is a proper response for s i , otherwise y i = 0. Our goal is to learn a matching model g(\u2022, \u2022) with D. For any context-response pair (s, r), g(s, r) measures the matching degree between s and r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formalization",
"sec_num": "3.1"
},
{
"text": "We propose a sequential matching network (SMN) to model g(\u2022, \u2022). Figure 1 gives the architecture. SMN first decomposes context-response matching into several utterance-response pair matching and then all pairs matching are accumulated as a context based matching through a recurrent neural network. SMN consists of three layers. The first layer matches a response candidate with each utterance in the context on a word level and a segment level, and important matching information from the two levels is distilled by convolution, pooling and encoded in a matching vector. The matching vectors are then fed into the second layer where they are accumulated in the hidden states of a recurrent neural network with GRU following the chronological order of the utterances in the context. The third layer calculates the final matching score with the hidden states of the second layer. SMN enjoys several advantages over existing models. First, a response candidate can match each utterance in the context at the very beginning, thus matching information in every utteranceresponse pair can be sufficiently extracted and carried to the final matching score with minimal loss. Second, information extraction from each utterance is conducted on different levels of granularity and under sufficient supervision from the response, thus semantic structures that are useful for response selection in each utterance can be well identified and extracted. Third, matching and utterance relationships are coupled rather than separately modeled, thus utterance relationships (e.g., order), as a kind of knowledge, can supervise the formation of the matching score.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "By taking utterance relationships into account, SMN extends the \"2D\" matching that has proven effective in text pair matching for single-turn response selection to sequential \"2D\" matching for context based matching in response selection for multi-turn conversation. In the following sections, we will describe details of the three layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Overview",
"sec_num": "3.2"
},
{
"text": "Given an utterance u in a context s and a response candidate r, the model looks up an embedding table and represents u and r as U = [e u,1 , . . . , e u,nu ] and R = [e r,1 , . . . , e r,nr ] respectively, where e u,i , e r,i \u2208 R d are the embeddings of the i-th word of u and r respectively. U \u2208 R d\u00d7nu and R \u2208 R d\u00d7nr are then used to construct a word-word similarity matrix M 1 \u2208 R nu\u00d7nr and a sequence-sequence similarity matrix M 2 \u2208 R nu\u00d7nr which are two input channels of a convolutional neural network (CNN). The CNN distills important matching information from the matrices and encodes the information into a matching vector v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "Specifically, \u2200i, j, the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(i, j)-th element of M 1 is defined by e1,i,j = e u,i \u2022 er,j.",
"eq_num": "(1)"
}
],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
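The word-level channel above is just a matrix of pairwise dot products between utterance and response word embeddings. A minimal numpy sketch (a toy illustration with made-up sizes, not the authors' released code):

```python
import numpy as np

def word_word_similarity(U, R):
    """Word-level similarity matrix M1 of Equation (1):
    M1[i, j] = e_{u,i} . e_{r,j}.
    U: d x n_u word embeddings of the utterance.
    R: d x n_r word embeddings of the response."""
    return U.T @ R  # one dot product per word pair, shape n_u x n_r

rng = np.random.default_rng(0)
d, n_u, n_r = 4, 3, 5                 # toy sizes; real embeddings are larger
U = rng.standard_normal((d, n_u))
R = rng.standard_normal((d, n_r))
M1 = word_word_similarity(U, R)
```

Because the whole matrix is a single matrix product, this channel costs one `n_u x d` by `d x n_r` multiplication per utterance-response pair.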
{
"text": "M 1 models the matching between u and r on a word level. To construct M 2 , we first employ a GRU to transform U and R to hidden vectors. Suppose that H u = [h u,1 , . . . , h u,nu ] are the hidden vectors of U, then \u2200i, h u,i \u2208 R m is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "zi = \u03c3(Wzeu,i + Uzhu,i\u22121) ri = \u03c3(Wreu,i + Urhu,i\u22121) hu,i = tanh(W h eu,i + U h (ri hu,i\u22121)) hu,i = zi hu,i + (1 \u2212 zi) hu,i\u22121,",
"eq_num": "(2)"
}
],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
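Equation (2) is the standard GRU recurrence. The following sketch spells out one step in numpy (illustrative only; parameter names and the tiny sizes are made up, and the candidate state is written `h_tilde`):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e, h_prev, P):
    """One GRU step following Equation (2): update gate z, reset gate r,
    candidate state h_tilde, then interpolation into the new hidden state."""
    z = sigmoid(P["Wz"] @ e + P["Uz"] @ h_prev)
    r = sigmoid(P["Wr"] @ e + P["Ur"] @ h_prev)
    h_tilde = np.tanh(P["Wh"] @ e + P["Uh"] @ (r * h_prev))
    return z * h_tilde + (1.0 - z) * h_prev

rng = np.random.default_rng(1)
d, m = 4, 3                            # embedding and hidden sizes (toy values)
P = {k: 0.1 * rng.standard_normal((m, d if k.startswith("W") else m))
     for k in ["Wz", "Wr", "Wh", "Uz", "Ur", "Uh"]}
h = np.zeros(m)                        # h_{u,0} = 0
for e in rng.standard_normal((5, d)):  # run over a 5-word toy sequence
    h = gru_step(e, h, P)
```

Since each state is a convex combination of a tanh output and the previous state starting from zero, every coordinate of `h` stays in (-1, 1).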
{
"text": "where h u,0 = 0, z i and r i are an update gate and a reset gate respectively, \u03c3(\u2022) is a sigmoid function, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W z , W h , W r , U z , U r ,U h are parameters. Similarly, we have H r = [h r,1 , . . . , h r,nr ] as the hidden vectors of R. Then, \u2200i, j, the (i, j)-th ele- ment of M 2 is defined by e2,i,j = h u,i Ahr,j,",
"eq_num": "(3)"
}
],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
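The segment-level channel of Equation (3) is a bilinear form between GRU hidden states, and can also be computed as one chained matrix product. A small numpy sketch (toy sizes, illustrative names):

```python
import numpy as np

def segment_similarity(Hu, Hr, A):
    """Segment-level similarity matrix M2 of Equation (3):
    M2[i, j] = h_{u,i}^T A h_{r,j}, where Hu (n_u x m) and Hr (n_r x m)
    stack the GRU hidden states row-wise and A is learned."""
    return Hu @ A @ Hr.T  # shape n_u x n_r

rng = np.random.default_rng(2)
m, n_u, n_r = 3, 4, 6
Hu = rng.standard_normal((n_u, m))
Hr = rng.standard_normal((n_r, m))
A = rng.standard_normal((m, m))       # the linear transformation of Eq. (3)
M2 = segment_similarity(Hu, Hr, A)
```

With A = I this would reduce to plain inner products of hidden states; learning A lets the model reweight hidden dimensions when comparing segments.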
{
"text": "where A \u2208 R m\u00d7m is a linear transformation. \u2200i, GRU models the sequential relationship and the dependency among words up to position i and encodes the text segment until the i-th word to a hidden vector. Therefore, M 2 models the matching between u and r on a segment level. M 1 and M 2 are then processed by a CNN to form v. \u2200f = 1, 2, CNN regards M f as an input channel, and alternates convolution and max-pooling operations. Suppose that z (l,f ) ",
"cite_spans": [
{
"start": 444,
"end": 450,
"text": "(l,f )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "= z (l,f ) i,j I (l,f ) \u00d7J (l,f )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "denotes the output of feature maps of type-f on layer-l, where z (0,f ) = M f , \u2200f = 1, 2. On the convolution layer, we employ a 2D convolution operation with a window size",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "r (l,f ) w \u00d7 r (l,f ) h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": ", and define z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "(l,f ) i,j as z (l,f ) i,j = \u03c3( F l\u22121 f =0 r (l,f ) w s=0 r (l,f ) h t=0 W (l,f ) s,t \u2022 z (l\u22121,f ) i+s,j+t + b l,k ), (4) where \u03c3(\u2022) is a ReLU, W (l,f ) \u2208 R r (l,f ) w \u00d7r (l,f ) h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "and b l,k are parameters, and F l\u22121 is the number of feature maps on the (l \u2212 1)-th layer. A max pooling operation follows a convolution operation and can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z (l,f ) i,j = max p (l,f ) w >s\u22650 max p (l,f ) h >t\u22650 zi+s,j+t,",
"eq_num": "(5)"
}
],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
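The convolution-and-pooling distillation of Equations (4)-(5) can be sketched end to end in plain numpy: a valid 2D convolution over the two similarity channels with ReLU, non-overlapping max pooling, then a linear map to the matching vector v. All sizes and weights below are toy assumptions, not the paper's configuration:

```python
import numpy as np

def conv_relu(channels, W, b):
    """Valid 2D convolution over input channels followed by ReLU (Eq. 4).
    channels: list of equally sized 2D matrices (here M1, M2);
    W: (n_channels, rw, rh) filter; b: scalar bias."""
    n_ch, rw, rh = W.shape
    H, Wd = channels[0].shape
    out = np.zeros((H - rw + 1, Wd - rh + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = np.stack([c[i:i + rw, j:j + rh] for c in channels])
            out[i, j] = np.sum(W * patch) + b
    return np.maximum(out, 0.0)

def max_pool(z, pw, ph):
    """Non-overlapping 2D max pooling (Eq. 5)."""
    H, W = z.shape
    return z[:H - H % pw, :W - W % ph].reshape(H // pw, pw, W // ph, ph).max(axis=(1, 3))

rng = np.random.default_rng(3)
M1, M2 = rng.standard_normal((2, 8, 8))        # toy similarity matrices
feat = max_pool(conv_relu([M1, M2], rng.standard_normal((2, 3, 3)), 0.1), 2, 2)
Wp = rng.standard_normal((5, feat.size))       # linear map to a q = 5 vector
v = Wp @ feat.ravel()                          # matching vector v
```

High-similarity regions in M1/M2 survive the ReLU and dominate the pooled maxima, which is exactly the "distillation" effect the text describes.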
{
"text": "where p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "(l,f ) w and p (l,f ) h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "are the width and the height of the 2D pooling respectively. The output of the final feature maps are concatenated and mapped to a low dimensional space with a linear transformation as the matching vector v \u2208 R q .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "According to Equation (1), (3), (4), and (5), we can see that by learning word embedding and parameters of GRU from training data, words or segments in an utterance that are useful for recognizing the appropriateness of a response may have high similarity with some words or segments in the response and result in high value areas in the similarity matrices. These areas will be transformed and selected by convolution and pooling operations and carry important information in the utterance to the matching vector. This is how our model identifies important information in context and leverage it in matching under the supervision of the response. We consider multiple channels because we want to capture important matching information on multiple levels of granularity of text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utterance-Response Matching",
"sec_num": "3.3"
},
{
"text": "Suppose that [v 1 , . . . , v n ] is the output of the first layer (corresponding to n pairs), at the second layer, a GRU takes [v 1 , . . . , v n ] as an input and encodes the matching sequence into its hidden states H m = [h 1 , . . . , h n ] \u2208 R q\u00d7n with a detailed parameterization similar to Equation (2). This layer has two functions: (1) it models the dependency and the temporal relationship of utterances in the context; (2) it leverages the temporal relationship to supervise the accumulation of the pair matching as a context based matching. Moreover, from Equation (2), we can see that the reset gate (i.e., r i ) and the update gate (i.e., z i ) control how much information from the previous hidden state and the current input flows to the current hidden state, thus important matching vectors (corresponding to important utterances) can be accumulated while noise in the vectors can be filtered out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Accumulation",
"sec_num": "3.4"
},
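The accumulation layer is the same GRU recurrence as Equation (2), but run over the matching vectors, and it keeps every hidden state for the prediction layer. A toy numpy sketch (all names and sizes are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def accumulate_matching(vs, P):
    """Second layer: run a GRU over the matching vectors [v_1, ..., v_n]
    (one per utterance-response pair, in chronological order) and keep
    every hidden state [h_1, ..., h_n] for the prediction layer."""
    q = P["Uz"].shape[0]
    h, states = np.zeros(q), []
    for v in vs:
        z = sigmoid(P["Wz"] @ v + P["Uz"] @ h)
        r = sigmoid(P["Wr"] @ v + P["Ur"] @ h)
        h_tilde = np.tanh(P["Wh"] @ v + P["Uh"] @ (r * h))
        h = z * h_tilde + (1.0 - z) * h
        states.append(h)
    return states

rng = np.random.default_rng(4)
q = 3                                  # toy matching-vector size
P = {k: 0.1 * rng.standard_normal((q, q))
     for k in ["Wz", "Wr", "Wh", "Uz", "Ur", "Uh"]}
states = accumulate_matching(rng.standard_normal((4, q)), P)   # n = 4 pairs
```

Feeding the pairs in chronological order is what lets the utterance order, via the gates, decide how strongly each pair's matching signal persists in the hidden states.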
{
"text": "With [h 1 , . . . , h n ], we define g(s, r) as g(s, r) = sof tmax(W2L[h 1 , . . . , h n ] + b2), (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "where W 2 and b 2 are parameters. We consider three parameterizations for L[h 1 , . . . , h n ]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "(1) only the last hidden state is used. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "L[h 1 , . . . , h n ] = h n . (2) the hidden states are linearly combined. Then, L[h 1 , . . . , h n ] = n i=1 w i h i , where w i \u2208 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "(3) we follow (Yang et al., 2016) and employ an attention mechanism to combine the hidden states. Then,",
"cite_spans": [
{
"start": 14,
"end": 33,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L[h 1 , . . . , h n ] is defined as ti = tanh(W1,1hu i ,nu + W1,2h i + b1), \u03b1i = exp(t i ts) i (exp(t i ts)) , L[h 1 , . . . , h n ] = n i=1 \u03b1ih i ,",
"eq_num": "(7)"
}
],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "where W 1,1 \u2208 R q\u00d7m , W 1,2 \u2208 R q\u00d7q and b 1 \u2208 R q are parameters. h i and h u i ,nu are the i-th matching vector and the final hidden state of the i-th utterance respectively. t s \u2208 R q is a virtual context vector which is randomly initialized and jointly learned in training. Both (2) and (3) aim to learn weights for {h 1 , . . . , h n } from training data and highlight the effect of important matching vectors in the final matching. The difference is that weights in (2) are static, because the weights are totally determined by the positions of utterances, while weights in (3) are dynamically computed by the matching vectors and utterance vectors. We denote our model with the three parameterizations of L[h 1 , . . . , h n ] as SMN last , SMN static , and SMN dynamic , and empirically compare them in experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
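The three parameterizations of L (last state, static weights, attention as in Equation (7)) differ only in how the hidden states are aggregated. A toy numpy sketch (parameter names and sizes are illustrative assumptions):

```python
import numpy as np

def L_last(states):
    return states[-1]                                      # SMN_last

def L_static(states, w):
    return sum(w_i * h_i for w_i, h_i in zip(w, states))   # SMN_static

def L_dynamic(states, utter_finals, W11, W12, b1, ts):
    """SMN_dynamic: attention over hidden states (Eq. 7), scored against
    a learned virtual context vector ts; utter_finals are h_{u_i, n_u}."""
    t = [np.tanh(W11 @ hu + W12 @ h + b1)
         for hu, h in zip(utter_finals, states)]
    e = np.exp([ti @ ts for ti in t])
    alpha = e / e.sum()                                    # softmax weights
    return sum(a * h for a, h in zip(alpha, states))

rng = np.random.default_rng(5)
q, m, n = 3, 4, 5
states = list(rng.standard_normal((n, q)))                 # [h_1, ..., h_n]
utter_finals = list(rng.standard_normal((n, m)))           # h_{u_i, n_u}
agg = L_dynamic(states, utter_finals,
                rng.standard_normal((q, m)), rng.standard_normal((q, q)),
                rng.standard_normal(q), rng.standard_normal(q))
```

SMN_last discards all but the final state, SMN_static learns one weight per position, and SMN_dynamic recomputes the weights per example, which is why only it can adapt to which utterances actually matter in a given context.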
{
"text": "We learn g(\u2022, \u2022) by minimizing cross entropy with D. Let \u0398 denote the parameters of SMN, then the objective function L(D, \u0398) of learning can be formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "\u2212 N i=1 [yilog(g(si, ri)) + (1 \u2212 yi)log(1 \u2212 g(si, ri))] . (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching Prediction and Learning",
"sec_num": "3.5"
},
{
"text": "In practice, a retrieval-based chatbot, to apply the matching approach to the response selection, one needs to retrieve a number of response candidates from an index beforehand. While candidate retrieval is not the focus of the paper, it is an important step in a real system. In this work, we exploit a heuristic method to obtain response candidates from the index. Given a message u n with {u 1 , . . . , u n\u22121 } utterances in its previous turns, we extract the top 5 keywords from {u 1 , . . . , u n\u22121 } based on their tf-idf scores 1 and expand u n with the keywords. Then we send the expanded message to the index and retrieve response candidates using the inline retrieval algorithm of the index. Finally, we use g(s, r) to rerank the candidates and return the top one as a response to the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Response Candidate Retrieval",
"sec_num": "4"
},
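The keyword-expansion heuristic of Section 4 can be sketched in a few lines of stdlib Python. The idf table, corpus, and turns below are toy stand-ins (the paper does not specify its idf source here):

```python
import math
from collections import Counter

def top_keywords(history, idf, k=5):
    """Heuristic from Section 4: score words in the previous turns by
    tf-idf and keep the top-k as expansion terms for the last message.
    `idf` is assumed precomputed from some background corpus."""
    tf = Counter(w for turn in history for w in turn.split())
    scores = {w: c * idf.get(w, 0.0) for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

# Toy idf: common words get low weight, topical words high weight.
idf = {"drum": math.log(100), "class": math.log(50), "the": math.log(1.1),
       "a": math.log(1.05), "hold": math.log(20)}
history = ["i want to hold a drum class", "the drum is fun"]
message = "which lesson should i take"
expanded = message + " " + " ".join(top_keywords(history, idf))
```

The expanded message is then sent to the index, and the retrieved candidates are reranked with g(s, r).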
{
"text": "We tested our model on a publicly available English data set and a Chinese data set published with this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The English data set is the Ubuntu Corpus (Lowe et al., 2015) which contains multi-turn dialogues collected from chat logs of the Ubuntu Forum. The data set consists of 1 million context-response pairs for training, 0.5 million pairs for validation, and 0.5 million pairs for testing. Positive responses are true responses from humans, and negative ones are randomly sampled. The ratio of the positive and the negative is 1:1 in training, and 1:9 in validation and testing. We used the copy shared by 2 in which numbers, urls, and paths are replaced by special placeholders. We followed (Lowe et al., 2015) and employed recall at position k in n candidates (R n @k) as evaluation metrics.",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 587,
"end": 606,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": "5.1"
},
{
"text": "The Ubuntu Corpus is a domain specific data set, and response candidates are obtained from negative sampling without human judgment. To further verify the efficacy of our model, we created a new data set with open domain conversations, called the Douban Conversation Corpus. Response candidates in the test set of the Douban Conversation Corpus are collected following the procedure of a retrieval-based chatbot and are labeled by human judges. It simulates the real scenario of a retrievalbased chatbot. We publish it to research communities to facilitate the research of multi-turn response selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Douban Conversation Corpus",
"sec_num": "5.2"
},
{
"text": "Specifically, we crawled 1.1 million dyadic dialogues (conversation between two persons) longer than 2 turns from Douban group 3 which is a popular social networking service in China. We randomly sampled 0.5 million dialogues for creating a training set, 25 thousand dialouges for creating a validation set, and 1, 000 dialogues for creating a test set, and made sure that there is no overlap between the three sets. For each dialogue in training and validation, we took the last turn as a positive response for the previous turns as a context and randomly sampled another response from the 1.1 million data as a negative response. There are 1 million context-response pairs in the training set and 50 thousand pairs in the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Douban Conversation Corpus",
"sec_num": "5.2"
},
{
"text": "To create the test set, we first crawled 15 million post-reply pairs from Sina Weibo 4 which is the largest microblogging service in China and indexed the pairs with Lucene 5 . We took the last turn of each Douban dyadic dialogue in the test set as a message, retrieved 10 response candidates from the index following the method in Section 4, and finally formed a test set with 10, 000 context-response pairs. We recruited three labelers to judge if a candidate is a proper response to the context. A proper response means the response can naturally reply to the message given the whole context. Each pair received three labels and the majority of the labels were taken as the final decision. Table 2 gives the statistics of the three sets. Note that the Fleiss' kappa (Fleiss, 1971 ) of the labeling is 0.41, which indicates that the three labelers reached a relatively high agreement.",
"cite_spans": [
{
"start": 769,
"end": 782,
"text": "(Fleiss, 1971",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 693,
"end": 700,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Douban Conversation Corpus",
"sec_num": "5.2"
},
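As a sanity check on the agreement statistic, Fleiss' kappa can be computed directly from the label counts. A minimal sketch follows; the matrix layout (one row per context-response pair, one column per label category, each row summing to the number of raters) is the standard formulation, not something specified by the corpus release.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa (Fleiss, 1971) for a ratings matrix.

    ratings[i][j] = number of raters who assigned item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(ratings)            # number of items
    n = sum(ratings[0])         # raters per item
    k = len(ratings[0])         # number of categories
    # Proportion of all assignments that went to each category.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-item agreement: fraction of agreeing rater pairs.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N        # observed agreement
    P_e = sum(x * x for x in p) # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)
```

With three raters and two categories (improper/proper response), perfect agreement on every item yields kappa = 1, and systematic disagreement drives it below zero.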
{
"text": "Besides R n @ks, we also followed the conven- et al., 1999) , and precision at position 1 (P@1) as evaluation metrics. We did not calculate R 2 @1 because in Douban corpus one context could have more than one correct responses, and we have to randomly sample one for R 2 @1, which may bring bias to evaluation. When using the labeled set, we removed conversations with all negative responses or all positive responses, as models make no difference with them. There are 6, 670 contextresponse pairs left in the test set.",
"cite_spans": [
{
"start": 46,
"end": 59,
"text": "et al., 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Douban Conversation Corpus",
"sec_num": "5.2"
},
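The metrics above can be computed per context as sketched below; `metrics_for_context` is a hypothetical helper, and R_10@1 is implemented here as the fraction of a context's positive responses retrieved at the top position, following the R_n@k recall definition used in this paper.

```python
def metrics_for_context(scores, labels):
    """MAP, MRR, P@1 and R_10@1 contributions for one context.

    scores: model scores for the 10 candidates; labels: 1 for proper
    responses, 0 otherwise (a context may have several positives).
    """
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])
    labs = [l for _, l in ranked]
    n_pos = sum(labs)
    # Average precision: mean of precision at each positive position.
    hits, precisions = 0, []
    for rank, l in enumerate(labs, 1):
        if l:
            hits += 1
            precisions.append(hits / rank)
    ap = sum(precisions) / n_pos
    rr = 1 / (labs.index(1) + 1)   # reciprocal rank of the first positive
    p_at_1 = labs[0]               # precision at position 1
    r10_at_1 = labs[0] / n_pos     # share of positives recalled in top 1
    return ap, rr, p_at_1, r10_at_1
```

Corpus-level MAP/MRR/P@1 are then averages of these per-context values over the 6,670 test pairs.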
{
"text": "We considered the following baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.3"
},
{
"text": "Basic models: models in (Lowe et al., 2015) and (Kadlec et al., 2015) including TF-IDF, RNN, CNN, LSTM and BiLSTM.",
"cite_spans": [
{
"start": 24,
"end": 43,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 48,
"end": 69,
"text": "(Kadlec et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.3"
},
{
"text": "Multi-view: the model proposed by that utilizes a hierarchical recurrent neural network to model utterance relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.3"
},
{
"text": "Deep learning to respond (DL2R): the model proposed by that reformulates the message with other utterances in the context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.3"
},
{
"text": "Advanced single-turn matching models: since BiLSTM does not represent the state-ofthe-art matching model, we concatenated the utterances in a context and matched the long text with a response candidate using more powerful models including MV-LSTM (Wan et al., 2016) (2D matching), Match-LSTM (Wang and Jiang, 2015) , Attentive-LSTM (Tan et al., 2015 ) (two attention based models), and Multi-Channel which is described in Section 3.3. Multi-Channel is a simple version of our model without considering utterance relationships. We also appended the top 5 tf-idf words in context to the input message, and computed the score between the expanded message and a response with Multi-Channel, denoted as Multi-Channel exp .",
"cite_spans": [
{
"start": 292,
"end": 314,
"text": "(Wang and Jiang, 2015)",
"ref_id": "BIBREF23"
},
{
"start": 332,
"end": 349,
"text": "(Tan et al., 2015",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5.3"
},
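The message expansion used for Multi-Channel exp can be sketched as follows, assuming an `idf` table computed from the entire index (the definition used in this paper: tf is word frequency inside the context, idf comes from the whole index). The function name and the whitespace tokenization are illustrative assumptions.

```python
from collections import Counter

def expand_message(context, idf, top_k=5):
    """Append the top-k tf-idf words of the context to the input message.

    `context` is a list of utterance strings whose last element is the
    input message; `idf` maps word -> inverse document frequency taken
    from the whole index.
    """
    tf = Counter(w for turn in context for w in turn.split())
    keywords = sorted(tf, key=lambda w: tf[w] * idf.get(w, 0.0),
                      reverse=True)[:top_k]
    return context[-1] + " " + " ".join(keywords)
```

The expanded message then replaces the original message as input to the single-turn matching model.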
{
"text": "Douban Conversation Corpus R2@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 TF-IDF Table 3 : Evaluation results on the two data sets. Numbers in bold mean that the improvement is statistically significant compared with the best baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "For baseline models, if their results are available in existing literature (e.g., those on the Ubuntu corpus), we just copied the numbers, otherwise we implemented the models following the settings in the literatures. All models were implemented using Theano (Theano Development Team, 2016). Word embeddings were initialized by the results of word2vec (Mikolov et al., 2013) which ran on the training data, and the dimensionality of word vectors is 200. For Multi-Channel and layer one of our model, we set the dimensionality of the hidden states of GRU as 200. We tuned the window size of convolution and pooling in {(2, 2), (3, 3)(4, 4)} and chose (3, 3) finally. The number of feature maps is 8. In layer two, we set the dimensionality of matching vectors and the hidden states of GRU as 50. The parameters were updated by stochastic gradient descent with Adam algorithm (Kingma and Ba, 2014) on a single Tesla K80 GPU. The initial learning rate is 0.001, and the parameters of Adam, \u03b2 1 and \u03b2 2 are 0.9 and 0.999 respectively. We employed early-stopping as a regularization strategy. Models were trained in minibatches with a batch size of 200, and the maximum utterance length is 50. We set the maximum context length (i.e., number of utterances) as 10, because the performance of models does not improve on contexts longer than 10 (details are shown in the Section 5.6). We padded zeros if the number of utterances in a context is less than 10, otherwise we kept the last 10 utterances. Table 3 shows the evaluation results on the two data sets. Our models outperform baselines greatly in terms of all metrics on both data sets, with the improvements being statistically significant (t-test with p-value \u2264 0.01, except R 10 @5 on Douban Corpus). Even the state-of-the-art singleturn matching models perform much worse than our models. 
The results demonstrate that one cannot neglect utterance relationships and simply perform multi-turn response selection by concatenating utterances together. Our models achieve significant improvements over Multi-View, which justified our \"matching first\" strategy. DL2R is worse than our models, indicating that utterance reformulation with heuristic rules is not a good method for utilizing context information. R n @ks are low on the Douban Corpus as there are multiple correct candidates for a context (e.g., if there are 3 correct responses, then the maximum R 10 @1 is 0.33). SMN dynamic is only slightly better than SMN static and SMN last . The reason might be that the GRU can select useful signals from the matching sequence and accumulate them in the final state with its gate mechanism, thus the efficacy of an attention mechanism is not obvious for the task at hand.",
"cite_spans": [
{
"start": 352,
"end": 374,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1493,
"end": 1500,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "5.4"
},
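The padding and truncation rules above might be implemented as below; which side the all-zero utterances are padded on is an assumption not stated in the text, and the function name is illustrative.

```python
def normalize_context(utterances, max_turns=10, max_len=50, pad_id=0):
    """Keep the last `max_turns` utterances of a context and pad or
    truncate each one (a list of token ids) to `max_len` tokens.

    Contexts shorter than `max_turns` are left-padded with all-zero
    utterances; longer contexts keep only the last `max_turns` turns.
    """
    utterances = utterances[-max_turns:]
    utterances = [[]] * (max_turns - len(utterances)) + list(utterances)
    return [u[:max_len] + [pad_id] * (max_len - len(u[:max_len]))
            for u in utterances]
```

Every context thus becomes a fixed 10 x 50 grid of token ids, which is what the convolution and pooling layers of the model consume.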
{
"text": "Visualization: we visualize the similarity matrices and the gates of GRU in layer two using an example from the Ubuntu corpus to further clarify how our model identifies important information in the context and how it selects important matching vectors with the gate mechanism of GRU as described in Section 3.3 and Section 3.4. The example is {u 1 : how can unzip many rar ( number for example ) files at once; u 2 : sure you can do that in bash; u 3 : okay how? u 4 : are the files all",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Analysis",
"sec_num": "5.6"
},
{
"text": "Douban Conversation Corpus R2@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@ in the same directory? u 5 : yes they all are; r: then the command glebihan should extract them all from/to that directory}. It is from the test set and our model successfully ranked the correct response to the top position. Due to space limitation, we only visualized M 1 , M 2 and the update gate (i.e. z) in Figure 2 . We can see that in u 1 important words including \"unzip\", \"rar\", \"files\" are recognized and carried to matching by \"command\", \"extract\", and \"directory\" in r, while u 3 is almost useless and thus little information is extracted from it. u 1 is crucial to response selection and nearly all information from u 1 and r flows to the hidden state of GRU, while other utterances are less informative and the corresponding gates are almost \"closed\" to keep the information from u 1 and r until the final state.",
"cite_spans": [],
"ref_spans": [
{
"start": 390,
"end": 398,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "Model ablation: we investigate the effect of different parts of SMN by removing them one by one from SMN last , shown in Table 4 . First, replacing the multi-channel \"2D\" matching with a neural tensor network (NTN) (Socher et al., 2013 ) (denoted as Replace M ) makes the performance drop dramatically. This is because NTN only matches a pair by an utterance vector and a response vector and loses important information in the pair. Together with the visualization, we can conclude that \"2D\" matching plays a key role in the \"matching first\" strategy as it captures the important matching information in each pair with minimal loss. Second, the performance drops slightly when replacing the GRU for matching accumulation with a multi-layer perceptron (denoted as Replace A ). This indicates that utterance relationships are useful. Finally, we left only one channel in matching and found that M 2 is a little more powerful than M 1 and we achieve the best results with both of them (except on R 10 @5 on the Douban Corpus).",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Socher et al., 2013",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "Performance across context length: we study how our model (SMN last ) performs across the length of contexts. Figure 3 shows the comparison on MAP in different length intervals on the Douban corpus. Our model consistently performs better than the baselines, and when contexts become longer, the gap becomes larger. The results demonstrate that our model can well capture the dependencies, especially long dependencies, among utterances in contexts. Figure 4 shows the performance of SMN on Ubuntu Corpus and Douban Corpus with respect to maximum context length. From Figure 4 , we find that performance improves significantly when the maximum context length is lower than 5, and becomes stable after the context length reaches 10. This indicates that context information is important for multi-turn response selection, and we can set the maximum context length as 10 to balance effectiveness and efficiency.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 449,
"end": 457,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 567,
"end": 575,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "Error analysis: although SMN outperforms baseline methods on the two data sets, there are (1) Logical consistency. SMN models the context and response on the semantic level, but pays little attention to logical consistency. This leads to several DSATs in the Douban Corpus. For example, given a context {a: Does anyone know Newton jogging shoes? b: 100 RMB on Taobao. a: I know that. I do not want to buy it because that is a fake which is made in Qingdao ,b: Is it the only reason you do not want to buy it? }, SMN gives a large score to the response { It is not a fake. I just worry about the date of manufacture}. The response is inconsistent with the context on logic, as it claims that the jogging shoes are not fake. In the future, we shall explore the logic consistency problem in retrieval-based chatbots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "(2) No correct candidates after retrieval. In the experiment, we prepared 1000 contexts for testing, but only 667 contexts have correct candidates after candidate response retrieval. This indicates that there is still room for candidate retrieval components to improve, and only expanding the input message with several keywords in context may not be a perfect approach for candidate retrieval. In the future, we will consider advanced methods for retrieving candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ubuntu Corpus",
"sec_num": null
},
{
"text": "We present a new context based model for multiturn response selection in retrieval-based chatbots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Experiment results on open data sets show that the model can significantly outperform the stateof-the-art methods. Besides, we publish the first human-labeled multi-turn response selection data set to research communities. In the future, we shall study how to model logical consistency of responses and improve candidate retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Tf is word frequency in the context, while idf is calculated using the entire index.2 https://www.dropbox.com/s/ 2fdn26rj6h9bpvl/ubuntudata.zip?dl=0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.douban.com/group 4 http://weibo.com/ 5 https://lucenenet.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We appreciate valuable comments provided by anonymous reviewers and our discussions with Zhao Yan. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modern information retrieval",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
},
{
"first": "Berthier",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "463",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval, volume 463. ACM press New York.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.3555"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence model- ing. arXiv preprint arXiv:1412.3555 .",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "L",
"middle": [],
"last": "Joseph",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin 76(5):378.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convolutional neural network architectures for matching natural language sentences",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2042--2050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network archi- tectures for matching natural language sentences. In Advances in Neural Information Processing Sys- tems. pages 2042-2050.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An information retrieval approach to short text conversation",
"authors": [
{
"first": "Zongcheng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.6988"
]
},
"num": null,
"urls": [],
"raw_text": "Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conver- sation. arXiv preprint arXiv:1408.6988 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improved deep learning baselines for ubuntu corpus dialogs",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.03753"
]
},
"num": null,
"urls": [],
"raw_text": "Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for ubuntu corpus dialogs. arXiv preprint arXiv:1510.03753 .",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.03055"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055 .",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06155"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural con- versation model. arXiv preprint arXiv:1603.06155 .",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.08909"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. arXiv preprint arXiv:1506.08909 .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Data-driven response generation in social media",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "William B",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "583--593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 583-593.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Iulian V Serban",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1507.04808"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015. Build- ing end-to-end dialogue systems using generative hi- erarchical neural network models. arXiv preprint arXiv:1507.04808 .",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multiresolution recurrent neural networks: An application to dialogue response generation",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Tesauro",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Talamadupula",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.00776"
]
},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Ben- gio, and Aaron Courville. 2016. Multiresolu- tion recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776 .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Neural responding machine for short-text conversation",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL 2015",
"volume": "1",
"issue": "",
"pages": "1577--1586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers. pages 1577-1586.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural ten- sor networks for knowledge base completion. In Ad- vances in Neural Information Processing Systems. pages 926-934.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lstmbased deep learning models for non-factoid answer selection",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.04108"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Bing Xiang, and Bowen Zhou. 2015. Lstm- based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Theano: A Python framework for fast computation of mathematical expressions",
"authors": [],
"year": 2016,
"venue": "Theano Development Team",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theano Development Team. 2016. Theano: A Python framework for fast computation of mathe- matical expressions. arXiv e-prints abs/1605.02688. http://arxiv.org/abs/1605.02688.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.05869"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869 .",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The trec-8 question answering track report",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ellen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1999,
"venue": "Trec",
"volume": "99",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen M Voorhees et al. 1999. The trec-8 question an- swering track report. In Trec. volume 99, pages 77- 82.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Match-srnn: Modeling the recursive matching structure with spatial rnn",
"authors": [
{
"first": "Yanyan",
"middle": [],
"last": "Shengxian Wan",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.04378"
]
},
"num": null,
"urls": [],
"raw_text": "Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, and Xueqi Cheng. 2016. Match-srnn: Modeling the recursive matching structure with spa- tial rnn. arXiv preprint arXiv:1604.04378 .",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A dataset for research on short-text conversations",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "935--945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In EMNLP. pages 935-945.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Syntax-based deep matching of short texts",
"authors": [
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02427"
]
},
"num": null,
"urls": [],
"raw_text": "Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. arXiv preprint arXiv:1503.02427 .",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning natural language inference with lstm",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.08849"
]
},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2015. Learning nat- ural language inference with lstm. arXiv preprint arXiv:1512.08849 .",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Ranking responses oriented to conversational relevance in chat-bots",
"authors": [
{
"first": "Bowen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Baoxun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "16",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bowen Wu, Baoxun Wang, and Hui Xue. 2016a. Rank- ing responses oriented to conversational relevance in chat-bots. COLING16 .",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Topic augmented neural network for short text conversation",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2016b. Topic augmented neural network for short text con- versation. CoRR abs/1605.00090.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Topic augmented neural response generation with a joint attention mechanism",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yalou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08340"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. arXiv preprint arXiv:1606.08340.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Incorporating loose-structured knowledge into lstm with recall gate for conversation modeling",
"authors": [
{
"first": "Zhen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baoxun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chengjie",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.05110"
]
},
"num": null,
"urls": [],
"raw_text": "Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into lstm with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning to respond with deep neural networks for retrievalbased human-computer conversation system",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2016,
"venue": "SIGIR 2016",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {
"DOI": [
"10.1145/2911451.2911542"
]
},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In SIGIR 2016, Pisa, Italy, July 17-21, 2016, pages 55-64. https://doi.org/10.1145/2911451.2911542.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The hidden information state model: A practical framework for pomdp-based spoken dialogue management",
"authors": [
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Speech & Language",
"volume": "24",
"issue": "2",
"pages": "150--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steve Young, Milica Ga\u0161i\u0107, Simon Keizer, Fran\u00e7ois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language 24(2):150-174.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Multiview response selection for human-computer conversation",
"authors": [
{
"first": "Xiangyang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Tian",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, R Yan, D Yu, Xuan Liu, and H Tian. 2016. Multi-view response selection for human-computer conversation. EMNLP16.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Model visualization. Darker areas mean larger value.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Comparison across context length.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Performance of SMN across maximum context length.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>: Statistics of Douban Conversation Corpus</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"text": "Evaluation results of model ablation.",
"num": null,
"html": null,
"content": "<table><tr><td>how can unzip many rar ( _number_ for example ) files at once</td></tr><tr><td>then the command glebihan should extract them all from/to that directory</td></tr></table>"
}
}
}
}