| { |
| "paper_id": "P16-1049", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:58:52.619032Z" |
| }, |
| "title": "DocChat: An Information Retrieval Approach for Chatbot Engines Using Unstructured Documents", |
| "authors": [ |
| { |
| "first": "Zhao", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "State Key Laboratory of Software Development Environment", |
| "institution": "Beihang University", |
| "location": {} |
| }, |
| "email": "yanzhao@buaa.edu.cn" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "nanduan@microsoft.com" |
| }, |
| { |
| "first": "Junwei", |
| "middle": [], |
| "last": "Bao", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "baojunwei001@gmail.com" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "mingzhou@microsoft.com" |
| }, |
| { |
| "first": "Zhoujun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "State Key Laboratory of Software Development Environment", |
| "institution": "Beihang University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jianshe", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "zhoujs@cnu.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R) pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed. (ii) For Chinese, we compare DocChat with XiaoIce, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines that use Q-R pairs as their main source of responses.", |
| "pdf_parse": { |
| "paper_id": "P16-1049", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Most current chatbot engines are designed to reply to user utterances based on existing utterance-response (or Q-R) pairs. In this paper, we present DocChat, a novel information retrieval approach for chatbot engines that can leverage unstructured documents, instead of Q-R pairs, to respond to utterances. A learning to rank model with features designed at different levels of granularity is proposed to measure the relevance between utterances and responses directly. We evaluate our proposed approach in both English and Chinese: (i) For English, we evaluate DocChat on WikiQA and QASent, two answer sentence selection tasks, and compare it with state-of-the-art methods. Reasonable improvements and good adaptability are observed. (ii) For Chinese, we compare DocChat with XiaoIce, a famous chitchat engine in China, and side-by-side evaluation shows that DocChat is a perfect complement for chatbot engines that use Q-R pairs as their main source of responses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Building chatbot engines that can interact with humans with natural language is one of the most challenging problems in artificial intelligence. Along with the explosive growth of social media, like community question answering (CQA) websites (e.g., Yahoo Answers and WikiAnswers) and social media websites (e.g., Twitter and Weibo), the amount of utterance-response (or Q-R) pairs has experienced massive growth in recent years, and such a corpus greatly promotes the emergence of various data-driven chatbot approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Instead of multiple rounds of conversation, we consider a much simplified task, short text conversation (STC), in which the response R is a short text and depends only on the last user utterance Q. Previous methods for the STC task mostly rely on Q-R pairs and fall into two categories: Retrieval-based methods (e.g., Ji et al., 2014). This type of method first retrieves the \u27e8Q\u0302, R\u0302\u27e9 pair from a set of existing Q-R pairs that best matches the current utterance Q based on semantic matching models, then takes R\u0302 as the response R. One disadvantage of such methods is that, for many specific domains, collecting such Q-R pairs is intractable. Generation-based methods (e.g., Shang et al., 2015). This type of method usually uses an encoder-decoder framework, which first encodes Q into a vector representation, then feeds this representation to a decoder to generate the response R. Similar to retrieval-based methods, such approaches also depend on existing Q-R pairs as training data. As in other language generation tasks, such as machine translation and paraphrasing, the fluency and naturalness of machine-generated text is another drawback.", |
| "cite_spans": [ |
| { |
| "start": 322, |
| "end": 338, |
| "text": "Ji et al., 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 687, |
| "end": 706, |
| "text": "Shang et al., 2015)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To overcome the issues mentioned above, we present a novel response retrieval approach, DocChat, which finds responses based on unstructured documents. For each user utterance, instead of looking for the best Q-R pair or generating a word sequence with language generation techniques, our method selects a sentence from the given documents directly, by ranking all possible sentences based on features designed at different levels of granularity. On one hand, using documents rather than Q-R pairs greatly improves the adaptability of chatbot engines to different chatting topics. On the other hand, all responses come from existing documents, which guarantees their fluency and naturalness. We also show promising results in experiments, in both QA and chatbot scenarios.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Formally, given an utterance Q and a document set D, the document-based chatbot engine retrieves response R based on the following three steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 response retrieval, which retrieves response candidates C from D based on Q:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "C = Retrieve(Q, D)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each S \u2208 C is a sentence existing in D.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 response ranking, which ranks all response candidates in C and selects the most possible response candidate as \u015c:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u015c = arg max_{S \u2208 C} Rank(S, Q)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 response triggering, which decides whether it is confident enough to respond to Q using \u015c:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "I = Trigger(\u015c, Q)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where I is a binary value. When I is true, we let the response R = \u015c and output R; otherwise, we output nothing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the following three sections, we describe our solutions to these three components in turn.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Description", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Given a user utterance Q, the goal of response retrieval is to efficiently find a small number of sentences from D that are likely to contain suitable responses to Q. Although it is not necessarily true that a good response always shares more words with a given utterance, this measurement is still helpful in finding possible response candidates (Ji et al., 2014).", |
| "cite_spans": [ |
| { |
| "start": 371, |
| "end": 388, |
| "text": "(Ji et al., 2014)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Retrieval", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, the BM25 term weighting formula (Jones et al., 2000) is used to retrieve response candidates from documents. Given each document D_k \u2208 D, we collect a set of sentence triples \u27e8S_prev, S, S_next\u27e9 from D_k, where S denotes a sentence in D_k, and S_prev and S_next denote S's previous and next sentences respectively. Two special tags, \u27e8BOD\u27e9 and \u27e8EOD\u27e9, are added at the beginning and end of each passage, to make sure that such sentence triples can be extracted for every sentence in the document. The reason for indexing each sentence together with its context sentences is intuitive: if a sentence within a document can respond to an utterance, then its context should be relevant to the utterance as well.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 68, |
| "text": "(Jones et al., 2000)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Retrieval", |
| "sec_num": "3" |
| }, |
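As a concrete illustration of the retrieval step above, the sketch below indexes each sentence together with its neighbours (padded with BOD/EOD tags) and scores the concatenated triples with BM25. This is a minimal pure-Python sketch, not the paper's implementation; whitespace tokenization, the `<BOD>`/`<EOD>` tag spellings, and the parameters k1 and b are assumptions.

```python
import math
from collections import Counter

def build_triples(sentences):
    """Index each sentence with its previous and next sentence, padding
    with BOD/EOD tags so every sentence yields a full triple."""
    padded = ["<BOD>"] + sentences + ["<EOD>"]
    return [(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Pure-Python BM25 over whitespace-tokenized documents."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter(t for d in tokenized for t in set(d))  # document frequency
    scores = []
    for d in tokenized:
        tf = Counter(d)
        s = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

In use, each triple would be concatenated into one "document" so that a match against a sentence's context still surfaces that sentence as a candidate.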
| { |
| "text": "Given a user utterance Q and a response candidate S, the ranking function Rank(S, Q) is designed as an ensemble of individual matching features:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Ranking", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Rank(S, Q) = \u2211_k \u03bb_k \u00b7 h_k(S, Q), where h_k(\u00b7) denotes the k-th feature function and \u03bb_k denotes h_k(\u00b7)'s corresponding weight.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Ranking", |
| "sec_num": "4" |
| }, |
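The ranking function is a weighted sum of feature functions. A minimal sketch, assuming a hypothetical word-overlap feature and hand-set weights (in the paper the weights λ_k are learned by a learning-to-rank model):

```python
def best_response(candidates, Q, features, weights):
    """Return argmax_S Rank(S, Q), with Rank(S, Q) = sum_k lambda_k * h_k(S, Q)."""
    def rank(S):
        return sum(w * h(S, Q) for h, w in zip(features, weights))
    return max(candidates, key=rank)

# Hypothetical feature function, used only for illustration.
def h_overlap(S, Q):
    """Number of word types shared by S and Q."""
    return len(set(S.lower().split()) & set(Q.lower().split()))
```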
| { |
| "text": "We design features at different levels of granularity to measure the relevance between S and Q, including word-level, phrase-level, sentence-level, document-level, relation-level, type-level and topic-level features, which are introduced below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Ranking", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We define three word-level features in this work:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level Feature", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(1) h_WM(S, Q) denotes a word matching feature that counts the number (weighted by the IDF value of each word in S) of non-stopwords shared by S and Q. (2) h_W2W(S, Q) denotes a word-to-word translation-based feature that calculates the IBM model 1 score (Brown et al., 1993) of S and Q based on word alignments trained on 'question-related question' pairs using GIZA++ (Och and Ney, 2003). (3) h_W2V(S, Q) denotes a word embedding-based feature that calculates the average cosine distance between the word embeddings of all non-stopword pairs", |
| "cite_spans": [ |
| { |
| "start": 258, |
| "end": 278, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 372, |
| "end": 391, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level Feature", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u27e8v_j^S, v_i^Q\u27e9. v_j^S represents", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level Feature", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "the word vector of the j-th word in S and v_i^Q represents the word vector of the i-th word in Q.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word-level Feature", |
| "sec_num": "4.1" |
| }, |
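Two of the three word-level features can be sketched directly; h_W2W needs trained IBM model 1 alignment probabilities, so it is omitted here. The IDF table, embedding dictionary, and stopword set are assumed inputs, not artifacts from the paper:

```python
import math

def h_wm(S, Q, idf, stopwords=frozenset()):
    """h_WM: IDF-weighted count of non-stopwords shared by S and Q."""
    shared = (set(S.lower().split()) & set(Q.lower().split())) - stopwords
    return sum(idf.get(w, 0.0) for w in shared)

def h_w2v(S, Q, emb, stopwords=frozenset()):
    """h_W2V: average cosine similarity over all non-stopword word pairs."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    ws = [w for w in S.lower().split() if w in emb and w not in stopwords]
    wq = [w for w in Q.lower().split() if w in emb and w not in stopwords]
    pairs = [(a, b) for a in ws for b in wq]
    return sum(cos(emb[a], emb[b]) for a, b in pairs) / len(pairs) if pairs else 0.0
```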
| { |
| "text": "We first describe how to extract phrase-level paraphrases from an existing SMT (statistical machine translation) phrase table.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "PT = {\u27e8s_i, t_i, p(t_i|s_i), p(s_i|t_i)\u27e9}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "is a phrase table extracted from a bilingual corpus, where s_i (or t_i) denotes a phrase in the source (or target) language, and p(t_i|s_i) (or p(s_i|t_i)) denotes the translation probability from s_i to t_i (or from t_i to s_i). We follow Bannard and Callison-Burch (2005) to extract a paraphrase table PP = {\u27e8s_i, s_j, score(s_j; s_i)\u27e9}. s_i and s_j denote two phrases in the source language, and score(s_j; s_i) denotes a confidence score that s_i can be paraphrased to s_j, which is computed based on PT:", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 280, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "score(s_j; s_i) = \u2211_t {p(t|s_i) \u00b7 p(s_j|t)}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "The underlying idea of this approach is that two source phrases aligned to the same target phrase tend to be paraphrases of each other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "We then define a paraphrase-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "h_PP(S, Q) = (1/N) \u2211_{n=1}^{N} [\u2211_{j=0}^{|S|\u2212n} Count_PP(S_j^{j+n\u22121}, Q) / (|S|\u2212n+1)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "where S_j^{j+n\u22121} denotes the consecutive word sequence (or phrase) in S that starts with S_j and ends with S_{j+n\u22121}, and N denotes the maximum n-gram order (here, 3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "Count_PP(S_j^{j+n\u22121}, Q)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "is computed based on the following rules:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "\u2022 If S_j^{j+n\u22121} \u2208 Q, then Count_PP(S_j^{j+n\u22121}, Q) = 1; \u2022 Else, if \u27e8S_j^{j+n\u22121}, s, score(s; S_j^{j+n\u22121})\u27e9 \u2208 PP and S_j^{j+n\u22121}'s paraphrase s occurs in Q, then Count_PP(S_j^{j+n\u22121}, Q) = score(s; S_j^{j+n\u22121}); \u2022 Else, Count_PP(S_j^{j+n\u22121}, Q) = 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paraphrase", |
| "sec_num": "4.2.1" |
| }, |
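Putting the h_PP formula and the Count_PP rules together gives a short sketch. The paraphrase-table format used here (a phrase tuple mapped to a list of (paraphrase, score) pairs) is an assumption for illustration:

```python
def ngrams(tokens, n):
    """All consecutive n-grams of a token list, as tuples."""
    return [tuple(tokens[j:j + n]) for j in range(len(tokens) - n + 1)]

def count_pp(phrase, q_tokens, pp):
    """Count_PP: 1 if the phrase occurs in Q; the paraphrase score if one
    of its paraphrases occurs in Q; 0 otherwise."""
    q_ngrams = {tuple(q_tokens[j:j + k]) for k in range(1, len(q_tokens) + 1)
                for j in range(len(q_tokens) - k + 1)}
    if phrase in q_ngrams:
        return 1.0
    return max((score for para, score in pp.get(phrase, ())
                if para in q_ngrams), default=0.0)

def h_pp(S, Q, pp, N=3):
    """Average, over n-gram orders 1..N, of the mean Count_PP of S's n-grams."""
    s, q = S.split(), Q.split()
    total = 0.0
    for n in range(1, N + 1):
        grams = ngrams(s, n)
        if grams:  # skip orders longer than the sentence
            total += sum(count_pp(g, q, pp) for g in grams) / len(grams)
    return total / N
```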
| { |
| "text": "Similar to h_PP(S, Q), a phrase translation-based feature based on a phrase table PT is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-to-Phrase Translation", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "h_PT(S, Q) = (1/N) \u2211_{n=1}^{N} [\u2211_{j=0}^{|S|\u2212n} Count_PT(S_j^{j+n\u22121}, Q) / (|S|\u2212n+1)], where Count_PT(S_j^{j+n\u22121}, Q)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-to-Phrase Translation", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "is computed based on the following rules:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-to-Phrase Translation", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "\u2022 If S_j^{j+n\u22121} \u2208 Q, then Count_PT(S_j^{j+n\u22121}, Q) = 1; \u2022 Else, if \u27e8S_j^{j+n\u22121}, s, p(S_j^{j+n\u22121}|s), p(s|S_j^{j+n\u22121})\u27e9 \u2208 PT and S_j^{j+n\u22121}'s translation s \u2208 Q, then Count_PT(S_j^{j+n\u22121}, Q) = p(S_j^{j+n\u22121}|s) \u00b7 p(s|S_j^{j+n\u22121}); \u2022 Else, Count_PT(S_j^{j+n\u22121}, Q) = 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-to-Phrase Translation", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "We train a phrase table based on 'question-answer' pairs crawled from community QA websites.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase-to-Phrase Translation", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "We first present an attention-based sentence embedding method based on a convolutional neural network (CNN), whose input is a sentence pair and whose output is a pair of sentence embeddings. Two features, designed on top of two sentence embedding models trained with different types of data, will be introduced in Sections 4.3.1 and 4.3.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In the input layer, given a sentence pair \u27e8S_X, S_Y\u27e9, an attention matrix A \u2208 R^{|S_X| \u00d7 |S_Y|} is generated based on pre-trained word embeddings of S_X and S_Y, where each element A_{i,j} \u2208 A is computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "A_{i,j} = cosine(v_i^{S_X}, v_j^{S_Y}), where v_i^{S_X} (or v_j^{S_Y}) denotes the embedding vector of the i-th (or j-th) word in S_X (or S_Y).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Then, column-wise and row-wise max-pooling are applied to A to generate two attention vectors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "V^{S_X} \u2208 R^{|S_X|} and V^{S_Y} \u2208 R^{|S_Y|}, where the k-th elements of V^{S_X} and V^{S_Y} are computed as: V_k^{S_X} = max_{1\u2264l\u2264|S_Y|} {A_{k,l}} and V_k^{S_Y} = max_{1\u2264l\u2264|S_X|} {A_{l,k}}. V_k^{S_X} (or V_k^{S_Y}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ") can be interpreted as the attention score of the k-th word in S_X (or S_Y) with regard to all words in S_Y (or S_X).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Next, two attention distributions", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "D S X \u2208 R |S X | and D S Y \u2208 R |S Y |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "are generated for S X and S Y based on V S X and V S Y respectively, where the k th elements of D S X and D S Y are computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "D_k^{S_X} = e^{V_k^{S_X}} / \u2211_{l=1}^{|S_X|} e^{V_l^{S_X}} and D_k^{S_Y} = e^{V_k^{S_Y}} / \u2211_{l=1}^{|S_Y|} e^{V_l^{S_Y}}. D_k^{S_X} (or D_k^{S_Y}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ") can be interpreted as the normalized attention score of the k-th word in S_X (or S_Y) with regard to all words in S_Y (or S_X).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Last, we update each pre-trained word embedding", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "v_k^{S_X} (or v_k^{S_Y}) to \u1e7d_k^{S_X} (or \u1e7d_k^{S_Y}), by multiplying every value in v_k^{S_X} (or v_k^{S_Y}) by D_k^{S_X} (or D_k^{S_Y})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ". The underlying intuition of updating pre-trained word embeddings is to re-weight the importance of each word in S X (or S Y ) based on S Y (or S X ), instead of treating them in an equal manner.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
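The input-layer steps above (cosine attention matrix, row- and column-wise max-pooling, softmax normalization, embedding re-scaling) can be sketched with NumPy. Matrix shapes and variable names are assumptions; the paper's trained embeddings are replaced by arbitrary input arrays:

```python
import numpy as np

def attention_reweight(X, Y):
    """X: |S_X| x d and Y: |S_Y| x d pre-trained word embeddings.
    Returns attention-re-weighted copies of X and Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    A = Xn @ Yn.T                       # A[i, j] = cosine(v_i^{S_X}, v_j^{S_Y})
    Vx = A.max(axis=1)                  # row-wise max-pool: score per S_X word
    Vy = A.max(axis=0)                  # column-wise max-pool: score per S_Y word
    Dx = np.exp(Vx) / np.exp(Vx).sum()  # softmax-normalized attention
    Dy = np.exp(Vy) / np.exp(Vy).sum()
    # Re-weight each word embedding by its normalized attention score.
    return X * Dx[:, None], Y * Dy[:, None]
```

The re-weighted embeddings then feed the convolution layer in place of the raw pre-trained vectors.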
| { |
| "text": "In the convolution layer, we first derive an input matrix", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Z^{S_X} = {l_1, ..., l_{|S_X|}}, where l_t is the concatenation of a sequence of m = 2d+1 updated word embeddings [\u1e7d^{S_X}_{t\u2212d}, ..., \u1e7d^{S_X}_t, ..., \u1e7d^{S_X}_{t+d}]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ", centered on the t-th word in S_X. Then, the convolution layer performs sliding window-based feature extraction to project each vector representation l_t \u2208 Z^{S_X} to a contextual feature vector h_t^{S_X}:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "h_t^{S_X} = tanh(W_c \u00b7 l_t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where W c is the convolution matrix,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "tanh(x) = (1 \u2212 e^{\u22122x}) / (1 + e^{\u22122x})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "is the activation function. The same operation is performed on S_Y as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In the pooling layer, we aggregate local features extracted by the convolution layer from S X , and form a sentence-level global feature vector with a fixed size independent of the length of the input sentence. Here, max-pooling is used to force the network to retain the most useful local features by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "l_p^{S_X} = [v_1^{S_X}, ..., v_K^{S_X}], where v_i^{S_X} = max_{t=1,...,|S_X|} {h_t^{S_X}(i)}. h_t^{S_X}(i) denotes the i-th value in the vector h_t^{S_X}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The same operation is performed on S_Y as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In the output layer, one more non-linear transformation is applied to l_p^{S_X}:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "y(S_X) = tanh(W_s \u00b7 l_p^{S_X})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "W_s is the semantic projection matrix, and y(S_X) is the final sentence embedding of S_X. The same operation is performed on S_Y to obtain y(S_Y).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We train model parameters W c and W s by minimizing the following ranking loss function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "L = max{0, M \u2212 cosine(y(S_X), y(S_Y)) + cosine(y(S_X), y(S_Y^\u2212))}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where M is a constant and S_Y^\u2212 is a negative instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence-level Feature", |
| "sec_num": "4.3" |
| }, |
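A sketch of this margin-based ranking loss for one training triple; the margin value passed in is an assumption, and the embeddings are plain vectors rather than CNN outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity of two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def ranking_loss(y_x, y_pos, y_neg, margin=0.5):
    """L = max{0, M - cos(y(S_X), y(S_Y)) + cos(y(S_X), y(S_Y^-))}."""
    return max(0.0, margin - cosine(y_x, y_pos) + cosine(y_x, y_neg))
```

The loss is zero once the positive pair beats the negative pair by at least the margin, which is what pushes paired sentences together in embedding space.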
| { |
| "text": "We train the first attention-based sentence embedding model based on a set of 'question-answer' pairs as input sentence pairs, and then design a causality relationship-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Causality Relationship Modeling", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "h_SCR(S, Q) = cosine(y_SCR(S), y_SCR(Q))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Causality Relationship Modeling", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "y_SCR(S) and y_SCR(Q) denote the sentence embeddings of S and Q respectively. We expect this feature to capture the causality relationship between questions and their corresponding answers, and to work well on question-like utterances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Causality Relationship Modeling", |
| "sec_num": "4.3.1" |
| }, |
| { |
| "text": "We train the second attention-based sentence embedding model based on a set of 'sentence-next sentence' pairs as input sentence pairs, and then design a discourse relationship-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Relationship Modeling", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "h_SDR(S, Q) = cosine(y_SDR(S), y_SDR(Q))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Relationship Modeling", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "y_SDR(S) and y_SDR(Q) denote the sentence embeddings of S and Q respectively. We expect this feature to capture the discourse relationship between sentences and their next sentences, and to work well on statement-like utterances. Here, a large number of 'sentence-next sentence' pairs can easily be obtained from documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Relationship Modeling", |
| "sec_num": "4.3.2" |
| }, |
| { |
| "text": "We take document-level information into consideration to measure the semantic similarity between Q and S, and define two context features as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document-level Feature", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "h_DM(S*, Q) = cosine(y_SCR(S*), y_SCR(Q))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document-level Feature", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "where S* can be S_prev or S_next, which denote the previous and next sentences of S in the original document. The sentence embedding model trained on 'question-answer' pairs (in Section 4.3.1) is used directly to generate context embeddings for h_DM(S_prev, Q) and h_DM(S_next, Q), so no further training data is needed for this feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document-level Feature", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Given a structured knowledge base, such as Freebase, a single-relation question Q (in natural language) together with its answer can first be parsed into a fact formatted as \u27e8e_sbj, rel, e_obj\u27e9, where e_sbj denotes a subject entity detected from the question, rel denotes the relationship expressed by the question, and e_obj denotes an object entity found in the knowledge base based on e_sbj and rel. From such facts we can obtain \u27e8Q, rel\u27e9 pairs. This rel can help model the semantic relationship between Q and R. For example, the Q-A pair 'What does Jimmy Neutron do? \u2212 inventor' can be parsed into \u27e8Jimmy Neutron, fictional character occupation, inventor\u27e9, where the rel is fictional character occupation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Similar to Yih et al. (2014), we use \u27e8Q, rel\u27e9 pairs as training data to learn a rel-CNN model, which can encode each question Q (or each relation rel) into a relation embedding. For a given question Q, the corresponding relation rel\u207a is treated as a positive example, and other, randomly selected relations are used as negative examples rel\u207b. The posterior probability of rel\u207a given Q is computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "P(rel\u207a|Q) = e^{cosine(y(rel\u207a), y(Q))} / \u2211_{rel\u207b} e^{cosine(y(rel\u207b), y(Q))}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "y(rel) and y(Q) denote the relation embeddings of rel and Q based on rel-CNN. rel-CNN is trained by maximizing the log-posterior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
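The training objective can be sketched as follows, given precomputed cosine similarities between a question's embedding and candidate relation embeddings. Including the positive relation in the softmax normalizer is an assumption (the displayed formula sums only over the sampled negatives):

```python
import math

def log_posterior(sim_pos, sim_negs):
    """log P(rel+ | Q) as a softmax over cosine similarities, where the
    normalizer ranges over the positive and the sampled negative relations."""
    z = math.exp(sim_pos) + sum(math.exp(s) for s in sim_negs)
    return sim_pos - math.log(z)
```

Training maximizes this quantity, pushing y(Q) toward y(rel⁺) and away from the sampled y(rel⁻) embeddings.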
| { |
| "text": "We then define a relation-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "h_RE(S, Q) = cosine(y_RE(Q), y_RE(S))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "y_RE(S) and y_RE(Q) denote the relation embeddings of S and Q respectively, coming from rel-CNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation-level Feature", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "We extend each Q, e sbj , rel, e obj in the Sim-pleQuestions data set to Q, e sbj , rel, e obj , type , where type denotes the type name of e obj based on Freebase. Thus, we obtain Q, type pairs. Similar to rel-CNN, we use Q, type pairs to train another CNN model, denoted as type-CNN. Based on which, we define a type-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-level Feature", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "h T E (S, Q) = cosine(y T E (Q), y T E (S))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-level Feature", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "y T E (S) and y T E (Q) denote type embeddings of S and Q respectively, coming from type-CNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Type-level Feature", |
| "sec_num": "4.6" |
| }, |
| { |
| "text": "Based on the assumption that a Q-R pair should share a similar topic distribution, we define an unsupervised topic model-based feature h U T M as the average cosine distance between the topic vectors of all non-stopword pairs \u27e8v S j , v Q i \u27e9, where v w = [p(t 1 |w), ..., p(t N |w)] T denotes the topic vector of a given word w. Given a corpus, various topic modeling methods, such as pLSI (probabilistic latent semantic indexing) and LDA (latent Dirichlet allocation), can be used to estimate p(t i |w), the probability that w belongs to a topic t i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Topic Model", |
| "sec_num": "4.7.1" |
| }, |
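The feature is an average of pairwise cosine scores over word topic vectors. A minimal sketch, assuming the per-word topic distributions p(t|w) have already been estimated (e.g., by LDA) and are supplied as a plain dictionary; the helper names are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def h_utm(sent_words, query_words, topic_vec, stopwords):
    """Average cosine score between topic vectors of all
    (sentence word, query word) non-stopword pairs.
    topic_vec maps a word w to its topic vector [p(t_1|w), ..., p(t_N|w)]."""
    pairs = [(s, q)
             for s in sent_words if s not in stopwords and s in topic_vec
             for q in query_words if q not in stopwords and q in topic_vec]
    if not pairs:
        return 0.0
    return sum(cosine(topic_vec[s], topic_vec[q]) for s, q in pairs) / len(pairs)
```

Word pairs drawn from the same topics score close to 1, so a response whose vocabulary shares the utterance's topic distribution receives a higher feature value.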
| { |
| "text": "One shortcoming of the unsupervised topic model is that, the topic size is pre-defined, which might not reflect the truth on a specific corpus. In this paper, we explore a supervised topic model approach as well, based on 'sentence-topic' pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Topic Model", |
| "sec_num": "4.7.2" |
| }, |
| { |
| "text": "We crawl a large number of S, topic pairs from Wikipedia documents, where S denotes a sentence, topic denotes the content name of the section that S extracted from. Such content names are labeled by Wikipedia article editors, and can be found in the Contents fields.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Topic Model", |
| "sec_num": "4.7.2" |
| }, |
| { |
| "text": "Similar to rel-CNN and type-CNN, we use the S, topic pairs to train another CNN model, denoted as topic-CNN. Based on which, we define a supervised topic model-based feature as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Topic Model", |
| "sec_num": "4.7.2" |
| }, |
| { |
| "text": "h ST M (S, Q) = cosine(y ST M (S), y ST M (Q))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Topic Model", |
| "sec_num": "4.7.2" |
| }, |
| { |
| "text": "y ST M (S) and y ST M (Q) denote topic embeddings of S and Q respectively, coming from topic-CNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Supervised Topic Model", |
| "sec_num": "4.7.2" |
| }, |
| { |
| "text": "We employ a regression-based learning to rank method (Nallapati, 2004) to train response ranking model, based on a set of labeled Q, C pairs, Feature weights in the ranking model are trained by SGD based on the training data that consists of a set of Q, C pairs, where Q denotes a user utterance and C denotes a set of response candidates. Each candidate S in C is labeled by + or \u2212, which indicates whether S is a suitable response of Q (+), or not (\u2212).", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 70, |
| "text": "(Nallapati, 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to Ranking Model", |
| "sec_num": "4.8" |
| }, |
| { |
| "text": "As manually labeled data, such as WikiQA (Yang et al., 2015) , needs expensive human annotation effort, we propose an automatic way to collect training data. First, 'question-answer' (or Q-A) pairs {Q i , A i } M i=1 are crawled from community QA websites. Q i denotes a question. A i denotes Q i 's answer, which includes one or more sentences A i = {s 1 , ..., s K }. Then, we index answer sentences of all questions. Next, for each question Q i , we run response retrieval to obtain answer sentence candidates C i = {s 1 , ..., s N }. Last, if we know the correct answer sentences of each question Q i , we can then label each candidate in C i as + or \u2212. In experiments, manually labeled data (WikiQA) is used in open domain question answering scenario, and automatically generated data is used in chatbot scenario.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 60, |
| "text": "(Yang et al., 2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to Ranking Model", |
| "sec_num": "4.8" |
| }, |
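The automatic data collection loop above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `retrieve` is a hypothetical stand-in for the response retrieval step over the indexed answer sentences, and a candidate is labeled "+" when it belongs to the question's own crawled answer:

```python
def build_training_pairs(qa_pairs, retrieve, top_n=10):
    """Automatically construct labeled (Q, C) pairs from community QA data:
    for each question, retrieve answer-sentence candidates from the index
    and label a candidate '+' iff it appears in that question's own answer.

    qa_pairs: list of (question, list_of_answer_sentences)
    retrieve: callable (question, n) -> list of candidate sentences
    """
    training = []
    for question, answer_sentences in qa_pairs:
        gold = set(answer_sentences)               # the question's own answer
        candidates = retrieve(question, top_n)     # response retrieval step
        labeled = [(s, "+" if s in gold else "-") for s in candidates]
        training.append((question, labeled))
    return training
```

The resulting (Q, C) pairs with +/- labels feed directly into SGD training of the feature weights, replacing manual annotation.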
| { |
| "text": "There are two types of utterances, chit-chat utterances and informative utterances. The former should be handled by chit-chat engines, and the latter is more suitable to our work, as documents usually contain formal and informative contents. Thus, we have to respond to informative utterances only. Response retrieval cannot always guarantee to return a candidate set that contains at least one suitable response, but response ranking will output the best possible candidate all the time. So, we have to decide which responses are confident enough to be output, and which are not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this paper, we define response triggering as a function that decides whether a response candidate S has enough confidence to be output:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "I = T rigger(S, Q) = I U (Q) \u2227 I Rank (S, Q) \u2227 I R (S)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "where T rigger(Q, S) returns true, if and only if all its three sub-functions return true.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "I U (Q) returns true, if Q is an informative query. We collect and label chit-chat queries based on conversational exchanges from social media websites to train the classifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "I Rank (S, Q) returns true if the score s(S, Q) exceeds an empirical threshold \u03c4 : s(S, Q) = 1 / (1 + e \u2212\u03b1\u2022Rank(S,Q) ), where \u03b1 is a scaling factor that controls whether the distribution of s(\u2022) is smooth or sharp. Both \u03b1 and \u03c4 are selected on a separate development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
| { |
| "text": "I R (S) returns true, if (i) the length of S is less than a pre-defined threshold, and (ii) S does not start with a phrase that expresses a progressive relation, such as but also, besides, moreover and etc., as the contents of sentences starting with such phrases usually depend on their context sentences, and they are not suitable for responses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Response Triggering", |
| "sec_num": "5" |
| }, |
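The three triggering conditions combine as a conjunction. A minimal sketch under stated assumptions: the informative-query classifier is passed in as a boolean, the phrase list and length threshold are illustrative placeholders (the paper does not specify them), and alpha/tau use the values the authors later tune on WikiQA's development set:

```python
import math

# Illustrative list only; the paper's actual phrase list is not given.
PROGRESSIVE_PREFIXES = ("but also", "besides", "moreover")

def trigger(sentence, query, rank_score, is_informative,
            alpha=0.9, tau=0.5, max_len=200):
    """Response triggering: output the candidate only if all three
    sub-conditions hold. rank_score is the raw response ranking score;
    max_len is a hypothetical length threshold."""
    i_u = is_informative                       # I_U(Q): informative-query check
    s = 1.0 / (1.0 + math.exp(-alpha * rank_score))
    i_rank = s > tau                           # I_Rank(S, Q): confidence check
    i_r = (len(sentence) < max_len and         # I_R(S): length and no
           not sentence.lower().startswith(PROGRESSIVE_PREFIXES))  # progressive opener
    return i_u and i_rank and i_r
```

Because the three checks are independent, any single failure (chit-chat query, low-confidence ranking score, or a context-dependent sentence) suppresses the response.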
| { |
| "text": "For modeling dialogue. Previous works mainly focused on rule-based or learning-based approaches (Litman et al., 2000; Schatzmann et al., 2006; Williams and Young, 2007) . These methods require efforts on designing rules or labeling data for training, which suffer the coverage issue.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 117, |
| "text": "(Litman et al., 2000;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 118, |
| "end": 142, |
| "text": "Schatzmann et al., 2006;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 143, |
| "end": 168, |
| "text": "Williams and Young, 2007)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For short text conversation. With the fast development of social media, such as microblog and CQA services, large scale conversation data and data-driven approaches become possible. Ritter et al. (2011) proposed an SMT based method, which treats response generation as a machine translation task. Shang et al. (2015) presented an RNN based method, which is trained based on a large number of single round conversation data. Grammatical and fluency problems are the biggest issue for such generation-based approaches. Retrievalbased methods selects the most suitable response to the current utterance from the large number of Q-R pairs. Ji et al. (2014) built a conversation system using learning to rank and semantic matching techniques. However, collecting enough Q-R pairs to build chatbots is often intractable for many domains. Compared to previous methods, DocChat learns internal relationships between utterances and responses based on statistical models at different levels of granularity, and relax the dependency on Q-R pairs as response sources. These make DocChat as a general response generation solution to chatbots, with high adaptation capability.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 202, |
| "text": "Ritter et al. (2011)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 297, |
| "end": 316, |
| "text": "Shang et al. (2015)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 636, |
| "end": 652, |
| "text": "Ji et al. (2014)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For answer sentence selection. Prior work in measuring the relevance between question and answer is mainly in word-level and syntactic-level (Wang and Manning, 2010; Heilman and Smith, 2010; Yih et al., 2013) . Learning representation by neural network architecture (Yu et al., 2014; Wang and Nyberg, 2015; Severyn and Moschitti, 2015) has become a hot research topic to go beyond word-level or phrase-level methods. Compared to previous works we find that, (i) Large scale existing resources with noise have more advantages as training data. (ii) Knowledge-based semantic models can play important roles.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 165, |
| "text": "(Wang and Manning, 2010;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 166, |
| "end": 190, |
| "text": "Heilman and Smith, 2010;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 191, |
| "end": 208, |
| "text": "Yih et al., 2013)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 266, |
| "end": 283, |
| "text": "(Yu et al., 2014;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 284, |
| "end": 306, |
| "text": "Wang and Nyberg, 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 307, |
| "end": 335, |
| "text": "Severyn and Moschitti, 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Take into account response ranking task and answer selection task are similar, we first evaluate DocChat in a QA scenario as a simulation. Here, response ranking is treated as the answer selection task, and response triggering is treated as the answer triggering task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation on QA (English)", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We select WikiQA 6 as the evaluation data, as it is precisely constructed based on natural language questions and Wikipedia documents, which contains 2,118 'question-document' pairs in the training set, 296 'question-document' pairs in development set, and 633 'question-document' pairs in testing set. Each sentence in the document of a given question is labeled as 1 or 0, where 1 denotes the current sentence is a correct answer sentence, and 0 denotes the opposite meaning. Given a question, the task of WikiQA is to select answer sentences from all sentences in a question's corresponding document. The training data settings of response ranking features are described below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "F w denotes 3 word-level features, h W M , h W 2W and h W 2V . For h W 2W , GIZA++ is used to train word alignments on 11.6M 'question-related question' pairs (Fader et al., 2013) crawled from WikiAnswers. 7 . For h W 2V , Word2Vec (Mikolov et al., 2013) is used to train word embedding on sentences from Wikipedia in English.", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 179, |
| "text": "(Fader et al., 2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 232, |
| "end": 254, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "F p denotes 2 phrase-level features, h P P and h P T . For h P P , bilingual data 8 is used to extract a phrase-based translation table (Koehn et al., 2003) , from which paraphrases are extracted (Section 4.2.1). For h P T , GIZA++ trains word alignments on 4M 'question-answer' pairs 9 crawled from Yahoo Answers 10 , and then a phrase table is extracted from word alignments using the intersect-diag-grow refinement.", |
| "cite_spans": [ |
| { |
| "start": 136, |
| "end": 156, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "F s denotes 2 sentence-level features, h SCR and h SDR . For h SCR , 4M 'question-answer' pairs (the same to h P T ) is used to train the CNN model. For h SDR , we randomly select 0.5M 'sentence-next sentence' pairs from English Wikipedia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "F d denotes the document-level feature h DM . Here, we did not train a new model; instead, we reuse the CNN model used in h SCR .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "F r and F ty denote relation-level feature h RE and type-level feature h T E . Bordes et al. (2015) released the SimpleQuestions data set 11 , which consists of 108,442 English questions. Each question (e.g., What does Jimmy Neutron do?) is written by human annotators based on a triple in Freebase which formatted as e sbj , rel, e obj (e.g., Jimmy Neutron, fictional character occupation, inventor ) Here, as described in Section 4.5 and 4.6, 'question-relation' pairs and 'question-type' pairs based upon SimpleQuestions data set are used to train h RE and h T E . F to denotes 2 topic-level features, h U T M and h ST M . For h U T M , we run LightLDA (Yuan et al., 2015) on sentences from English Wikipedia, where the topic is set to 1,000. For h ST M , 4M 'sentence-topic' pairs are extracted from English Wikipedia (Section 4.7.2), where the most frequent 25,000 content names are used as topics. ", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 99, |
| "text": "Bordes et al. (2015)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 656, |
| "end": 675, |
| "text": "(Yuan et al., 2015)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.1.1" |
| }, |
| { |
| "text": "The performance of answer selection is evaluated by Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). Among all 'questiondocument' pairs in WikiQA, only one-third of documents contain answer sentences to their corresponding questions. Similar to previous work, questions without correct answers in the candidate sentences are not taken into account. We first evaluate the impact of features at each level, and show results in Table 1 . F w , F p , and F s perform best among all features, which makes sense, as they can capture lexical features. F r and F ty perform not very good, but make sense, as the training data (i.e. SimpleQuestions) are based on Freebase instead of Wikipedia. Interestingly, we find that F to and F d can achieve comparable results as well. We think the reason is that, their training data come from Wikipedia, which fit the WikiQA task very well.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 437, |
| "end": 444, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Answer Selection (AS)", |
| "sec_num": "7.1.2" |
| }, |
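MAP and MRR over per-question ranked candidate lists can be computed as follows. This is a standard sketch of the two metrics (function names are illustrative), with questions that have no correct answer skipped, matching the evaluation convention stated above:

```python
def average_precision(labels):
    """labels: 1/0 relevance of candidates in ranked order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)   # precision at each hit
    return sum(precisions) / hits if hits else 0.0

def reciprocal_rank(labels):
    """1 / rank of the first relevant candidate, 0 if none."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_label_lists):
    """Mean AP and mean RR over questions that have at least one
    correct answer; the rest are excluded from the average."""
    kept = [labels for labels in ranked_label_lists if any(labels)]
    n = len(kept)
    return (sum(average_precision(l) for l in kept) / n,
            sum(reciprocal_rank(l) for l in kept) / n)
```

MAP rewards ranking all correct sentences highly, while MRR only looks at the position of the first correct one, so the two can diverge when a question has several answer sentences.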
| { |
| "text": "We evaluate the quality of DocChat on Wik-iQA, and show results in Table 2 . The first four rows in Table 2 represent four baseline methods, including: (1) Yih et al. 2013, which makes use of rich lexical semantic features; (2) Yang et al. (2015) , which uses a bi-gram CNN model with average pooling; (3) Miao et al. (2015) , which uses an enriched LSTM with a latent stochastic attention mechanism to model similarity between Q-R pairs; and (4) Yin et al. (2015) , which adds the attention mechanism to the CNN architecture. formance with state-of-the-art baselines. Furthermore, by combining the CNN model proposed by Yang et al. (2015) and trained on WikiQA training set, we achieve the best result on both metrics. Compared to previous methods, we think Doc-Chat has the following two advantages: First, our feature models depending on existing resources are readily available (such as Q-Q pairs, Q-A pairs, 'sentence-next sentence' pairs, and etc.), instead of requiring manually annotated data (such as WikiQA and QASent). Training of the response ranking model does need labeled data, but the size demanded is acceptable. Second, as the training data used in our approach come from open domain resources, we can expect a high adaptation capability and comparable results on other WikiQAlike tasks, as our models are task-independent.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 246, |
| "text": "Yang et al. (2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 306, |
| "end": 324, |
| "text": "Miao et al. (2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 447, |
| "end": 464, |
| "text": "Yin et al. (2015)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 621, |
| "end": 639, |
| "text": "Yang et al. (2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 74, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 100, |
| "end": 107, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Answer Selection (AS)", |
| "sec_num": "7.1.2" |
| }, |
| { |
| "text": "To verify the second advantage, we evaluate DocChat on another answer selection data set, QASent (Wang et al., 2007) , and list results in Table 3 . CN N W ikiQA and CN NQASent refer to the results of Yang et al. (2015) 's method, where the CNN models are trained on WikiQA's training set and QASent's training set respectively. All these three methods train feature weights using QASent's development set. Table 3 tells, DocChat outperforms CN N W ikiQA in terms of MAP and MRR, and achieves comparable results compared to CN NQASent. The comparisons results show a good adaptation capability of DocChat. Table 4 evaluates the contributions of features at different levels of granularity. To highlight the differences, we report the percent deviation by removing different features at the same level from DocChat. From Table 4 we can see that, 1) Each feature group is indispensable to DocChat; 2) Features at sentence-level are most important than other feature groups; 3) Compared to results in Table 1 , combining all features can significantly promote the performance.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 116, |
| "text": "(Wang et al., 2007)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 201, |
| "end": 219, |
| "text": "Yang et al. (2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 146, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 407, |
| "end": 414, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 606, |
| "end": 613, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 820, |
| "end": 827, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| }, |
| { |
| "start": 998, |
| "end": 1005, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Answer Selection (AS)", |
| "sec_num": "7.1.2" |
| }, |
| { |
| "text": "In both QA and chatbot, response triggering is important. Similar to Yang et al. (2015) , we also evaluate answer triggering using Precision, Recall, and F1 score as metrics. We use the WikiQA de- velopment set to tune the scaling factor \u03b1 and trigger threshold \u03c4 that are described in Section 5, where \u03b1 is set to 0.9 and \u03c4 is set to 0.5. Table 5 shows the evaluation results compare to Yang et al. (2015) . We think the improvements come from the fact that our response ranking model are more discriminative, as more semantic-level features are leveraged.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 87, |
| "text": "Yang et al. (2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 388, |
| "end": 406, |
| "text": "Yang et al. (2015)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 340, |
| "end": 347, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation of Answer Triggering (AT)", |
| "sec_num": "7.1.3" |
| }, |
| { |
| "text": "XiaoIce is a famous Chinese chatbot engine, which can be found in many platforms including WeChat official accounts (like business pages on Facebook Messenger). The documents that each official account maintains and post to their followers can be easily obtained from the Web. Meanwhile, a WeChat official account can choose to authorize XiaoIce to respond to its followers' utterances. We design an interesting evaluation below to compare DocChat with XiaoIce, based on the publicly available documents. (Beijing is a historical city that can be traced back to 3,000 years ago.) Table 6 : XiaoIce response is more colloquial, as it comes from Q-R pairs; while DocChat response is more formal, as it comes from documents.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 580, |
| "end": 587, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation on Chatbot (Chinese)", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "h ST M . As there is no knowledge base based labeled data for Chinese, we ignore relation-level feature h RE and type-level feature h T E . For ranking weights, we generate 90,321 Q, C pairs based on Baidu Zhidao Q-A pairs by the automatic method described in Section 4.8. This data set is used to train the learning to rank model feature weights {\u03bb k } by SGD.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.2.1" |
| }, |
| { |
| "text": "For documents, we randomly select 10 WeChat official accounts, and index their documents separately. The average number of documents is 600.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.2.1" |
| }, |
| { |
| "text": "Human annotators are asked to freely issue 100 queries to each official account to get XiaoIce responses. Thus, we obtain 100 \u27e8query, XiaoIce response\u27e9 pairs for each official account. We also send the same 100 queries of each official account to DocChat, based on the official account's corresponding document index, and obtain another 100 \u27e8query, DocChat response\u27e9 pairs. Given these 1,000 \u27e8query, XiaoIce response, DocChat response\u27e9 triples, we ask human annotators to perform a side-by-side evaluation, judging which response is better for each query. Note that the source of each response is masked during the evaluation procedure. Table 6 gives an example.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7.2.1" |
| }, |
| { |
| "text": "Table 7 (Chatbot side-by-side evaluation; compared to XiaoIce: Better 58, Worse 47, Tie 51) shows the results. Better (or Worse) denotes that a DocChat response is better (or worse) than a XiaoIce response; Tie denotes that the two responses are equally good or bad. From Table 7 we observe that: (1) 156 DocChat responses (58+47+51) out of 1,000 queries are triggered, i.e., a trigger rate of 15.6%. Checking the un-triggered queries, we find that most of them are chitchat, such as \"hi\", \"hello\", and \"who are you?\". (2) Better cases outnumber worse cases. Most queries in the better cases are non-chitchat ones, and their contents are highly related to the domain of the corresponding WeChat official accounts. (3) Our proposed method is a perfect complement for chitchat engines on informative utterances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DocChat v.s. XiaoIce", |
| "sec_num": "7.2.2" |
| }, |
| { |
| "text": "The reasons for the bad cases are two-fold: First, a DocChat response may overlap with a query lexically yet fail to actually respond to it; for this issue, we need to refine the capability of our response ranking model in measuring causality relationships. Second, we may wrongly send a chitchat query to DocChat, as currently we only use a white list of chitchat queries for chitchat/non-chitchat classification (Section 5).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DocChat v.s. XiaoIce", |
| "sec_num": "7.2.2" |
| }, |
| { |
| "text": "This paper presents a response retrieval method for chatbot engines based on unstructured documents. We evaluate our method on both question answering and chatbot scenarios, and obtain promising results. We leave better triggering component and multiple rounds of conversation handling to be addressed in our future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "For convenience sake, we denote all utterance-response pairs (either QA pairs or conversational exchanges from social media websites like Twitter) as Q-R pairs in this paper.2 http://www.msxiaoice.com", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We omit lexical weights that are commonly used in phrase tables, as they are not useful in paraphrase extraction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, m is set to 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.freebase.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://aka.ms/WikiQA", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://wiki.answers.com8 We use 0.5M Chinese-English bilingual sentences in phrase table extraction, i.e., LDC2003E07, LDC2003E14, LDC2005T06, LDC2005T10, LDC2005E83, LDC2006E26, LDC2006E34, LDC2006E85 and LDC2006E92.9 For each question, we only select the first sentence in its answer to construct a 'question-answer' pair, as it contains more causality information than sentences in other positions.10 https://answers.yahoo.com 11 https://research.facebook.com/research/-babi/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Paraphrasing with bilingual parallel corpora", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Bannard", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "597--604", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Pro- ceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 597-604.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Large-scale simple question answering with memory networks", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Usunier", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1506.02075" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| { |
| "first": "Vincent J Della", |
| "middle": [], |
| "last": "Peter F Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen A Della", |
| "middle": [], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert L", |
| "middle": [], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Paraphrase-driven learning for open question answering", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Heilman", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "1011--1019", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1011-1019.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "An information retrieval approach to short text conversation", |
| "authors": [ |
| { |
| "first": "Zongcheng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1408.6988" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A probabilistic model of information retrieval: development and comparative experiments: Part 2", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "Sparck" |
| ], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "E" |
| ], |
| "last": "Robertson", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Information Processing & Management", |
| "volume": "36", |
| "issue": "", |
| "pages": "809--840", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K Sparck Jones, Steve Walker, and Stephen E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments: Part 2. Information Processing & Management, 36(6):809-840.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Statistical phrase-based translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
| "volume": "1", |
| "issue": "", |
| "pages": "48--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 1:48-54.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "NJFun: a reinforcement learning spoken dialogue system", |
| "authors": [ |
| { |
| "first": "Diane", |
| "middle": [], |
| "last": "Litman", |
| "suffix": "" |
| }, |
| { |
| "first": "Satinder", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Kearns", |
| "suffix": "" |
| }, |
| { |
| "first": "Marilyn", |
| "middle": [], |
| "last": "Walker", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems", |
| "volume": "3", |
| "issue": "", |
| "pages": "17--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diane Litman, Satinder Singh, Michael Kearns, and Marilyn Walker. 2000. NJFun: a reinforcement learning spoken dialogue system. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3, pages 17-20.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural variational inference for text processing", |
| "authors": [ |
| { |
| "first": "Yishu", |
| "middle": [], |
| "last": "Miao", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.06038" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yishu Miao, Lei Yu, and Phil Blunsom. 2015. Neural variational inference for text processing. arXiv preprint arXiv:1511.06038.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (NIPS), pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Discriminative models for information retrieval", |
| "authors": [ |
| { |
| "first": "Ramesh", |
| "middle": [], |
| "last": "Nallapati", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the international ACM SIGIR conference on Research and development in information retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "64--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramesh Nallapati. 2004. Discriminative models for information retrieval. In Proceedings of the international ACM SIGIR conference on Research and development in information retrieval, pages 64-71.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A systematic comparison of various statistical alignment models", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "1", |
| "pages": "19--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Data-driven response generation in social media", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "William B", |
| "middle": [], |
| "last": "Dolan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "583--593", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the conference on Empirical Methods in Natural Language Processing (EMNLP), pages 583-593.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review", |
| "authors": [ |
| { |
| "first": "Jost", |
| "middle": [], |
| "last": "Schatzmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Weilhammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Stuttle", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "21", |
| "issue": "", |
| "pages": "97--126", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(02):97-126.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning to rank short text pairs with convolutional deep neural networks", |
| "authors": [ |
| { |
| "first": "Aliaksei", |
| "middle": [], |
| "last": "Severyn", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "373--382", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373-382.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Neural responding machine for short-text conversation", |
| "authors": [ |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Shang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "1577--1586", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 1577-1586.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Probabilistic tree-edit models with structured latent variables for textual entailment and question answering", |
| "authors": [ |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", |
| "volume": "", |
| "issue": "", |
| "pages": "1164--1172", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mengqiu Wang and Christopher D Manning. 2010. Probabilistic tree-edit models with structured latent variables for textual entailment and question answering. In Proceedings of the International Conference on Computational Linguistics (COLING), pages 1164-1172.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A long short-term memory model for answer sentence selection in question answering", |
| "authors": [ |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Nyberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "707--712", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Di Wang and Eric Nyberg. 2015. A long short-term memory model for answer sentence selection in question answering. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 707-712.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "What is the Jeopardy model? A quasi-synchronous grammar for QA", |
| "authors": [ |
| { |
| "first": "Mengqiu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Teruko", |
| "middle": [], |
| "last": "Mitamura", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "7", |
| "issue": "", |
| "pages": "22--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 7, pages 22-32.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Partially observable markov decision processes for spoken dialog systems", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [ |
| "D" |
| ], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computer Speech & Language", |
| "volume": "21", |
| "issue": "2", |
| "pages": "393--422", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason D Williams and Steve Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393-422.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "WikiQA: A challenge dataset for open-domain question answering", |
| "authors": [ |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Meek", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "2013--2018", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2013-2018.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Question answering using enhanced lexical semantic models", |
| "authors": [ |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Meek", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrzej", |
| "middle": [], |
| "last": "Pastusiak", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "1744--1753", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 1744-1753.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Semantic parsing for single-relation question answering", |
| "authors": [ |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Meek", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "643--648", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL), pages 643-648.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "ABCNN: Attention-based convolutional neural network for modeling sentence pairs", |
| "authors": [ |
| { |
| "first": "Wenpeng", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bowen", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1512.05193" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2015. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Deep learning for answer sentence selection", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Karl", |
| "middle": [ |
| "Moritz" |
| ], |
| "last": "Hermann", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Pulman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "NIPS Deep Learning and Representation Learning Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. NIPS Deep Learning and Representation Learning Workshop.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "LightLDA: Big topic models on modest computer clusters", |
| "authors": [ |
| { |
| "first": "Jinhui", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qirong", |
| "middle": [], |
| "last": "Ho", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinliang", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Xun", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "Po" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| }, |
| { |
| "first": "Tie-Yan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Ying", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Annual International Conference on World Wide Web (WWW)", |
| "volume": "", |
| "issue": "", |
| "pages": "1351--1361", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric Po Xing, Tie-Yan Liu, and Wei-Ying Ma. 2015. LightLDA: Big topic models on modest computer clusters. In Proceedings of the Annual International Conference on World Wide Web (WWW), pages 1351-1361.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "content": "<table><tr><td>#</td><td>Methods</td><td>MAP</td><td>MRR</td></tr><tr><td>(1)</td><td>Yih et al. (2013)</td><td>59.93%</td><td>60.68%</td></tr><tr><td colspan=\"3\">(2) Yang et al. (2015) 65.20%</td><td>66.52%</td></tr><tr><td colspan=\"3\">(3) Miao et al. (2015) 68.86%</td><td>70.69%</td></tr><tr><td>(4)</td><td>Yin et al. (2015)</td><td>69.21%</td><td>71.08%</td></tr><tr><td>(5)</td><td>DocChat</td><td>68.25%</td><td>70.73%</td></tr><tr><td>(6)</td><td>DocChat+(2)</td><td colspan=\"2\">70.08% 72.22%</td></tr></table>", |
| "text": "Impacts of features at different levels.", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "text": "Evaluation of AS task on WikiQA.", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>shows that, without using WikiQA's</td></tr><tr><td>training set (only development set for ranking</td></tr><tr><td>weights), DocChat can achieve comparable per-</td></tr></table>", |
| "text": "", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "text": "Evaluation of AS on QASent.", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td>Methods</td><td colspan=\"2\">Precision Recall</td><td>F1</td></tr><tr><td>Yang et al. (2015)</td><td>28.34</td><td>35.80</td><td>31.64</td></tr><tr><td>DocChat</td><td>28.95</td><td>44.44</td><td>35.06</td></tr></table>", |
| "text": "Impacts of different feature groups.", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "text": "Evaluation of AT on WikiQA.", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "content": "<table><tr><td>Utterance</td><td>Response</td></tr><tr><td>(Do you know the history of Beijing?)</td><td>[XiaoIce Response]: (I am not good at history class) [DocChat Response]: \u00ae{\u00a4a\u00c8 \u00a7OE\u00b1\u0134 3000cc</td></tr></table>", |
| "text": "For ranking features, 17M 'question-related questions' pairs crawled from Baidu Zhidao are used to train word alignments for h_W2W; sentences from Chinese Wikipedia are used to train word embeddings for h_W2V and a topic model for h_UTM; the same bilingual phrase table described in the last experiment is also used to extract a Chinese paraphrase table for h_PP, which uses Chinese as the source language; 5M 'question-answer' pairs crawled from Baidu Zhidao are used for h_PT, h_SCR and h_DM; 0.5M 'sentence-next sentence' pairs from Chinese Wikipedia are used for h_SDR; 1.3M 'sentence-topic' pairs crawled from Chinese Wikipedia are used to train topic-CNN for", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |