{
"paper_id": "C18-1038",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:10:09.835218Z"
},
"title": "One-shot Learning for Question-Answering in Gaokao History Challenge",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": "zhangzs@sjtu.edu.cn"
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": "zhaohai@cs.sjtu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task, since it requires effective representations that capture the complicated semantic relations between questions and answers. In this work, we propose a hybrid neural model for the deep question-answering task on history examinations. Our model employs a cooperative gated neural network to retrieve answers with the assistance of extra labels given by a neural Turing machine labeler. Empirical study shows that the labeler works well with only a small training dataset, and that the gated mechanism is good at capturing the semantic representation of lengthy answers. Experiments on question answering demonstrate that the proposed model obtains substantial performance gains over various neural baselines in terms of multiple evaluation metrics.",
"pdf_parse": {
"paper_id": "C18-1038",
"_pdf_hash": "",
"abstract": [
{
"text": "Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task, since it requires effective representations that capture the complicated semantic relations between questions and answers. In this work, we propose a hybrid neural model for the deep question-answering task on history examinations. Our model employs a cooperative gated neural network to retrieve answers with the assistance of extra labels given by a neural Turing machine labeler. Empirical study shows that the labeler works well with only a small training dataset, and that the gated mechanism is good at capturing the semantic representation of lengthy answers. Experiments on question answering demonstrate that the proposed model obtains substantial performance gains over various neural baselines in terms of multiple evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Teaching a machine to pass admission exams is a popular AI challenge that has attracted a growing body of research (Guo et al., 2017; Cheng et al., 2016; Clark, 2015; Fujita et al., 2014; Henaff et al., 2016) . Among these studies, the deep question-answering (QA) task (Ferrucci et al., 2010; Wang et al., 2014) is especially difficult, as it aims to answer complex questions via deep feature learning from multi-source datasets. Recently, a highly challenging deep QA task has arisen from the Chinese university admission examination (Gaokao in Chinese), a nationwide standard examination for all senior middle school students in China, known for its large scale and strictness. This work focuses on comprehensive question-answering in Gaokao history exams as shown in Table 1 1 , which accounts for a major proportion of the total score and is extremely difficult. (Cheng et al., 2016) made a preliminary attempt to take up the Gaokao challenge, trying to solve multiple-choice questions by retrieving and ranking evidence from Wikipedia articles to determine the right choices. In contrast, our task is to solve comprehensive questions, and has to rely on knowledge representation and semantic computation rather than the word-form matching of previous work.",
"cite_spans": [
{
"start": 112,
"end": 130,
"text": "(Guo et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 131,
"end": 150,
"text": "Cheng et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 151,
"end": 163,
"text": "Clark, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 164,
"end": 184,
"text": "Fujita et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 185,
"end": 205,
"text": "Henaff et al., 2016)",
"ref_id": "BIBREF15"
},
{
"start": 263,
"end": 286,
"text": "(Ferrucci et al., 2010;",
"ref_id": "BIBREF10"
},
{
"start": 287,
"end": 305,
"text": "Wang et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 884,
"end": 903,
"text": "(Cheng et al., 2016",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 774,
"end": 781,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although deep learning methods shine at various natural language processing tasks (Zhang et al., 2016b; Qin et al., 2017; Zhang et al., 2018b; Zhang et al., 2018a; Zhang et al., 2018c), they usually rely on large-scale datasets for effective learning. The concerned task, unfortunately, cannot receive sufficient training data under ordinary circumstances. Different from previous typical QA tasks such as community QA, which enjoys the advantage of a very large set of known QA pairs, the concerned task amounts to retrieving a proper answer from textbooks organized as plain text, guided by a very limited number of known QA pairs. In addition, the questions are usually posed in a quite indirect way, asking students to uncover the exact expected perspective on the facts concerned. If this perspective fails to fall into the feature representation of either the question or the answer, the retrieval will hardly be successful.",
"cite_spans": [
{
"start": 82,
"end": 102,
"text": "Zhang et al., 2016b;",
"ref_id": "BIBREF44"
},
{
"start": 103,
"end": 120,
"text": "Qin et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 121,
"end": 141,
"text": "Zhang et al., 2018b;",
"ref_id": "BIBREF47"
},
{
"start": 142,
"end": 162,
"text": "Zhang et al., 2018a;",
"ref_id": "BIBREF46"
},
{
"start": 163,
"end": 183,
"text": "Zhang et al., 2018c;",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Q: \"\u519c\u6c11\u53ef\u80fd\u5145\u5f53\u4e00\u79cd\u6781\u7aef\u4fdd\u5b88\u7684\u89d2\u8272\uff0c\u4e5f\u53ef\u80fd\u5145\u5f53\u4e00\u79cd\u5177\u6709\u9ad8\u5ea6\u9769\u547d\u6027\u7684\u89d2\u8272\u3002\"\u8bd5\u7ed3 \u5408\u6709\u5173\u53f2\u5b9e\u8bc4\u6790\u8fd9\u4e00\u89c2\u70b9\u3002 \"Peasants may act as an extremely conservative role, or may be highly revolutionary.\" Try to analyze this view in the light of the relevant historical facts. A: \u519c\u6c11\u9636\u7ea7\u7684\u7ecf\u6d4e\u5730\u4f4d\u548c\u6240\u5904\u7684\u65f6\u4ee3\u6761\u4ef6\u51b3\u5b9a\u4e86\u5b83\u53ef\u80fd\u5145\u5f53\u4e00\u79cd\u5177\u6709\u9ad8\u5ea6\u9769\u547d\u6027\u7684\u89d2\u8272\u3002\u4ee5 \u592a\u5e73\u5929\u56fd\u8fd0\u52a8\u4e3a\u4f8b\uff0c\u5728\u6597\u4e89\u8fc7\u7a0b\u4e2d\u9881\u5e03\u7684\u300a\u5929\u671d\u7530\u4ea9\u5236\u5ea6\u300b\u5c31\u7a81\u51fa\u53cd\u6620\u4e86\u519c\u6c11\u9636\u7ea7\u8981\u6c42\u5e9f\u9664 \u5c01\u5efa\u571f\u5730\u6240\u6709\u5236\u7684\u5f3a\u70c8\u613f\u671b\uff0c\u8868\u73b0\u51fa\u4e86\u9ad8\u5ea6\u7684\u9769\u547d\u6027\u3002\u519c\u6c11\u9636\u7ea7\u5b58\u5728\u7684\u8fd9\u79cd\u4e24\u9762\u6027\u662f\u7531\u5176\u7ecf \u6d4e\u5730\u4f4d\u5373\u53d7\u5730\u4e3b\u9636\u7ea7\u538b\u8feb\u548c\u5c0f\u751f\u4ea7\u8005\u7684\u5730\u4f4d\u6240\u51b3\u5b9a\u7684\u3002 The economic and social conditions of peasants determine that they may act as a highly revolutionary role. Taking the Taiping Heavenly Kingdom movement as an example, the promulgated regulation \"heavenly land system\" reflected the peasant class's demands and strong desire of abolishing the system of feudal land ownership, which shows a strong degree of revolution. The dual nature of the peasant class is also the result of its economic status, that is, under the oppression of the landlord class and the status of the small producers. 
Q: \u6709\u4eba\u8bf4\"\u6587\u827a\u590d\u5174\"\u662f\u4e00\u573a\u590d\u53e4\u8fd0\u52a8\uff0c\u4f60\u5982\u4f55\u770b\u5f85\uff1f\u5982\u4f55\u8bc4\u4ef7\u5176\u610f\u4e49\uff1f Some people think the Renaissance was a retro movement. What is your opinion, and how do you evaluate its historic significance? A: \u6587\u827a\u590d\u5174\u8fd0\u52a8\u4ece\u8868\u9762\u7684\u542b\u4e49\u6765\u770b\uff0c\u662f\u4e00\u79cd\u590d\u5174\u53e4\u5e0c\u814a\u7f57\u9a6c\u65f6\u671f\u7684\u54f2\u5b66\u3001\u6587\u5b66\u548c\u827a\u672f\u7684\u6d3b\u52a8\uff0c\u662f\u4e00\u79cd\u590d\u53e4\u8fd0\u52a8\uff0c\u4f46\u4ece\u5176\u6df1\u5c42\u7684\u542b\u4e49\u770b\uff0c\u5b83\u5374\u662f\u4e00\u573a\u601d\u60f3\u89e3\u653e\u8fd0\u52a8\uff0c\u662f\u4e00\u6b21\u601d\u60f3\u9886\u57df\u91cc\u7684\u53d8\u9769\u3002 On the surface, the Renaissance was a movement reviving the philosophy, literature and art of ancient Greece and Rome, i.e., a retro movement. However, upon deeper inspection, it was actually an ideological liberation movement and a revolution of thought. Generally speaking, for the Gaokao challenge, knowledge sources are extensive and no sufficient structured dataset is available, while most existing work on knowledge representation has focused on structured and semi-structured types (Khot et al., 2017; Khashabi et al., 2016; Vieira, 2016) . With regard to answer retrieval, most current research has focused on factoid QA (Dai et al., 2016; Yin et al., 2016) , community QA (Lu and Kong, 2017) and episodic QA (Samothrakis et al., 2017; Xiong et al., 2016; Vinyals et al., 2016) . Compared with these existing studies, the concerned task is more composite and comprehensive, and has to be done from unstructured knowledge sources such as textbooks. Moreover, the answers in our Gaokao challenge QA task are always a group of detail-riddled sentences of various lengths, rather than the short sentences focused on in previous work (Yin et al., 2016; Yih et al., 2014) .",
"cite_spans": [
{
"start": 1610,
"end": 1629,
"text": "(Khot et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 1630,
"end": 1652,
"text": "Khashabi et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 1653,
"end": 1666,
"text": "Vieira, 2016)",
"ref_id": "BIBREF32"
},
{
"start": 1754,
"end": 1772,
"text": "(Dai et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 1773,
"end": 1790,
"text": "Yin et al., 2016)",
"ref_id": "BIBREF42"
},
{
"start": 1806,
"end": 1824,
"text": "Lu and Kong, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 1841,
"end": 1867,
"text": "(Samothrakis et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 1868,
"end": 1887,
"text": "Xiong et al., 2016;",
"ref_id": "BIBREF38"
},
{
"start": 1888,
"end": 1909,
"text": "Vinyals et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 2271,
"end": 2289,
"text": "(Yin et al., 2016;",
"ref_id": "BIBREF42"
},
{
"start": 2290,
"end": 2307,
"text": "Yih et al., 2014)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent research has turned to semi-supervised methods that leverage unlabeled texts to enhance the performance of QA tasks via deep neural networks (Yang et al., 2017) . Our task is somewhat different from previous ones in that the expected extra labels are difficult to annotate and the entire pool of unlabeled data is very small in scale, so semi-supervised methods cannot be conveniently applied. Notably, one-shot learning has been proved effective for image recognition with few samples, a strategy similar to how people learn concepts. As an implementation of one-shot learning, the neural Turing machine (NTM) was proposed (Santoro et al., 2016; Vinyals et al., 2016) and showed great potential in learning effective features from a small amount of data, which caters to our mission requirements. Inspired by this latest advance in one-shot learning, we train a weakly supervised classifier to annotate salient labels for questions using a small number of examples. For question answering, we propose a cooperative gated neural network (CGNN) to learn the semantic representations and corresponding relations between questions and answers. We also release a Chinese comprehensive deep question-answering dataset to facilitate the research. The rest of this paper is organized as follows. The next section introduces our models, including the retrieval model, CGNN, and the NTM labeler. Task details and experimental results are reported in Section 3, followed by a review of related work in Section 4 and the conclusion in Section 5.",
"cite_spans": [
{
"start": 146,
"end": 165,
"text": "(Yang et al., 2017;",
"ref_id": "BIBREF40"
},
{
"start": 639,
"end": 661,
"text": "(Santoro et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 662,
"end": 683,
"text": "Vinyals et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The proposed hybrid neural model is composed of two main parts: a cooperative gated neural network (CGNN) for feature representation and answer retrieval, and an NTM as a one-shot learning module for extra labeling. As shown in Figure 2 , we use the NTM to classify the question type and annotate the corresponding label. Then, the concatenated label vector and question representation are fed to the CGNN for jointly scoring candidate answers.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 229,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "For the QA task, the key problem is how to effectively capture the semantic connections between a question and the corresponding answer. However, the questions and answers are lengthy and noisy, resulting in poor feature extraction. Inspired by the success of the popular gated mechanism (Wang et al., 2017a; Chen et al., 2016a) and the gradient-easing highway network (Srivastava et al., 2015), a CGNN is proposed to score the correspondence of input QA pairs. The architecture is shown in Figure 3 .",
"cite_spans": [
{
"start": 284,
"end": 304,
"text": "(Wang et al., 2017a;",
"ref_id": "BIBREF36"
},
{
"start": 305,
"end": 324,
"text": "Chen et al., 2016a;",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main framework",
"sec_num": "2.1"
},
{
"text": "Embedding Our model reads each word of questions and answers as input. For an input sequence, the embedding is represented as $M \\in \\mathbb{R}^{d \\times n}$, where $d$ is the dimension of the word vectors and $n$ is the maximum length. When using the NTM module for question-type labeling, the question embedding is refined as the concatenation of the label vector and the original embedding. For computational efficiency, we specify a maximum number of words for the input and apply truncation or zero-padding when needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main framework",
"sec_num": "2.1"
},
{
"text": "Filter matrices $[W_1, W_2, \\ldots, W_k]$ with several variable window sizes $[l_1, l_2, \\ldots, l_k]$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Layer",
"sec_num": null
},
{
"text": "are utilized to perform the convolution operations over the input embeddings. Via parameter sharing, this feature extraction procedure is kept the same for questions and answers. For simplicity, we explain the procedure for only one embedding sequence. The embedding is transformed into sequences $c_j$ ($j \\in [1, k]$):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Layer",
"sec_num": null
},
{
"text": "c_j = [\\ldots;\\ \\tanh(W_j \\cdot M_{[i:i+l_j-1]} + b_j);\\ \\ldots]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Layer",
"sec_num": null
},
{
"text": "where $[i : i+l_j-1]$ indexes the convolution window. Additionally, we apply the wide convolution operation between the embedding layer and the filter matrices, because it ensures that all weights in the filters reach the entire sentence, including the words at the margins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Layer",
"sec_num": null
},
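The wide-convolution feature extraction described above can be sketched as follows (a minimal numpy illustration, not the authors' code; the flattened-window layout of `W` and the concrete shapes are assumptions):

```python
import numpy as np

def wide_conv_features(M, W, b):
    """Wide (full) 1-D convolution over an embedding matrix M (d x n):
    zero-pad both margins so every filter position covers the sentence,
    then apply tanh(W . M[i:i+l-1] + b) at each window, yielding c_j."""
    d, n = M.shape
    l = W.shape[1] // d                      # filter width
    pad = np.zeros((d, l - 1))
    Mp = np.concatenate([pad, M, pad], axis=1)
    windows = [Mp[:, i:i + l].reshape(-1) for i in range(n + l - 1)]
    return np.tanh(W @ np.array(windows).T + b[:, None])
```

With wide convolution the feature map has n + l - 1 columns, which is why the words at the margins are reached by every filter weight.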
{
"text": "To highlight the key information and ignore irrelevant parts during convolution, we adopt an adaptive gated decay mechanism for the information flow. Three gates are added to optimize the feature representation. These gates are influenced only by the original input, through different parameters. Let $\\hat{c}^n_j$ denote the n-gram features in $c_j$; the gates are formulated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "g_i = \\sigma(W_i \\hat{c}^n_j + b_i), \\quad g_f = \\sigma(W_f \\hat{c}^n_j + b_f), \\quad g_o = \\sigma(W_o \\hat{c}^n_j + b_o)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "where $\\sigma$ denotes the sigmoid function, which guarantees that the gate values lie in $[0,1]$, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "$W_i, W_f, W_o, b_i, b_f, b_o$ are model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "The transformed inner cell is iteratively calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "c^n_j = \\hat{c}^n_{j-1} \\odot g_i + g_f \\odot (\\hat{c}^{n-1}_{j-1} + W^n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "where $\\odot$ denotes element-wise multiplication and $W^n$ is a model parameter. Through the gated flow, the vector $c^n_j$ captures the filtered information. The output of our CGNN is $h_j = \\tanh(\\hat{c}^n_j) \\odot g_o$. The gates are used to route information through the flow. Although these gates are generated independently, they work cooperatively, since they jointly control the information flow of the inner cells. This procedure helps select salient features. With only one gate, our network works similarly to the original highway network (Srivastava et al., 2015) .",
"cite_spans": [
{
"start": 532,
"end": 557,
"text": "(Srivastava et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "A one-max-pooling operation is applied after the gated flow, and the current state vector $s_j = \\max(h_j)$ is obtained by concatenating the pooled mappings of the $k$ filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
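The gated flow above can be sketched roughly as follows (a minimal numpy sketch; the recurrence over positions is collapsed to a single step, and all shapes are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_flow(c_hat, Wi, Wf, Wo, bi, bf, bo):
    """One simplified step of the cooperative gated flow.
    All three gates are computed from the same n-gram features c_hat
    through different parameters and jointly route the information."""
    g_i = sigmoid(Wi @ c_hat + bi)       # input gate
    g_f = sigmoid(Wf @ c_hat + bf)       # forget gate
    g_o = sigmoid(Wo @ c_hat + bo)       # output gate
    c = g_i * c_hat + g_f * c_hat        # inner cell (recurrence collapsed)
    h = np.tanh(c) * g_o                 # gated output h_j
    return h
```

One-max-pooling of `h` over positions would then yield the state vector s_j described above.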
{
"text": "For discriminative training, we use a max-margin framework for learning (or fine-tuning) the parameters $\\theta$. Specifically, a scoring function $\\varphi(\\cdot, \\cdot; \\theta)$ is defined to measure the similarity between the corresponding representations of questions and answers; in this work, we use cosine similarity. Let $p = \\{p_1, \\ldots, p_n\\}$ denote the answer corpus and $a \\in p$ be the answer to question $q_i$; the optimal parameters $\\theta$ are learned by minimizing the max-margin loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "\\max\\{\\varphi(q_i, p_i; \\theta) - \\varphi(q_i, a; \\theta) + \\delta(p_i, a)\\}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
{
"text": "where $\\delta(\\cdot, \\cdot)$ denotes a non-negative margin: $\\delta(p_i, a)$ is a small constant when $a \\neq p_i$ and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Flow",
"sec_num": null
},
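A minimal sketch of the scoring and loss above (cosine similarity plus the hinge-style margin; the candidate loop and the margin value are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def cosine(u, v):
    # scoring function phi: cosine similarity of two representations
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def max_margin_loss(q, gold, candidates, margin=0.1):
    """Every candidate p_i must score below the gold answer a by at
    least `margin`; delta(p_i, a) is 0 for the gold answer itself."""
    gold_score = cosine(q, gold)
    losses = [cosine(q, p) - gold_score + (0.0 if p is gold else margin)
              for p in candidates]
    return max(0.0, max(losses))
```

The loss is zero exactly when the gold answer outscores every other candidate by the margin, which is what the ranking objective requires.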
{
"text": "Since examinees studying for exams are required to understand history knowledge from different perspectives, we consider using specially annotated labels as extra indicators that help the machine capture such features. With external memory, the NTM has shown great potential for one-shot learning (Santoro et al., 2016) . As in Figure 3 , an NTM consists of two primary components, a controller and a memory. The controller, often implemented as a recurrent neural network, interacts with the external memory module using soft read heads to retrieve representations from memory and write heads to store them.",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "(Santoro et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question Type Labeling",
"sec_num": "2.2"
},
{
"text": "Controller The controller of an NTM can be implemented as either a recurrent or a feedforward neural network. Our model adopts LSTM cells, which give the best performance (Graves et al., 2014) , following the formulation below, where $\\oplus$ denotes vector concatenation and $i_t, f_t, u_t, c_t, h_t$ are the input gate, forget gate, output gate, cell state, and hidden state, respectively.",
"cite_spans": [
{
"start": 198,
"end": 219,
"text": "(Graves et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Type Labeling",
"sec_num": "2.2"
},
{
"text": "i_t = \\sigma(W^i_w w_t + W^i_h h_{t-1} + b_i), \\quad f_t = \\sigma(W^f_w w_t + W^f_h h_{t-1} + b_f), \\quad u_t = \\sigma(W^u_w w_t + W^u_h h_{t-1} + b_u), \\quad c_t = f_t \\odot c_{t-1} + i_t \\odot \\tanh(W^c_w w_t + W^c_h h_{t-1} + b_c), \\quad h_t = \\tanh(c_t) \\odot u_t, \\quad o_t = h_t \\oplus m_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Type Labeling",
"sec_num": "2.2"
},
{
"text": "Given the input sequence, the controller computes the concatenated state sequence $o_t$ by applying the above formulation at each time step. Each word of the sequence is represented as a word embedding. To simplify the calculation, we apply one-max-pooling to each input; as a result, each sequence is represented as a vector with the same dimension as the word embedding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Type Labeling",
"sec_num": "2.2"
},
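The controller step can be sketched as a standard LSTM cell whose output concatenates the hidden state with the retrieved memory vector (a minimal sketch; the parameter names in `P` are hypothetical):

```python
import numpy as np

def controller_step(w, h_prev, c_prev, m, P):
    """One LSTM controller step; o_t = h_t (+) m_t concatenates the
    hidden state with the memory read m_t. Here u is the output gate."""
    s = lambda x: 1.0 / (1.0 + np.exp(-x))
    i = s(P['Wiw'] @ w + P['Wih'] @ h_prev + P['bi'])   # input gate
    f = s(P['Wfw'] @ w + P['Wfh'] @ h_prev + P['bf'])   # forget gate
    u = s(P['Wuw'] @ w + P['Wuh'] @ h_prev + P['bu'])   # output gate
    c = f * c_prev + i * np.tanh(P['Wcw'] @ w + P['Wch'] @ h_prev + P['bc'])
    h = np.tanh(c) * u
    return np.concatenate([h, m]), h, c
```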
{
"text": "We use a rectangular matrix $M \\in \\mathbb{R}^{N \\times M}$ to denote the memory module. The memory $m_t$ is retrieved by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "m_t = r_t M_t, \\quad r_t(i) = \\frac{\\exp(K(k_t, M_t(i)))}{\\sum_j \\exp(K(k_t, M_t(j)))}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "where $r_t$ is the read vector and $K$ is a similarity score. For writing, the memory is updated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "M_t = M_{t-1} \\cdot (1 - w_t e_t) + w_t c_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "where $w_t$, $e_t$ and $c_t$ represent the write vector, erase vector and content vector, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
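A minimal sketch of the content-based read (a softmax over similarities K, taken here to be cosine) and the erase-then-add write (hypothetical helper functions, with the erase/add applied as outer products over slots):

```python
import numpy as np

def content_read(key, memory):
    """Softmax over cosine similarities between key k_t and each memory
    row, then a weighted sum of rows: m_t = r_t M_t."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    r = np.exp(sims - sims.max())
    r /= r.sum()                      # read weights r_t
    return r @ memory, r

def memory_write(memory, w, e, c):
    """M_t = M_{t-1} * (1 - w e) + w c: erase each slot in proportion
    to its write weight, then add the new content vector."""
    return memory * (1.0 - np.outer(w, e)) + np.outer(w, c)
```

Reading with a key equal to a stored row concentrates the read weights on that row, which is the behavior content-based addressing is designed for.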
{
"text": "The original NTM has two categories of memory addressing: content-based addressing, which compares each memory location with a key $k_t$, and location-based addressing, which shifts the heads, akin to running along a tape. However, location-based addressing is not optimal for conjunctive coding of information independent of sequence. Following (Santoro et al., 2016) , we use an addressing strategy called Least Recently Used Access (LRUA).",
"cite_spans": [
{
"start": 363,
"end": 385,
"text": "(Santoro et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "Addressing The usage weights $u_t$ are defined as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "u_t = \\gamma u_{t-1} + r_t + w_t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "where $\\gamma$ is a decay parameter. The least-used vector $v_t$ is boolean: for a given time step, each element of $v_t$ is set to 0 if the corresponding element of $u_t$ is greater than the smallest element of $u_t$, and to 1 otherwise. The write weights are obtained by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "w_t = \\sigma(g_t) r_{t-1} + (1 - \\sigma(g_t)) v_{t-1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "where $g_t$ is a scalar and $\\sigma(g_t) \\in (0,1)$ interpolates between the previous read weights and the previous least-used weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
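The LRUA bookkeeping above can be sketched as follows (a hypothetical helper; `gamma` is the decay parameter, and the usage update reuses the previous read weights for r_t):

```python
import numpy as np

def lrua_write_weights(g, r_prev, u_prev, gamma=0.95):
    """Blend the previous read weights with the least-used indicator:
    w_t = sigma(g_t) r_{t-1} + (1 - sigma(g_t)) v_{t-1}, then update
    the usage weights u_t = gamma u_{t-1} + r_t + w_t."""
    sg = 1.0 / (1.0 + np.exp(-g))
    v = (u_prev <= u_prev.min()).astype(float)   # least-used slots -> 1
    w = sg * r_prev + (1.0 - sg) * v
    u = gamma * u_prev + r_prev + w
    return w, u
```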
{
"text": "Learning For our labeling task, we define a cost function as the negative log-likelihood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "L(\\theta) = -\\sum_{n=1}^{N} y_n \\log(\\Pr(\\hat{y}_n))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "where $y_n$ is the gold label and $\\hat{y}_n$ is the predicted one. After perspective detection, the label is fed to the CGNN module as a one-hot vector and concatenated with the question representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Memory Manipulation",
"sec_num": null
},
{
"text": "Corpus The Gaokao challenge is an imitation of an open-book examination for computers. Therefore, only a quite limited number of resources can be used, which prevents common semi-supervised methods from benefiting from large-scale unlabeled data. In addition, all sources for the answers come from standard textbooks, which contain unstructured plain text, as assumed when we initialized our system building.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The corpus is from the standard history textbook 2 . First, to compose the answer set, 1,929 text fragments 3 were extracted by human experts. Then an equal number of questions was collected from past Gaokao exam papers, and their answers were manually assigned from the answer set, giving 1,929 annotated QA pairs, which were equally split into training and test sets. All text fragments in both questions and answers are segmented into words using BaseSeg (Zhao et al., 2006) . We publish the dataset to the research community to facilitate further research 4 . Data statistics are given in Table 2 . Setup For computational efficiency, a maximal length of 100 words per text fragment is specified, and truncation or zero-padding is applied as necessary. Word embeddings are trained with word2vec (Mikolov et al., 2013) on a Wikipedia corpus 5 . The diagonal variant of AdaGrad (Duchi et al., 2011) is used for neural network training. Table 3 shows the hyper-parameters of our models. For the other neural models, we fine-tune the hyper-parameters over the following ranges: learning rate in {1e-3, 1e-2}, dropout probability in {0.1, 0.2}, CNN filter width in {2, 3, 4}. The hidden dimension of all the neural networks is 200.",
"cite_spans": [
{
"start": 451,
"end": 470,
"text": "(Zhao et al., 2006)",
"ref_id": "BIBREF49"
},
{
"start": 793,
"end": 815,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 875,
"end": 895,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 565,
"end": 572,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 933,
"end": 940,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In the following experiments, we use the NTM to annotate a salient label for each question. Then, the question representation and the corresponding label vector are concatenated and fed to the CGNN for feature learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Labeling Examinees are required to understand history knowledge from different perspectives; thus, the solver should also be capable of capturing such features. To specify the key perspective of a historical question, five classes 6 (background, cause, claim, fact and influence) are annotated as shown in Table 4 . Note that such annotation is not easy even for human annotators or history teachers, and our complete answer set is very small (fewer than 2,000 items). In total, we annotated 80 answers per class, 400 in all, of which 50 are selected for training 7 and 350 for test. Our data is too precious to spend much of it on training.",
"cite_spans": [],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "One-shot learning is supposed to effectively represent the questions in feature space with a small set of training data. To evaluate its performance, we specify a fixed number of training examples. At each epoch, the NTM is randomly provided with a specific number (shot) of samples per class from the training set, following the typical training setting of one-shot learning (Santoro et al., 2016) . In previous practice, one-shot learning usually still took a large candidate training set (i.e., hundreds of examples) as input, although for each class only a small fixed number of shots (i.e., fewer than 10) were randomly selected and fed for learning. However, our data reality is quite different, and we have to keep a minimal training set (10 in our case) for one-shot learning. Table 6 : Answer retrieval performance without and with NTM labels (in brackets). \"+\" means the performance gain with the help of NTM labels. Table 5 compares the NTM with traditional classifiers (Naive Bayes, K-Means and SVM) and neural networks (a feedforward network and an LSTM). All the baseline methods are fed the training instances in the same shot style as the NTM, and their parameters are tuned for best performance. Figure 4 illustrates the NTM learning curves with different shots. The results show that the NTM works effectively on a small number of samples. As it also has memory cells, the LSTM is chosen as a strong baseline; its hidden layer dimension is 200, the same as the NTM controller size, and it is fed the training instances in the same shot style as the NTM. The results in Table 5 indicate that only the NTM provides satisfactory performance with the fewest input instances. The effectiveness of the NTM may be attributed to its capability of slowly learning representations of the raw data through weight updates, while rapidly binding relevant information after a single presentation via its external memory. To that extent, the NTM can generalize the meta-features of each question type and distinguish the perspectives. This indicates that one-shot learning is able to lessen the over-fitting caused by the sparse features of limited data.",
"cite_spans": [
{
"start": 370,
"end": 392,
"text": "(Santoro et al., 2016)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 772,
"end": 779,
"text": "Table 6",
"ref_id": null
},
{
"start": 915,
"end": 922,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1181,
"end": 1189,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 1564,
"end": 1571,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Answer Retrieval For discriminative training, we add negative QA pairs by randomly selecting 20 false answers for each positive QA pair in the original training set. For evaluation, all answers in the corpus are ranked to match each question. The NTM labeler is trained in the 10-shot setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
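A minimal sketch of the negative-pair construction described above, assuming positives are labeled 1 and sampled false answers 0; the function and variable names are hypothetical.

```python
import random

def build_training_pairs(qa_pairs, all_answers, num_negatives=20):
    """Expand each positive (question, answer) pair with randomly chosen
    false answers: positives get label 1, negatives label 0."""
    examples = []
    for question, answer in qa_pairs:
        examples.append((question, answer, 1))
        # candidate false answers: everything except the gold answer
        candidates = [a for a in all_answers if a != answer]
        for neg in random.sample(candidates, min(num_negatives, len(candidates))):
            examples.append((question, neg, 0))
    return examples
```

At evaluation time no sampling is involved: every answer in the corpus is scored and ranked against each question.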
{
"text": "We use baselines including BM25 implemented by a standard search engine Apache Lucene 8 and other neural models, Convolutional Neural Network (CNN), LSTM, Gated Recurrent Unit (GRU). Our evaluation is based on the following metrics: Precision, Recall and F1-score, which are widely used for relevance evaluation. Table 6 presents the experimental results of all the models with and without NTM labels. All of the neural networks greatly outperform BM25 which is purely based on word form matching. In addition, NTM labeler boosts the performance of all the methods. For instance, our CGNN model has obtained 17.7% gains on the F1-score metric with the assistance of extra labeling. Furthermore, our model using CGNN and NTM labeler outperforms all the other baselines. This superior performance indicates that the one-shot learning strategy is competent for our deep QA task with only a small amount of data and the adoption of gated mechanism is also well-suited to work with CNN, adaptively transforming and combining local features detected by the individual filters. Table 7 shows question type distribution and the performance of CGNN. Our model performs well on most types but drops on Fact since these questions are diverse kinds of facts whose patterns are extremely hard to distinguish, even for human. This also shows our task is challenging.",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 320,
"text": "Table 6",
"ref_id": null
},
{
"start": 1071,
"end": 1078,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
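The relevance metrics above can be sketched over sets of retrieved and gold answer ids; this is a generic illustration, not the paper's exact evaluation script.

```python
def precision_recall_f1(retrieved, relevant):
    """Compute Precision, Recall and F1 over sets of answer ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1(["a1", "a2", "a3", "a4"], ["a1", "a2", "a5"])
# 2 hits: precision = 0.5, recall = 2/3, F1 = 4/7
assert abs(f - 4 / 7) < 1e-9
```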
{
"text": "Model Analysis The primary motivation of highway network is to ease gradient-based training of highly deep networks through utilizing gated units. When remaining one gate in our network, our model works similarly to the highway network. To compare different gated mechanisms, we carry out further experiments with the same CNN setting. As shown in Table 6 , CGNN achieves the best performance in terms of all evaluation metrics, indicating our gated mechanism is well-suited to work with CNN, adaptively transforming and combining local features detected by the individual filters. In order to give an insight into the effectiveness of the gated mechanism on information flow across neural network layers, we visualize the final weights for each word from question and answer after gated flow as shown in Figure 5 . We can see that the key information (human, productivity, development, world market, life and influence) of the lengthy questions and answers will be given higher weights, especially the common words (world market) of the question and corresponding answer. Besides, the label in the question is also assigned a high attention, indicating the label works essentially.",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 6",
"ref_id": null
},
{
"start": 805,
"end": 813,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
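The single-gate case mentioned above, where the model reduces to a highway layer, can be sketched element-wise. This is an assumption-level illustration of the gating arithmetic with scalar per-dimension weights, not the paper's actual CGNN.

```python
import math

def highway(x, w_h, b_h, w_t, b_t):
    """Element-wise single-gate (highway) combination over a vector:
    the transform gate t decides, per dimension, how much transformed
    signal h passes through versus how much raw input x is carried."""
    out = []
    for xi, whi, bhi, wti, bti in zip(x, w_h, b_h, w_t, b_t):
        h = math.tanh(whi * xi + bhi)                   # candidate transform
        t = 1.0 / (1.0 + math.exp(-(wti * xi + bti)))   # transform gate in (0, 1)
        out.append(t * h + (1.0 - t) * xi)              # gated mix
    return out

# A gate biased strongly closed carries the input through almost unchanged.
y = highway([0.5, -1.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0], [-20.0, -20.0])
assert all(abs(a - b) < 1e-6 for a, b in zip(y, [0.5, -1.0]))
```

This carry behavior is exactly what eases gradient flow in very deep stacks: when the gate closes, the layer acts as an identity connection.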
{
"text": "Various neural models have been proposed to tackle the tasks of QA task and related knowledge representation (Wang et al., 2017b; Chen et al., 2016b; Todor and Anette, 2018; Kundu and Ng, 2018) . Previous work mainly focused on factoid questions, falling into the architecture of knowledge base embedding and question answering from knowledge base (Khashabi et al., 2016; Angeli et al., 2015) . Yang et al. (2014) proposed a method that transforms natural questions into their corresponding logical forms and conducted question answering by leveraging semantic associations between lexical representations and knowledge base properties in the latent space. Yin et al. (2016) proposed an end-to-end neural model that can generate answers to simple factoid questions from a knowledge base. Nonetheless, the knowledge base construction requires a lot of human workload, and the related semantic parsing and knowledge representation will become much more complicated as the scale of the knowledge base increases.",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(Wang et al., 2017b;",
"ref_id": "BIBREF37"
},
{
"start": 130,
"end": 149,
"text": "Chen et al., 2016b;",
"ref_id": "BIBREF4"
},
{
"start": 150,
"end": 173,
"text": "Todor and Anette, 2018;",
"ref_id": "BIBREF31"
},
{
"start": 174,
"end": 193,
"text": "Kundu and Ng, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 348,
"end": 371,
"text": "(Khashabi et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 372,
"end": 392,
"text": "Angeli et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 395,
"end": 413,
"text": "Yang et al. (2014)",
"ref_id": "BIBREF39"
},
{
"start": 657,
"end": 674,
"text": "Yin et al. (2016)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "4.1"
},
{
"text": "For non-factoid QA, most work formulized it as a semantic matching task on sentence pairs by vector modeling. Different from answer retrieval from knowledge bases, this type of studies used vector to represent QA pairs and compare the distance in vector space to match answer text (Tan et al., 2015; Feng et al., 2015) . However, the performance of these neural models greatly depends on a large amount of labeled data.",
"cite_spans": [
{
"start": 281,
"end": 299,
"text": "(Tan et al., 2015;",
"ref_id": "BIBREF30"
},
{
"start": 300,
"end": 318,
"text": "Feng et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "4.1"
},
{
"text": "Our concerned task cannot be simply regarded as either factoid or non-factoid. In fact, questions in our task consist of both factoid and non-factoid ones with purely unstructured corresponding answers. Conventional networks might not be sufficient to represent these QA pairs and we need to seek more powerful models. A recent hot comprehensive QA task is the SQuAD challenge (Rajpurkar et al., 2016) which aims to find a text span from given paragraph to answer the question. However, our task is quite different from SQuAD. With sufficient learning data, neural models have shown satisfying performance in the SQuAD challenge. In real world practice, examinees are acquired to learn and comprehend metaknowledge from textbooks so as to pass the exams through review and reasoning among the whole knowledge space instead of simply finding an answer span from a given paragraph, which shows to be challenging for neural networks.",
"cite_spans": [
{
"start": 377,
"end": 401,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Answering",
"sec_num": "4.1"
},
{
"text": "For real-world applications of specific domains, the datasets are always inadequate. Semi-supervised Learning methods have been extensively studied to make use of unlabeled data. A batch of models have been proposed based on representation learning (Yang et al., 2017; . For question answering, current semi-supervised models generally use auto-encoder framework or generative model to obtain the representation of unlabeled data. However, these semi-supervised models can not be applied to generate labels for our raw QA pairs since our task requires diverse types of labels from discourse to sentence which are difficult to annotate, and it is more serious that the entire unlabeled data is also too small to support an effective semi-supervised learning. Thus, we must very carefully choose a proper learning strategy for our task.",
"cite_spans": [
{
"start": 249,
"end": 268,
"text": "(Yang et al., 2017;",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "4.2"
},
{
"text": "Different from the above research line, one-shot learning belongs to a kind of weakly supervised method (Bordes et al., 2014) that was first proposed to learn information about object categories from one or only a few training images . Recently, memory-based neural network models have greatly expanded the ways of storing and accessing information. Two promising neural networks with external memory have been proposed to connect deep neural network with one-shot learning. One is Neural Turing Machine (Graves et al., 2014) , which can read or write the facts to an external differentiable memory. Santoro et al. (2016) proved it to be an effective method for image recognition. The other is Memory Network (Vinyals et al., 2016) . The crucial difference between them is that the latter does not have a mechanism to modify the content of the external memory, which lets us choose the former as our one-shot learning implementation. Compared with previous methods for question answering, our model is more weakly supervised with a limited amount of training data. As to our best knowledge, this is the first attempt that adopts one-shot learning for a deep QA task by using NTM as automatic labeler.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "(Bordes et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 525,
"text": "(Graves et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 600,
"end": 621,
"text": "Santoro et al. (2016)",
"ref_id": "BIBREF28"
},
{
"start": 709,
"end": 731,
"text": "(Vinyals et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "4.2"
},
{
"text": "This paper presents a neural model with NTM for a challenging deep question answering task from history examinations. Our experimental results show that the adopted CGNN together with other neural models works much better than BM25. With an NTM labeler, all the deep neural models are further enhanced. We also release a Chinese comprehensive deep question answering dataset and have launched a new research line for the Gaokao challenge and solve more complicated questions using deep neural networks to learn semantic representation while the previous work focused on choice questions using simple information retrieval methods (Cheng et al., 2016) . In fact, open-domain questions that can often be distinguished into different aspects can always benefit from one-shot learnings over a few labeling samples, which has been verified the effectiveness in this paper by only relying on the least task-dependent assumptions.",
"cite_spans": [
{
"start": 630,
"end": 650,
"text": "(Cheng et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Standard Middle-school History Textbook (Vol. 1-3), published by the People's Education Press in May, 2015. 3 1,929 is the number of all must-to-be-mastered history facts required by national education quality control.4 Our source is available at: https://github.com/cooelf/OneshotQA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dumps.wikimedia.org/ 6 These classes are determined according to national education quality control who requires every student should clearly distinguish these specific perspectives about history facts. More details are in Supplementary Materials.7 As our labeling is a five-class classification task, 50 samples therefore allow 10-shot learning per class at most.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lucene.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Leveraging linguistic structure for open domain information extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"Johnson"
],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Melvin Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In ACL, pages 344-354.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open question answering with weakly supervised embedding models",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2014,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "165--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014. Open question answering with weakly supervised em- bedding models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 165-180.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A full end-to-end semantic role labeler, syntacticagnostic or syntactic-aware?",
"authors": [
{
"first": "Jiaxun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic- agnostic or syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Implicit discourse relation detection via a deep architecture with gated relevance network",
"authors": [
{
"first": "Jifan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1726--1735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016a. Implicit discourse relation detection via a deep architecture with gated relevance network. In ACL, pages 1726-1735.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enhancing and combining sequential and tree lstm for natural language inference",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.06038"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016b. Enhancing and combining sequential and tree lstm for natural language inference. arXiv preprint arXiv:1609.06038.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Taking up the gaokao challenge: An information retrieval approach",
"authors": [
{
"first": "Gong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Weixi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ziwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianghui",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2479--2485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gong Cheng, Weixi Zhu, Ziwei Wang, Jianghui Chen, and Yuzhong Qu. 2016. Taking up the gaokao challenge: An information retrieval approach. IJCAI, pages 2479-2485.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Elementary school science and math tests as a driver for ai: Take the aristo challenge! In IAAI",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "4019--4021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark. 2015. Elementary school science and math tests as a driver for ai: Take the aristo challenge! In IAAI, pages 4019-4021.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CFO: Conditional focused neural question answering with large-scale knowledge bases",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01994"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Lei Li, and Wei Xu. 2016. CFO: Conditional focused neural question answering with large-scale knowledge bases. arXiv preprint arXiv:1606.01994.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "39",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(39):2121-2159.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Applying deep learning to answer selection: A study and an open task",
"authors": [
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
},
{
"first": "Lidan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "ASRU",
"volume": "",
"issue": "",
"pages": "813--820",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learning to answer selection: A study and an open task. In ASRU, pages 813-820.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building watson: An overview of the deepqa project",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gondek",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"A"
],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Prager",
"suffix": ""
}
],
"year": 2010,
"venue": "AI magazine",
"volume": "31",
"issue": "3",
"pages": "59--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building watson: An overview of the deepqa project. AI magazine, 31(3):59-79.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of todai robot project and evaluation framework of its nlp-based problem solving",
"authors": [
{
"first": "Akira",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Kameda",
"suffix": ""
},
{
"first": "Ai",
"middle": [],
"last": "Kawazoe",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
}
],
"year": 2014,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "2590--2597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akira Fujita, Akihiro Kameda, Ai Kawazoe, and Yusuke Miyao. 2014. Overview of todai robot project and evaluation framework of its nlp-based problem solving. ICLR, pages 2590-2597.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural turing machines",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.5401"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Which is the effective way for gaokao: Information retrieval or neural networks? In EACL",
"authors": [
{
"first": "Shangmin",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangmin Guo, Xiangrong Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2017. Which is the effective way for gaokao: Information retrieval or neural networks? In EACL, pages 111-120.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Syntax for semantic role labeling, to be, or not to be",
"authors": [
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Hongxiao",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Gongshen",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shexia He, Zuchao Li, Hai Zhao, Hongxiao Bai, and Gongshen Liu. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tracking the world state with recurrent entity networks",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Henaff",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.03969"
]
},
"num": null,
"urls": [],
"raw_text": "Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann Lecun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Moon IME: neural-based chinese pinyin aided input method with customizable association",
"authors": [
{
"first": "Yafang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yafang Huang, Zuchao Li, Zhuosheng Zhang, and Hai Zhao. 2018. Moon IME: neural-based chinese pinyin aided input method with customizable association. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), System Demonstration.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Question answering via integer programming over semi-structured knowledge",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "1145--1152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming over semi-structured knowledge. In IJCAI, pages 1145-1152.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Answering complex questions using open information extraction",
"authors": [
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "311--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open information extraction. ACL, pages 311-316.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A question-focused multi-factor attention network for question answering",
"authors": [
{
"first": "Souvik",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.08290"
]
},
"num": null,
"urls": [],
"raw_text": "Souvik Kundu and Hwee Tou Ng. 2018. A question-focused multi-factor attention network for question answer- ing. arXiv preprint arXiv:1801.08290.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semi-supervised question retrieval with gated convolutions",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Hrishikesh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "Katerina",
"middle": [],
"last": "Tymoshenko",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Marquez",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "1279--1289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Katerina Tymoshenko, Alessandro Moschitti, and Lluis Marquez. 2016. Semi-supervised question retrieval with gated convolutions. In NAACL, pages 1279- 1289.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "One-shot learning of object categories",
"authors": [
{
"first": "Fei-Fei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Perona",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "28",
"issue": "4",
"pages": "594--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei-Fei Li, R. Fergus, and P. Perona. 2006. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(4):594-611.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Seq2seq dependency parsing",
"authors": [
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiaxun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Community-based question answering via contextual ranking metric network learning",
"authors": [
{
"first": "Hanqing",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)",
"volume": "",
"issue": "",
"pages": "4963--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanqing Lu and Ming Kong. 2017. Community-based question answering via contextual ranking metric network learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 4963- 4964.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Adversarial connective-exploiting networks for implicit discourse relation classification",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)",
"volume": "",
"issue": "",
"pages": "1006--1017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric P. Xing. 2017. Adversarial connective-exploiting networks for implicit discourse relation classification. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL 2017), pages 1006-1017.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP 2016",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP 2016, pages 2383-2392.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Convolutional-match networks for question answering",
"authors": [
{
"first": "Spyridon",
"middle": [],
"last": "Samothrakis",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Vodopivec",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fairbank",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Fasli",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2686--2692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spyridon Samothrakis, Tom Vodopivec, Michael Fairbank, and Maria Fasli. 2017. Convolutional-match networks for question answering. In IJCAI, pages 2686-2692.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "One-shot learning with memory-augmented neural networks",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Bartunov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2016,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "1842--1850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. 2016. One-shot learning with memory-augmented neural networks. In ICML, pages 1842-1850.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "LSTM-based deep learning models for non-factoid answer selection",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.04108"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. LSTM-based deep learning models for non-factoid answer selection. arXiv preprint arXiv:1511.04108.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Enhancing cloze-style reading comprehension with external common knowledge using explicit key-value memory",
"authors": [
{
"first": "Todor",
"middle": [],
"last": "Mihaylov",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaylov Todor and Frank Anette. 2018. Enhancing cloze-style reading comprehension with external common knowledge using explicit key-value memory. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Knowledge representation in graphs using convolutional neural networks",
"authors": [
{
"first": "Armando",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.02255"
]
},
"num": null,
"urls": [],
"raw_text": "Armando Vieira. 2016. Knowledge representation in graphs using convolutional neural networks. arXiv preprint arXiv:1612.02255.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Matching networks for one shot learning",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3630--3638",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "An overview of microsoft deep qa system on stanford webquestions benchmark",
"authors": [
{
"first": "Zhenghao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shengquan",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Huaming",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenghao Wang, Shengquan Yan, Huaming Wang, and Xuedong Huang. 2014. An overview of microsoft deep qa system on stanford webquestions benchmark. Technical report, Technical report, Microsoft Research.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Bilingual continuous-space language model growing for statistical machine translation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bao",
"middle": [
"Liang"
],
"last": "Lu",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Transactions on Audio Speech & Language Processing",
"volume": "23",
"issue": "7",
"pages": "1209--1220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Hai Zhao, Bao Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2015. Bilingual continuous-space lan- guage model growing for statistical machine translation. IEEE/ACM Transactions on Audio Speech & Language Processing, 23(7):1209-1220.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Gated self-matching networks for reading comprehension and question answering",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "189--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017a. Gated self-matching networks for reading comprehension and question answering. In ACL, pages 189-198.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Bilateral multi-perspective matching for natural language sentences",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03814"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Wael Hamza, and Radu Florian. 2017b. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Dynamic memory networks for visual and textual question answering. arXiv",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. arXiv, 1603.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Joint relational embeddings for knowledgebased question answering",
"authors": [
{
"first": "Min Chul",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hae",
"middle": [
"Chang"
],
"last": "Rim",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "645--650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Chul Yang, Nan Duan, Ming Zhou, and Hae Chang Rim. 2014. Joint relational embeddings for knowledge- based question answering. In EMNLP, pages 645-650.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Semi-supervised qa with generative domain-adaptive nets",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.02206"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William W Cohen. 2017. Semi-supervised qa with generative domain-adaptive nets. arXiv preprint arXiv:1702.02206.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Semantic parsing for single-relation question answering",
"authors": [
{
"first": "Wen Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "643--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen Tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answer- ing. In ACL, pages 643-648.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Neural generative question answering",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "2972--2978",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In IJCAI, pages 2972-2978.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Learning distributed representations of data in community question answering for question retrieval",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM",
"volume": "",
"issue": "",
"pages": "533--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Zhang, Wei Wu, Fang Wang, Ming Zhou, and Zhoujun Li. 2016a. Learning distributed representations of data in community question answering for question retrieval. In ACM, pages 533-542.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Probabilistic graph-based dependency parsing with convolutional neural network",
"authors": [
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)",
"volume": "",
"issue": "",
"pages": "1382--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016b. Probabilistic graph-based dependency parsing with con- volutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1382-1392.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Attentive interactive neural networks for answer selection in community question answering",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)",
"volume": "",
"issue": "",
"pages": "3525--3531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Zhang, Sujian Li, Lei Sha, and Houfeng Wang. 2017. Attentive interactive neural networks for answer selection in community question answering. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), pages 3525-3531.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Subword-augmented embedding for cloze reading comprehension",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yafang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2018a. Subword-augmented embedding for cloze reading comprehension. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Sjtu-nlp at semeval-2018 task 9: Neural hypernym discovery with term embeddings",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiangtong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bingjie",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuosheng Zhang, Jiangtong Li, Hai Zhao, and Bingjie Tang. 2018b. Sjtu-nlp at semeval-2018 task 9: Neural hypernym discovery with term embeddings. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval 2018), Workshop of NAACL-HLT 2018.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Modeling multi-turn conversation with deep utterance aggregation",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiangtong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, and Hai Zhao. 2018c. Modeling multi-turn conversation with deep utterance aggregation. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018).",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "An improved Chinese word segmentation system with conditional random field",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth Sighan Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "162--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Taku Kudo. 2006. An improved Chinese word segmentation system with conditional random field. Proceedings of the Fifth Sighan Workshop on Chinese Language Processing, pages 162-165.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "Figure 2: Model architecture.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Figure 3: NTM architecture",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "Labeling accuracy curve.",
"num": null,
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"text": "analyze the relationship between human productivity development and world market formation, and evaluate the impact of the formation of global market on human lifestyles.increases national communication business mutual complement world market material preparation triggers technology revolution stimulates Answer (extract): the development of human productivity increases the frequency of national communication, strengthens the mutual complement among nations and business, provides sufficient material preparation for global market formation, triggers the technology revolution of transportation and stimulates both migration and liquidity among nations.",
"num": null,
"uris": null
},
"FIGREF6": {
"type_str": "figure",
"text": "Visualization for question and answer representation weights after gated flow. The darker color means the higher weights.",
"num": null,
"uris": null
},
"TABREF0": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Comprehensive question examples from Gaokao history exams.",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Data statistics of the comprehensive question-answering corpus. .",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Class Labels</td><td>Answers</td></tr><tr><td/><td>\u8fd0\u7528\u6211\u4eec\u4ece\u53e4\u4ee3\u8bd7\u6587\u3001\u620f\u66f2\u3001\u6c11\u95f4\u4f20\u8bf4\u4e2d\u5df2\u7ecf\u5b66\u5230\u7684\u77e5\u8bc6\uff0c\u4e3e\u4f8b\u8bf4\u660e\u4e2d\u56fd</td></tr><tr><td/><td>\u53e4\u4ee3\u81ea\u7ed9\u81ea\u8db3\u7684\u81ea\u7136\u7ecf\u6d4e\u7684\u72b6\u51b5\u3002</td></tr><tr><td colspan=\"2\">\u80cc\u666f \u6625\u79cb\u6218\u56fd\u65f6\u671f\u662f\u793e\u4f1a\u5267\u70c8\u52a8\u8361\u7684\u5386\u53f2\u9636\u6bb5\uff0c\u4e3a\u4ec0\u4e48\u5728\u8fd9\u6837\u7684\u65f6\u671f \u4f1a\u51fa\u73b0\u601d</td></tr><tr><td/><td>\u60f3\u6587\u5316\u6d3b\u8dc3\u7684\u5c40\u9762\u3002</td></tr><tr><td>\u539f\u56e0 Cause</td><td>The Spring-autumn and Warring States Period was the historical stage of the vi-</td></tr><tr><td/><td>olent social unrest. Why would there be an active ideological and cultural phe-</td></tr><tr><td/><td>nomenon in such a period?</td></tr><tr><td/><td>\u5728\u542f\u8499\u8fd0\u52a8\u4e2d\uff0c\u4f17\u591a\u7684\u542f\u8499\u601d\u60f3\u5bb6\u7684\u5171\u6027\u601d\u60f3\u4e3b\u5f20\u662f\u4ec0\u4e48\uff1f \u4ed6\u4eec \u4e4b\u95f4\u6709\u4f55</td></tr><tr><td/><td>\u7ee7\u627f\u548c\u53d1\u5c55\u3002</td></tr><tr><td>\u4e3b\u5f20 Claim</td><td>During the Enlightenment, what is the common thought of those enlightening</td></tr><tr><td/><td>thinkers? What is the inheritance and development between them?</td></tr><tr><td/><td>\"\u519c\u6c11\u53ef\u80fd\u5145\u5f53\u4e00\u79cd\u6781\u7aef\u4fdd\u5b88\u7684\u89d2\u8272\uff0c\u4e5f\u53ef\u80fd\u5145\u5f53\u4e00\u79cd\u5177\u6709\u9ad8\u5ea6\u9769 \u547d\u6027\u7684</td></tr><tr><td/><td>\u89d2\u8272\u3002\" \u8bd5\u7ed3\u5408\u6709\u5173\u53f2\u5b9e\u8bc4\u6790\u8fd9\u4e00\u89c2\u70b9\u3002</td></tr><tr><td>\u4e8b\u5b9e Fact</td><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Hyper-parameters of our model. Background Using the knowledge learned from ancient poetic proses, operas, and folklores, exemplify the situation of self-sufficiency of natural economy in ancient China.",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF7": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Accuracy for labeling task.",
"num": null
},
"TABREF10": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Question type distribution and model performance of CGNN.",
"num": null
}
}
}
}