{
"paper_id": "P17-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:16:28.948262Z"
},
"title": "Gated Self-Matching Networks for Reading Comprehension and Question Answering",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "MOE",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": "wangwenhui@pku.edu.cn"
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"postCode": "221009",
"settlement": "Beijing, Xuzhou",
"country": "China, China"
}
},
"email": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"postCode": "221009",
"settlement": "Beijing, Xuzhou",
"country": "China, China"
}
},
"email": "fuwei@microsoft.com"
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "MOE",
"institution": "Peking University",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"postCode": "221009",
"settlement": "Beijing, Xuzhou",
"country": "China, China"
}
},
"email": "mingzhou@microsoft.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model. * Contribution during internship at Microsoft Research. \u00a7 Equal contribution.",
"pdf_parse": {
"paper_id": "P17-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the gated selfmatching networks for reading comprehension style question answering, which aims to answer questions from a given passage. We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation. Then we propose a self-matching attention mechanism to refine the representation by matching the passage against itself, which effectively encodes information from the whole passage. We finally employ the pointer networks to locate the positions of answers from the passages. We conduct extensive experiments on the SQuAD dataset. The single model achieves 71.3% on the evaluation metrics of exact match on the hidden test set, while the ensemble model further boosts the results to 75.9%. At the time of submission of the paper, our model holds the first place on the SQuAD leaderboard for both single and ensemble model. * Contribution during internship at Microsoft Research. \u00a7 Equal contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we focus on reading comprehension style question answering which aims to answer questions given a passage or document. We specifically focus on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) , a largescale dataset for reading comprehension and question answering which is manually created through crowdsourcing. SQuAD constrains answers to the space of all possible spans within the reference passage, which is different from cloze-style reading comprehension datasets Hill et al., 2016) in which answers are single words or entities. Moreover, SQuAD requires different forms of logical reasoning to infer the answer (Rajpurkar et al., 2016) .",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 510,
"end": 528,
"text": "Hill et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 658,
"end": 682,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rapid progress has been made since the release of the SQuAD dataset. Wang and Jiang (2016b) build question-aware passage representation with match-LSTM (Wang and Jiang, 2016a) , and predict answer boundaries in the passage with pointer networks (Vinyals et al., 2015) . Seo et al. (2016) introduce bi-directional attention flow networks to model question-passage pairs at multiple levels of granularity. Xiong et al. (2016) propose dynamic co-attention networks which attend the question and passage simultaneously and iteratively refine answer predictions. Lee et al. (2016) and Yu et al. (2016) predict answers by ranking continuous text spans within passages.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "Wang and Jiang (2016b)",
"ref_id": "BIBREF28"
},
{
"start": 152,
"end": 175,
"text": "(Wang and Jiang, 2016a)",
"ref_id": "BIBREF27"
},
{
"start": 245,
"end": 267,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 270,
"end": 287,
"text": "Seo et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 404,
"end": 423,
"text": "Xiong et al. (2016)",
"ref_id": "BIBREF31"
},
{
"start": 558,
"end": 575,
"text": "Lee et al. (2016)",
"ref_id": "BIBREF11"
},
{
"start": 580,
"end": 596,
"text": "Yu et al. (2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by Wang and Jiang (2016b) , we introduce a gated self-matching network, illustrated in Figure 1 , an end-to-end neural network model for reading comprehension and question answering. Our model consists of four parts: 1) the recurrent network encoder to build representation for questions and passages separately, 2) the gated matching layer to match the question and passage, 3) the self-matching layer to aggregate information from the whole passage, and 4) the pointernetwork based answer boundary prediction layer. The key contributions of this work are three-fold.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "Wang and Jiang (2016b)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, we propose a gated attention-based recurrent network, which adds an additional gate to the attention-based recurrent networks Rockt\u00e4schel et al., 2015; Wang and Jiang, 2016a) , to account for the fact that words in the passage are of different importance to answer a particular question for reading comprehension and question answering. In Wang and Jiang (2016a) , words in a passage with their corresponding attention-weighted question context are en-coded together to produce question-aware passage representation. By introducing a gating mechanism, our gated attention-based recurrent network assigns different levels of importance to passage parts depending on their relevance to the question, masking out irrelevant passage parts and emphasizing the important ones.",
"cite_spans": [
{
"start": 133,
"end": 158,
"text": "Rockt\u00e4schel et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 159,
"end": 181,
"text": "Wang and Jiang, 2016a)",
"ref_id": "BIBREF27"
},
{
"start": 347,
"end": 369,
"text": "Wang and Jiang (2016a)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, we introduce a self-matching mechanism, which can effectively aggregate evidence from the whole passage to infer the answer. Through a gated matching layer, the resulting question-aware passage representation effectively encodes question information for each passage word. However, recurrent networks can only memorize limited passage context in practice despite its theoretical capability. One answer candidate is often unaware of the clues in other parts of the passage. To address this problem, we propose a self-matching layer to dynamically refine passage representation with information from the whole passage. Based on question-aware passage representation, we employ gated attention-based recurrent networks on passage against passage itself, aggregating evidence relevant to the current passage word from every word in the passage. A gated attention-based recurrent network layer and self-matching layer dynamically enrich each passage representation with information aggregated from both question and passage, enabling subsequent network to better predict answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lastly, the proposed method yields state-of-theart results against strong baselines. Our single model achieves 71.3% exact match accuracy on the hidden SQuAD test set, while the ensemble model further boosts the result to 75.9%. At the time 1 of submission of this paper, our model holds the first place on the SQuAD leader board.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For reading comprehension style question answering, a passage P and question Q are given, our task is to predict an answer A to question Q based on information found in P. The SQuAD dataset further constrains answer A to be a continuous subspan of passage P. Answer A often includes nonentities and can be much longer phrases. This setup challenges us to understand and reason about both the question and passage in order to infer the answer. Table 1 shows a simple example from the SQuAD dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "1 On Feb. 6, 2017",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Passage: Tesla later approached Morgan to ask for more funds to build a more powerful transmitter. When asked where all the money had gone, Tesla responded by saying that he was affected by the Panic of 1901, which he (Morgan) had caused. Morgan was shocked by the reminder of his part in the stock market crash and by Tesla's breach of contract by asking for more funds. Tesla wrote another plea to Morgan, but it was also fruitless. Morgan still owed Tesla money on the original agreement, and Tesla had been facing foreclosure even before construction of the tower began. Question: On what did Tesla blame for the loss of the initial money? Answer: Panic of 1901 3 Gated Self-Matching Networks Figure 1 gives an overview of the gated selfmatching networks. First, the question and passage are processed by a bi-directional recurrent network (Mikolov et al., 2010) separately. We then match the question and passage with gated attention-based recurrent networks, obtaining question-aware representation for the passage. On top of that, we apply self-matching attention to aggregate evidence from the whole passage and refine the passage representation, which is then fed into the output layer to predict the boundary of the answer span.",
"cite_spans": [
{
"start": 844,
"end": 866,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 697,
"end": 705,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "Consider a question Q = {w Q t } m t=1 and a passage P = {w P t } n t=1 . We first convert the words to their respective word-level embeddings ({e Q t } m t=1 and {e P t } n t=1 ) and character-level embeddings ({c Q t } m t=1 and {c P t } n t=1 ). The character-level embeddings are generated by taking the final hidden states of a bi-directional recurrent neural network (RNN) applied to embeddings of characters in the token. Such character-level embeddings have been shown to be helpful to deal with out-ofvocab (OOV) tokens. We then use a bi-directional RNN to produce new representation u Q 1 , . . . , u Q m and u P 1 , . . . , u P n of all words in the question and passage respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question and Passage Encoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u Q t = BiRNN Q (u Q t\u22121 , [e Q t , c Q t ])",
"eq_num": "(1)"
}
],
"section": "Question and Passage Encoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u P t = BiRNN P (u P t\u22121 , [e P t , c P t ])",
"eq_num": "(2)"
}
],
"section": "Question and Passage Encoder",
"sec_num": "3.1"
},
{
"text": "We choose to use Gated Recurrent Unit (GRU) in our experiment since it performs similarly to LSTM (Hochreiter and Schmidhuber, 1997) Figure 1 : Gated Self-Matching Networks structure overview.",
"cite_spans": [
{
"start": 98,
"end": 132,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 133,
"end": 141,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question and Passage Encoder",
"sec_num": "3.1"
},
{
"text": "We propose a gated attention-based recurrent network to incorporate question information into passage representation. It is a variant of attentionbased recurrent networks, with an additional gate to determine the importance of information in the passage regarding a question. Given question and passage representation {u Q t } m t=1 and {u P t } n t=1 , Rockt\u00e4schel et al. (2015) propose generating sentence-pair representation {v P t } n t=1 via soft-alignment of words in the question and passage as follows:",
"cite_spans": [
{
"start": 354,
"end": 379,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v P t = RNN(v P t\u22121 , c t )",
"eq_num": "(3)"
}
],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "c t = att(u Q , [u P t , v P t\u22121 ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "is an attentionpooling vector of the whole question (u Q ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t j = v T tanh(W Q u u Q j + W P u u P t + W P v v P t\u22121 ) a t i = exp(s t i )/\u03a3 m j=1 exp(s t j ) c t = \u03a3 m i=1 a t i u Q i",
"eq_num": "(4)"
}
],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "Each passage representation v P t dynamically incorporates aggregated matching information from the whole question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "Wang and Jiang (2016a) introduce match-LSTM, which takes u P t as an additional input into the recurrent network:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v P t = RNN(v P t\u22121 , [u P t , c t ])",
"eq_num": "(5)"
}
],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "To determine the importance of passage parts and attend to the ones relevant to the question, we add another gate to the input ([u P t , c t ]) of RNN:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g t = sigmoid(W g [u P t , c t ]) [u P t , c t ] * = g t [u P t , c t ]",
"eq_num": "(6)"
}
],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "Different from the gates in LSTM or GRU, the additional gate is based on the current passage word and its attention-pooling vector of the question, which focuses on the relation between the question and current passage word. The gate effectively model the phenomenon that only parts of the passage are relevant to the question in reading comprehension and question answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "[u P t , c t ] * is utilized in subsequent calculations instead of [u P t , c t ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "We call this gated attention-based recurrent networks. It can be applied to variants of RNN, such as GRU and LSTM. We also conduct experiments to show the effectiveness of the additional gate on both GRU and LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gated Attention-based Recurrent Networks",
"sec_num": "3.2"
},
{
"text": "Through gated attention-based recurrent networks, question-aware passage representation {v P t } n t=1 is generated to pinpoint important parts in the passage. One problem with such representation is that it has very limited knowledge of context. One answer candidate is often oblivious to important cues in the passage outside its surrounding window. Moreover, there exists some sort of lexical or syntactic divergence between the question and passage in the majority of SQuAD dataset (Rajpurkar et al., 2016) . Passage context is necessary to infer the answer. To address this problem, we propose directly matching the question-aware passage representation against itself. It dynamically collects evidence from the whole passage for words in passage and encodes the evidence relevant to the current passage word and its matching question information into the passage representation h P t :",
"cite_spans": [
{
"start": 486,
"end": 510,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h P t = BiRNN(h P t\u22121 , [v P t , c t ])",
"eq_num": "(7)"
}
],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "c t = att(v P , v P t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "is an attention-pooling vector of the whole passage (v P ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t j = v T tanh(W P v v P j + WP v v P t ) a t i = exp(s t i )/\u03a3 n j=1 exp(s t j ) c t = \u03a3 n i=1 a t i v P i",
"eq_num": "(8)"
}
],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "An additional gate as in gated attention-based recurrent networks is applied to [v P t , c t ] to adaptively control the input of RNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "Self-matching extracts evidence from the whole passage according to the current passage word and question information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Matching Attention",
"sec_num": "3.3"
},
{
"text": "We follow Wang and Jiang (2016b) and use pointer networks (Vinyals et al., 2015) to predict the start and end position of the answer. In addition, we use an attention-pooling over the question representation to generate the initial hidden vector for the pointer network. Given the passage representation {h P t } n t=1 , the attention mechanism is utilized as a pointer to select the start position (p 1 ) and end position (p 2 ) from the passage, which can be formulated as follows:",
"cite_spans": [
{
"start": 58,
"end": 80,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t j = v T tanh(W P h h P j + W a h h a t\u22121 ) a t i = exp(s t i )/\u03a3 n j=1 exp(s t j ) p t = arg max(a t 1 , . . . , a t n )",
"eq_num": "(9)"
}
],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "Here h a t\u22121 represents the last hidden state of the answer recurrent network (pointer network). The input of the answer recurrent network is the attention-pooling vector based on current predicted probability a t :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c t = \u03a3 n i=1 a t i h P i h a t = RNN(h a t\u22121 , c t )",
"eq_num": "(10)"
}
],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "When predicting the start position, h a t\u22121 represents the initial hidden state of the answer recurrent network. We utilize the question vector r Q as the initial state of the answer recurrent network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "r Q = att(u Q , V Q r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "is an attention-pooling vector of the question based on the parameter V Q r :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s j = v T tanh(W Q u u Q j + W Q v V Q r ) a i = exp(s i )/\u03a3 m j=1 exp(s j ) r Q = \u03a3 m i=1 a i u Q i",
"eq_num": "(11)"
}
],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "To train the network, we minimize the sum of the negative log probabilities of the ground truth start and end position by the predicted distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output Layer",
"sec_num": "3.4"
},
{
"text": "We specially focus on the SQuAD dataset to train and evaluate our model, which has garnered a huge attention over the past few months. SQuAD is composed of 100,000+ questions posed by crowd workers on 536 Wikipedia articles. The dataset is randomly partitioned into a training set (80%), a development set (10%), and a test set (10%). The answer to every question is a segment of the corresponding passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 4.1 Implementation Details",
"sec_num": "4"
},
{
"text": "We use the tokenizer from Stanford CoreNLP (Manning et al., 2014) to preprocess each passage and question. The Gated Recurrent Unit variant of LSTM is used throughout our model. For word embedding, we use pretrained case-sensitive GloVe embeddings 2 (Pennington et al., 2014) for both questions and passages, and it is fixed during training; We use zero vectors to represent all out-of-vocab words. We utilize 1 layer of bi-directional GRU to compute character-level embeddings and 3 layers of bi-directional GRU to encode questions and passages, the gated attention-based recurrent network for question and passage matching is also encoded bidirectionally in our experiment. The hidden vector length is set to 75 for all layers. The hidden size used to compute attention scores is also 75. We also apply dropout (Srivastava et al., 2014) between layers with a dropout rate of 0.2. The model is optimized with AdaDelta (Zeiler, 2012) with an initial learning rate of 1. The \u03c1 and used in AdaDelta are 0.95 and 1e \u22126 respectively.",
"cite_spans": [
{
"start": 813,
"end": 838,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 4.1 Implementation Details",
"sec_num": "4"
},
{
"text": "Single model EM / F1 EM / F1 LR Baseline (Rajpurkar et al., 2016) 40.0 / 51.0 40.4 / 51.0 Dynamic Chunk Reader (Yu et al., 2016) 62.5 / 71.2 62.5 / 71.0 Match-LSTM with Ans-Ptr (Wang and Jiang, 2016b) 64.1 / 73.9 64.7 / 73.7 Dynamic Coattention Networks (Xiong et al., 2016) 65.4 / 75.6 66.2 / 75.9 RaSoR (Lee et al., 2016) 66.4 / 74.9 -/ -BiDAF (Seo et al., 2016) 68.0 / 77.3 68.0 / 77.3 jNet (Zhang et al., 2017) -/ -68.7 / 77.4 Multi-Perspective Matching -/ -68.9 / 77.8 FastQA (Weissenborn et al., 2017) -/ -68.4 / 77.1 FastQAExt (Weissenborn et al., 2017) -/ -70.8 / 78.9 R-NET 71.1 / 79.5 71.3 / 79.7 Ensemble model Fine-Grained Gating (Yang et al., 2016) 62.4 / 73.4 62.5 / 73.3 Match-LSTM with Ans-Ptr (Wang and Jiang, 2016b) 67.6 / 76.8 67.9 / 77.0 RaSoR (Lee et al., 2016) 68.2 / 76.7 -/ -Dynamic Coattention Networks (Xiong et al., 2016) 70.3 / 79.4 71.6 / 80.4 BiDAF (Seo et al., 2016) 73.3 / 81.1 73.3 / 81.1 Multi-Perspective Matching -/ -73.8 / 81.3 R-NET 75.6 / 82.8 75.9 / 82.9 Human Performance (Rajpurkar et al., 2016) 80.3 / 90.5 77.0 / 86.8 Table 2 : The performance of our gated self-matching networks (R-NET) and competing approaches 4 .",
"cite_spans": [
{
"start": 41,
"end": 65,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 111,
"end": 128,
"text": "(Yu et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 254,
"end": 274,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 305,
"end": 323,
"text": "(Lee et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 346,
"end": 364,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 394,
"end": 414,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF36"
},
{
"start": 481,
"end": 507,
"text": "(Weissenborn et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 534,
"end": 560,
"text": "(Weissenborn et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 642,
"end": 661,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 764,
"end": 782,
"text": "(Lee et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 828,
"end": 848,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 879,
"end": 897,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 1013,
"end": 1037,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1062,
"end": 1069,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dev Set Test Set",
"sec_num": null
},
{
"text": "Single Model EM / F1 Gated Self-Matching (GRU) 71.1 / 79.5 -Character embedding 69.6 / 78.6 -Gating 67.9 / 77.1 -Self-Matching 67.6 / 76.7 -Gating, -Self-Matching 65.4 / 74.7 Table 3 : Ablation tests of single model on the SQuAD dev set. All the components significantly (t-test, p < 0.05) improve the model.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dev Set Test Set",
"sec_num": null
},
{
"text": "Two metrics are utilized to evaluate model performance: Exact Match (EM) and F1 score. EM measures the percentage of the prediction that matches one of the ground truth answers exactly. F1 measures the overlap between the prediction and ground truth answers which takes the maximum F1 over all of the ground truth answers. The scores on dev set are evaluated by the official script 3 . Since the test set is hidden, we are required to submit the model to Stanford NLP group to obtain the test scores. dev and test set of our model and competing approaches 4 . The ensemble model consists of 20 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 20 runs for each question. As we can see, our method clearly outperforms the baseline and several strong state-of-the-art systems for both single model and ensembles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.2"
},
{
"text": "We do ablation tests on the dev set to analyze the contribution of components of gated self-matching networks. As illustrated in Table 3 , the gated Figure 2 : Part of the attention matrices for self-matching. Each row is the attention weights of the whole passage for the current passage word. The darker the color is the higher the weight is. Some key evidence relevant to the question-passage tuple is more encoded into answer candidates.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": null
},
{
"start": 149,
"end": 157,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.3"
},
{
"text": "attention-based recurrent network (GARNN) and self-matching attention mechanism positively contribute to the final results of gated self-matching networks. Removing self-matching results in 3.5 point EM drop, which reveals that information in the passage plays an important role. Characterlevel embeddings contribute towards the model's performance since it can better handle out-ofvocab or rare words. To show the effectiveness of GARNN for variant RNNs, we conduct experiments on the base model (Wang and Jiang, 2016b) of different variant RNNs. The base model match the question and passage via a variant of attentionbased recurrent network (Wang and Jiang, 2016a) , and employ pointer networks to predict the answer. Character-level embeddings are not utilized. As shown in Table 4 , the gate introduced in question and passage matching layer is helpful for both GRU and LSTM on the SQuAD dataset.",
"cite_spans": [
{
"start": 497,
"end": 520,
"text": "(Wang and Jiang, 2016b)",
"ref_id": "BIBREF28"
},
{
"start": 644,
"end": 667,
"text": "(Wang and Jiang, 2016a)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 778,
"end": 785,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.3"
},
{
"text": "To show the ability of the model for encoding evidence from the passage, we draw the align-ment of the passage against itself in self-matching. The attention weights are shown in Figure 2 , in which the darker the color is the higher the weight is. We can see that key evidence aggregated from the whole passage is more encoded into the answer candidates. For example, the answer \"Egg of Columbus\" pays more attention to the key information \"Tesla\", \"device\" and the lexical variation word \"known\" that are relevant to the question-passage tuple. The answer \"world classic of epoch-making oratory\" mainly focuses on the evidence \"Michael Mullet\", \"speech\" and lexical variation word \"considers\". For other words, the attention weights are more evenly distributed between evidence and some irrelevant parts. Selfmatching do adaptively aggregate evidence for words in passage.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Encoding Evidence from Passage",
"sec_num": "5.1"
},
{
"text": "To further analyse the model's performance, we analyse the F1 score for different question types (Figure 3(a)), different answer lengths (Figure 3(b)), different passage lengths (Figure 3(c)) and different question lengths (Figure 3(d)) of our model and its ablation models. As we can see, all four models show the same trend. The questions are split into different groups based on a set of question words we have defined, including \"what\", \"how\", \"who\", \"when\", \"which\", \"where\", and \"why\". Our model is better at \"when\" and \"who\" questions, but performs poorly on \"why\" questions. This is mainly because the answers to \"why\" questions can be very diverse, and they are not restricted to any certain type of phrase. From Figure 3(b), the performance of our model clearly drops as answer length increases: longer answers are harder to predict. From Figures 3(c) and 3(d), we find that performance remains stable as passage and question length increase; the noticeable fluctuation on longer passages and questions is mainly because their proportion of the data is too small. Our model is largely agnostic to long passages and focuses on the important parts of the passage.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 109,
"text": "(Figure 3(a)",
"ref_id": "FIGREF0"
},
{
"start": 138,
"end": 151,
"text": "(Figure 3(b)",
"ref_id": "FIGREF0"
},
{
"start": 181,
"end": 193,
"text": "(Figure 3(c)",
"ref_id": "FIGREF0"
},
{
"start": 227,
"end": 239,
"text": "(Figure 3(d)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "5.2"
},
{
"text": "Reading Comprehension and Question Answering Datasets Benchmark datasets play an important role in the recent progress of reading comprehension and question answering research. Existing datasets can be classified into two categories according to whether they are manually labeled. Those that are labeled by humans are usually of high quality (Richardson et al., 2013; Berant et al., 2014; Yang et al., 2015), but are too small for training modern data-intensive models. Those that are automatically generated from naturally occurring data can be very large (Hill et al., 2016), which allows the training of more expressive models. However, they are in cloze style, in which the goal is to predict the missing word (often a named entity) in a passage. Moreover, Chen et al. (2016) have shown that the CNN / Daily News dataset (Hermann et al., 2015) requires less reasoning than previously thought, and conclude that performance on it is almost saturated.",
"cite_spans": [
{
"start": 337,
"end": 362,
"text": "(Richardson et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 363,
"end": 383,
"text": "Berant et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 384,
"end": 402,
"text": "Yang et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 551,
"end": 570,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Different from the above datasets, SQuAD provides a large and high-quality dataset. The answers in SQuAD often include non-entities and can be much longer phrases, which makes it more challenging than cloze-style datasets. Moreover, Rajpurkar et al. (2016) show that the dataset retains a diverse set of answers and requires different forms of logical reasoning, including multi-sentence reasoning. MS MARCO (Nguyen et al., 2016) is also a large-scale dataset. The questions in the dataset are real anonymized queries issued through Bing or Cortana, and the passages are related web pages. For each question in the dataset, several related passages are provided. However, the answers are human generated, unlike SQuAD, where answers must be a span of the passage.",
"cite_spans": [
{
"start": 226,
"end": 249,
"text": "Rajpurkar et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 401,
"end": 422,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "End-to-end Neural Networks for Reading Comprehension Along with cloze-style datasets, several powerful deep learning models (Hill et al., 2016; Kadlec et al., 2016; Sordoni et al., 2016; Cui et al., 2016; Trischler et al., 2016; Shen et al., 2016) have been introduced to solve this problem. Hermann et al. (2015) first introduce the attention mechanism into reading comprehension. Hill et al. (2016) propose a window-based memory network for the CBT dataset. Kadlec et al. (2016) introduce pointer networks with one attention step to predict the blanked-out entities. Sordoni et al. (2016) propose an iterative alternating attention mechanism to better model the links between question and passage. Trischler et al. (2016) solve the cloze-style question answering task by combining an attentive model with a reranking model. Dhingra et al. (2016) propose iteratively selecting important parts of the passage by multiplying a gating function with the question representation. Cui et al. (2016) propose a two-way attention mechanism to encode the passage and question mutually. Shen et al. (2016) propose iteratively inferring the answer with a dynamic number of reasoning steps, trained with reinforcement learning.",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "Hill et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 143,
"end": 163,
"text": "Kadlec et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 164,
"end": 185,
"text": "Sordoni et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 186,
"end": 203,
"text": "Cui et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 204,
"end": 227,
"text": "Trischler et al., 2016;",
"ref_id": "BIBREF25"
},
{
"start": 228,
"end": 246,
"text": "Shen et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 377,
"end": 395,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 450,
"end": 470,
"text": "Kadlec et al. (2016)",
"ref_id": "BIBREF10"
},
{
"start": 560,
"end": 581,
"text": "Sordoni et al. (2016)",
"ref_id": "BIBREF23"
},
{
"start": 691,
"end": 714,
"text": "Trischler et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 941,
"end": 958,
"text": "Cui et al. (2016)",
"ref_id": "BIBREF5"
},
{
"start": 1042,
"end": 1060,
"text": "Shen et al. (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Neural network-based models have demonstrated their effectiveness on the SQuAD dataset. Wang and Jiang (2016b) combine match-LSTM and pointer networks to produce the boundary of the answer. Xiong et al. (2016) and Seo et al. (2016) employ variants of the coattention mechanism to match the question and passage mutually. Xiong et al. (2016) propose a dynamic pointer network to iteratively infer the answer. Yu et al. (2016) and Lee et al. (2016) solve SQuAD by ranking continuous text spans within the passage. Yang et al. (2016) present a fine-grained gating mechanism to dynamically combine word-level and character-level representations and model the interaction between questions and passages. Other work proposes matching the context of the passage with the question from multiple perspectives.",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "Xiong et al. (2016)",
"ref_id": "BIBREF31"
},
{
"start": 206,
"end": 223,
"text": "Seo et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 305,
"end": 324,
"text": "Xiong et al. (2016)",
"ref_id": "BIBREF31"
},
{
"start": 392,
"end": 408,
"text": "Yu et al. (2016)",
"ref_id": "BIBREF34"
},
{
"start": 413,
"end": 430,
"text": "Lee et al. (2016)",
"ref_id": "BIBREF11"
},
{
"start": 492,
"end": 510,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Different from the above models, we introduce self-matching attention in our model. It dynamically refines the passage representation by looking over the whole passage and aggregating evidence relevant to the current passage word and the question, allowing our model to make full use of passage information. Attending over word context with learned weights has been proposed in several prior works. Ling et al. (2015) propose treating window-based contextual words differently depending on the word and its relative position. Cheng et al. (2016) propose an LSTM network that encodes words in a sentence by considering the relation between the current token being processed and its past tokens in memory. Parikh et al. (2016) apply this method to encode words in a sentence according to word form and distance. Since passage information relevant to the question is more helpful for inferring the answer in reading comprehension, we apply self-matching on top of the question-aware representation produced by the gated attention-based recurrent networks. This helps our model focus mainly on question-relevant evidence in the passage and dynamically look over the whole passage to aggregate evidence.",
"cite_spans": [
{
"start": 374,
"end": 392,
"text": "Ling et al. (2015)",
"ref_id": "BIBREF12"
},
{
"start": 504,
"end": 523,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Another key component of our model is the attention-based recurrent network, which has demonstrated success in a wide range of tasks. Bahdanau et al. (2014) first propose attention-based recurrent networks to infer word-level alignment when generating the target word. Hermann et al. (2015) introduce word-level attention into reading comprehension to model the interaction between questions and passages. Rockt\u00e4schel et al. (2015) and Wang and Jiang (2016a) propose determining entailment via word-by-word matching. Our gated attention-based recurrent network is a variant of the attention-based recurrent network, with an additional gate that models the fact that different parts of the passage are of different importance to a particular question in reading comprehension and question answering.",
"cite_spans": [
{
"start": 382,
"end": 407,
"text": "Rockt\u00e4schel et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 412,
"end": 434,
"text": "Wang and Jiang (2016a)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we present the gated self-matching networks for reading comprehension and question answering. We introduce the gated attention-based recurrent networks and the self-matching attention mechanism to obtain representations for the question and passage, and then use pointer networks to locate answer boundaries. Our model achieves state-of-the-art results on the SQuAD dataset, outperforming several strong competing systems. For future work, we plan to apply the gated self-matching networks to other reading comprehension and question answering datasets, such as the MS MARCO dataset (Nguyen et al., 2016).",
"cite_spans": [
{
"start": 593,
"end": 614,
"text": "(Nguyen et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Downloaded from http://nlp.stanford.edu/data/glove.840B.300d.zip.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Extracted from the SQuAD leaderboard http://stanford-qa.com on Feb. 6, 2017.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank all the anonymous reviewers for their helpful comments. We thank Pranav ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling biological processes for reading comprehension",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Srikumar",
"suffix": ""
},
{
"first": "Pei-Chun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Abby",
"middle": [],
"last": "Vander Linden",
"suffix": ""
},
{
"first": "Brittany",
"middle": [],
"last": "Harding",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A thorough examination of the CNN/Daily Mail reading comprehension task",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1724-1734.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Attention-overattention neural networks for reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2016. Attention-over-attention neural networks for reading comprehension. CoRR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Gated-attention readers for text comprehension",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. CoRR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015. pages 1693-1701.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The goldilocks principle: Reading children's books with explicit memory representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Text understanding with the attention sum reader network",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bajgar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning recurrent span representations for extractive question answering",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01436"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. 2016. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Not all contexts are created equal: Better word representations with variable attention",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Ramon",
"middle": [],
"last": "Fermandez",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
},
{
"first": "Chu-Cheng",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Yulia Tsvetkov, Silvio Amir, Ramon Fermandez, Chris Dyer, Alan W. Black, Isabel Trancoso, and Chu-Cheng Lin. 2015. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Bauer",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL (System Demonstrations)",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations). pages 55-60.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Burget",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR abs/1611.09268.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A decomposable attention model for natural language inference",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ankur",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur P. Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Renshaw",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. pages 193-203.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Reasoning about entailment with neural attention",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Rockt\u00e4schel, Edward Grefenstette, Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. CoRR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ReasoNet: Learning to stop reading in machine comprehension",
"authors": [
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 colocated with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2016. ReasoNet: Learning to stop reading in machine comprehension. In Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Iterative alternating neural attention for machine reading",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. CoRR abs/1606.02245.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Natural language comprehension with the epireader",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. 2016. Natural language comprehension with the epireader. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada. pages 2692-2700.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning natural language inference with LSTM",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2016a. Learning natural language inference with LSTM. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Machine comprehension using match-lstm and answer pointer",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1608.07905"
]
},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang and Jing Jiang. 2016b. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multi-perspective context matching for machine comprehension",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.04211"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Fastqa: A simple and efficient neural architecture for question answering",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Wiese",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Seiffe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.04816"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neural architecture for question answering. arXiv preprint arXiv:1703.04816.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dynamic coattention networks for question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01604"
]
},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Wikiqa: A challenge dataset for open-domain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP. Citeseer",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. Wikiqa: A challenge dataset for open-domain question answering. In Proceedings of EMNLP. Citeseer, pages 2013-2018.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Words or characters? fine-grained gating for reading comprehension",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Words or characters? fine-grained gating for reading comprehension. CoRR abs/1611.01724.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "End-to-end reading comprehension with dynamic answer chunk ranking",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kazi",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.09996"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. 2016. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR abs/1212.5701.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Exploring question understanding and adaptation in neuralnetwork-based question answering",
"authors": [
{
"first": "Junbei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lirong",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.04617"
]
},
"num": null,
"urls": [],
"raw_text": "Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, and Hui Jiang. 2017. Exploring question understanding and adaptation in neural-network-based question answering. arXiv preprint arXiv:1703.04617.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Model performance on different question types (a), different answer lengths (b), different passage lengths (c), and different question lengths (d). Each point on the x-axis of figures (c) and (d) represents the data whose passage or question length falls between the value of that point and the value of the previous point.",
"uris": null
},
"TABREF0": {
"text": "An example from the SQuAD dataset.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "but is computationally cheaper.",
"content": "<table><tr><td>Output Layer</td><td>Start</td><td>End</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>\u210e 1</td><td>\u210e 2</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>\u210e 1</td><td>\u210e 2</td><td>\u210e 3</td><td>\u2026</td><td>\u210e</td><td/><td/><td/><td/></tr><tr><td>Passage</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Self-Matching Layer</td><td/><td>Attention</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>1</td><td>2</td><td>3</td><td>\u2026</td><td>1</td><td>2</td><td>3</td><td>\u2026</td><td/></tr><tr><td>Question and Passage</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Matching Layer</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Question</td><td/><td>Attention</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>Vector</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>1</td><td>2</td><td>\u2026</td><td/><td>1</td><td>2</td><td>3</td><td>\u2026</td><td/></tr><tr><td>Question and Passage</td><td>When</td><td>was</td><td>\u2026</td><td>tested</td><td>The</td><td>delay</td><td>in</td><td>\u2026</td><td>test</td></tr><tr><td>GRU Layer</td><td/><td colspan=\"2\">Question</td><td/><td/><td/><td>Passage</td><td/><td/></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"content": "<table><tr><td>Single Model</td><td>EM / F1</td></tr><tr><td>Base model (GRU)</td><td>64.5 / 74.1</td></tr><tr><td>+Gating</td><td>66.2 / 75.8</td></tr><tr><td>Base model (LSTM)</td><td>64.2 / 73.9</td></tr><tr><td>+Gating</td><td>66.0 / 75.6</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Effectiveness of gated attention-based recurrent networks for both GRU and LSTM.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}