{
"paper_id": "N19-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:00:12.854044Z"
},
"title": "BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Sydney",
"location": {}
},
"email": ""
},
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tencent Robotics X",
"location": {
"country": "China"
}
},
"email": "mfang@tencent.com"
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Sydney",
"location": {}
},
"email": "dacheng.tao@sydney.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multi-hop reasoning question answering requires deep comprehension of relationships between various documents and queries. We propose a Bi-directional Attention Entity Graph Convolutional Network (BAG), leveraging relationships between nodes in an entity graph and attention information between a query and the entity graph, to solve this task. Graph convolutional networks are used to obtain a relation-aware representation of nodes for entity graphs built from documents with multi-level features. Bidirectional attention is then applied on graphs and queries to generate a query-aware nodes representation, which will be used for the final prediction. Experimental evaluation shows BAG achieves stateof-the-art accuracy performance on the QAngaroo WIKIHOP dataset.",
"pdf_parse": {
"paper_id": "N19-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Multi-hop reasoning question answering requires deep comprehension of relationships between various documents and queries. We propose a Bi-directional Attention Entity Graph Convolutional Network (BAG), leveraging relationships between nodes in an entity graph and attention information between a query and the entity graph, to solve this task. Graph convolutional networks are used to obtain a relation-aware representation of nodes for entity graphs built from documents with multi-level features. Bidirectional attention is then applied on graphs and queries to generate a query-aware nodes representation, which will be used for the final prediction. Experimental evaluation shows BAG achieves stateof-the-art accuracy performance on the QAngaroo WIKIHOP dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question Answering (QA) and Machine Comprehension (MC) tasks have drawn significant attention during the past years. The proposal of large-scale single-document-based QA/MC datasets, such as SQuAD (Rajpurkar et al., 2016) , CNN/Daily mail (Hermann et al., 2015) , makes training available for end-to-end deep neural models, such as BiDAF (Seo et al., 2016) , DCN (Xiong et al., 2016) and SAN (Liu et al., 2017) . However, gaps still exist between these datasets and real-world applications. For example, reasoning is constrained to a single paragraph, or even part of it. Extended work was done to meet practical demand, such as DrQA (Chen et al., 2017) answering a SQuAD question based on the whole Wikipedia instead of single paragraph. Besides, latest largescale datasets, e.g. TriviaQA (Joshi et al., 2017) and NarrativeQA (Ko\u010disk\u1ef3 et al., 2018) , address this limitation by introducing multiple documents, ensuring reasoning cannot be done within local information. Although those datasets are fairly challenging, reasoning are within one document.",
"cite_spans": [
{
"start": 197,
"end": 221,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 239,
"end": 261,
"text": "(Hermann et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 338,
"end": 356,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 363,
"end": 383,
"text": "(Xiong et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 392,
"end": 410,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 634,
"end": 653,
"text": "(Chen et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 790,
"end": 810,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 827,
"end": 849,
"text": "(Ko\u010disk\u1ef3 et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In many scenarios, we need to comprehend the relationships of entities across documents before answering questions. Therefore, reading comprehension tasks with multiple hops were proposed to make it available for machine to tackle such problems, e.g. QAngaroo task (Welbl et al., 2018) . Each sample in QAngaroo contains multiple supporting documents, and the goal is selecting the correct answer from a set of candidates for a query. Most queries cannot be answered depending on a single document, and multi-step reasoning chains across documents are needed. Therefore, it is possible that understanding a part of paragraphs loses effectiveness for multi-hop inference, which posts a huge challenge for previous models. Some baseline models, e.g. BiDAF (Seo et al., 2016) and FastQA (Weissenborn et al., 2017) , which are popular for single-document QA, suffer dramatical accuracy decline in this task.",
"cite_spans": [
{
"start": 265,
"end": 285,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 754,
"end": 772,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 784,
"end": 810,
"text": "(Weissenborn et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a new graph-based QA model, named Bi-directional Attention Entity Graph convolutional network (BAG). Documents are transformed into a graph in which nodes are entities and edges are relationships between them. The graph is then imported into graph convolutional networks (GCNs) to learn relation-aware representation of nodes. Furthermore, we introduce a new bi-directional attention between the graph and a query with multi-level features to derive the mutual information for final prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results demonstrate that BAG achieves state-of-the-art performance on the WIK-IHOP dataset. Ablation test also shows BAG benefits from the bi-directional attention, multi-level features and graph convolutional networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions can be summarized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Applying a bi-directional attention between graphs and queries to learn query-aware representation for reading comprehension. \u2022 Multi-level features are involved to gain comprehensive relationship representation for graph nodes during processing of GCNs. Figure 1 : Framework of BAG model.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 265,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently coreference and graph-based models are studied for multi-hop QA (Dhingra et al., 2018; Santoro et al., 2017) . Coref-GRU (Dhingra et al., 2018) uses coreferences among tokens in documents. However, it is still limited by the longdistance relation propagation capability of RNNs. Besides, graph is proved to be an efficient way to represent complex relationships among objects and derive relational information (Santoro et al., 2017) . MHQA-GRN (Song et al., 2018) and Entity-GCN (De Cao et al., 2018) construct entity graphs based on documents to learn more compact representation for multi-hop reasoning and derive answers from graph networks. However, both of them care less about input features and the attention between queries and graph nodes. Attention has been proven to be an essential mechanism to promote the performance of NLP tasks in previous work (Bahdanau et al., 2014; Sukhbaatar et al., 2015) . In addition, bidirectional attention (Seo et al., 2016) shows its superiority to vanilla mutual attention because it provides complementary information to each other for both contexts and queries. However, little work exploits the attention between graphs and queries.",
"cite_spans": [
{
"start": 73,
"end": 95,
"text": "(Dhingra et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 96,
"end": 117,
"text": "Santoro et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 130,
"end": 152,
"text": "(Dhingra et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 419,
"end": 441,
"text": "(Santoro et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 453,
"end": 472,
"text": "(Song et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 870,
"end": 893,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 894,
"end": 918,
"text": "Sukhbaatar et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 958,
"end": 976,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first formally define the multiple-hop QA task, taking QAngaroo (Welbl et al., 2018) WIKIHOP data as an example, There is a set S containing N supporting documents, a query q with M tokens and a set of answer candidates C. Our goal is to find the correct answer index a. Giving a triple-style query q = (country, kepahiang), it means which country does kepahiang belongs to. Then answer candidates are provided, e.g. C = {Indonesia, Malaysia}. There are multiple supporting documents but not all of them are related to reasoning, e.g. Kephiang is a regency in Bengkulu, Bengkulu is one of provinces of Indonesia, Jambi is a province of Indonesia. We can derive the correct candidate is Indonesia, i.e. a = 0, based on reasoning hops in former two documents.",
"cite_spans": [
{
"start": 67,
"end": 87,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BAG Model",
"sec_num": "3"
},
{
"text": "We show the proposed BAG model in Figure 1 . It contains five modules: (1) entity graph construction, (2) multi-level feature layer, (3) GCN layer, (4) bi-directional attention and (5) output layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "BAG Model",
"sec_num": "3"
},
{
"text": "We construct an entity graph based on Entity- GCN (De Cao et al., 2018) , which means all mentions of candidates found out in documents are used as nodes in the graph. Undirected edges are defined according to positional properties of every node pair. There are two kinds of edges included: 1) cross-document edge, for every node pair with the same entity string located in different documents; 2) within-document edge, for every node pair located in the same document.",
"cite_spans": [
{
"start": 46,
"end": 71,
"text": "GCN (De Cao et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Graph Construction",
"sec_num": "3.1"
},
{
"text": "Nodes in an entity graph can be found out via simple string matching. This approach can simplify calculation as well as make sure all relevant entities are included in the graph. Picked out along possible reasoning chains during dataset generating (Welbl et al., 2018) , answer candidates have contained all related entities for answering. Finally, We can obtain a set of T nodes {n i }, 1 \u2264 i \u2264 T and corresponding edges among these nodes via above procedures.",
"cite_spans": [
{
"start": 248,
"end": 268,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Graph Construction",
"sec_num": "3.1"
},
{
"text": "We represent both nodes and queries using multilevel features as shown in Figure 1 (2). We first use pretrained word embeddings to represent tokens, such as GLoVe (Pennington et al., 2014) because nodes and queries are composed of tokens. Then contextual-level feature is used to offset the deficiency of GLoVe. Note that only part of tokens are remained during graph construction because we only extract entities as nodes. Thus contextual information around these entities in original document becomes essential for indicating relations between tokens and we use higher-level information for nodes except for token-level feature.",
"cite_spans": [
{
"start": 163,
"end": 188,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-level Features",
"sec_num": "3.2"
},
{
"text": "We use ELMo (Peters et al., 2018) as contextualized word representations, modeling both complex word characteristics and contextual linguistic conditions. It should be noted that ELMo features for nodes are calculated based on original documents, then truncated according to the position indices of nodes. Token-level and context-level features will be concatenated and encoded to make a further comprehension. Since a node may contain more than one token, we average features among tokens to generate a feature vector for each node before encoding it. It will be transformed into the encoded node feature via a 1-layer linear network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Features",
"sec_num": "3.2"
},
{
"text": "Different from nodes, we represent a query by directly using a bidirectional LSTM (Bi-LSTM) whose output in each step is used as encoded query features. And both linear network and LSTM have the same output dimensiond.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Features",
"sec_num": "3.2"
},
{
"text": "In addition, we add two manual features to reflect the semantic properties of tokens, which are named-entity recognition (NER) and part-ofspeech (POS). The complete feature f n \u2208 R T \u00d7d , f q \u2208 R M \u00d7d for both nodes and queries will be the concatenation of corresponding encoded features, NER embedding and POS embedding, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Features",
"sec_num": "3.2"
},
{
"text": "d = d + d P OS + d N ER .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-level Features",
"sec_num": "3.2"
},
{
"text": "In order to realize multi-hop reasoning, we use a Relational Graph Convolutional Network (R-GCN) (Schlichtkrull et al., 2018 ) that can propagate message across different entity nodes in graphs and generate transformed representation of original ones. The R-GCN is employed to handle high-relational data characteristics and make use of different edge types. At lth layer, given the hidden state h l i \u2208 R d of node i, the hidden states h l j \u2208 R d , j \u2208 {N i } and relations R N i of all its neighbors (d is the hidden state dimension), the hidden state in the next layer can be obtained via",
"cite_spans": [
{
"start": 97,
"end": 124,
"text": "(Schlichtkrull et al., 2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "h l+1 i = \u03c3( r\u2208R N i j\u2208N i 1 c i,r W l r h l j + W l 0 h l i ), (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "where c i,r is a normalization constant |N i |, W l r \u2208 R d\u00d7d stands a relation-specific weight matrix and W l 0 \u2208 R d\u00d7d stands a general weight. Similar to Entity-GCN (De Cao et al., 2018), we apply a gate on update vector u l i and hidden state h l i of current node by a linear transformation f s ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w l i = \u03c3(f s (concat(u l i , h l i )),",
"eq_num": "(2)"
}
],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "in which u l i can be obtained via (1) without sigmoid function. Then it will be used for updating weights for the hidden state h l+1 i of the same node in next layer,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "h l+1 i = w l i tanh(u l i ) + (1 \u2212 w l i ) h l i . (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "We stack such networks for L layers in which all parameters are shared. The information of each node will be propagated up to L-node distance away, generating L-hop-reasoning relation-aware representation of nodes. The initial input will be mutli-level nodes features f n = {f n i }, 0 \u2264 i \u2264 T and edges e = {e ij } in the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCN Layer",
"sec_num": "3.3"
},
{
"text": "Bi-directional attention is responsible for generating the mutual information between a graph and a query. In BiDAF (Seo et al., 2016) , attention is applied to sequence data in QA tasks such as supporting texts. However, we also find it works well between graph nodes and queries. It generates query-aware node representations that can provide more reasoning information for prediction. What differs in BAG is that attention is applied for graphs as shown in Figure 1(4) . The similarity matrix S \u2208 R T\u00d7M is calculated via",
"cite_spans": [
{
"start": 116,
"end": 134,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 460,
"end": 471,
"text": "Figure 1(4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S= avg \u22121 f a (concat(h n , f q , h n \u2022 f q )),",
"eq_num": "(4)"
}
],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "in which h n \u2208 R T \u00d7d is all node representations obtained from the last GCN layer, f q \u2208 R M \u00d7d is the query feature matrix after encoding, d is the dimension for both query feature and transformed node representation, f a is a linear transformation, avg \u22121 stands for the average operation in last dimension, and \u2022 is element-wise multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "Unlike the context-to-query attention in BiDAF, we introduce a node-to-query attention a n2q \u2208 R T \u00d7d , which signifies the query tokens that have the highest relevancy for each node using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a n2q = softmax col (S) \u2022 f q ,",
"eq_num": "(5)"
}
],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "where softmax col means performing softmax function across the column, and \u2022 stands for matrix multiplication. At the same time, we also design query-to-node attention a q2n \u2208 R M \u00d7d which signifies the nodes that are most related to each token in the query via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "a q2n = dup(softmax(max col (S))) \u2022 f n , (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "in which max col is the maximum function applied on across column of a matrix, which will transform S into R 1\u00d7M . Then function dup will duplicate it for T times into shape R T \u00d7M . f n \u2208 R T \u00d7d is the original node feature before GCN layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "Our bi-directional attention layer is the concatenation of the original nodes feature, nodes-toquery attention, the element-wise multiplication of nodes feature and nodes-to-query attention, and multiplication of nodes feature and query-to-nodes attention. It should be noted that the relationaware nodes representation from GCN layer is just used to calculate the similarity matrix, and original node feature is used in rest calculation to obtain more general complementary information between graph and query. Edges are not taken in account because they are discrete and combined with nodes in GCN layer. The output is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "a = concat(f n , a n2q , f n \u2022 a n2q , f n \u2022 a q2n ). (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-directional Attention Between a Graph and a Query",
"sec_num": "3.4"
},
{
"text": "A 2-layer fully connect feed-forward network is employed to generate the final prediction, with tanh as the activation function in each layer. Softmax will be applied among the output. It uses query-aware representation of nodes from the attention layer as input, and its output is regarded as the probability of each node becoming answer. Since each candidate may appear several times in the graph, the probability of each candidate is the sum of all corresponding nodes. The loss function is defined as the cross entropy between the gold answer and its predicted probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Output layer",
"sec_num": "3.5"
},
{
"text": "We used both unmasked and masked versions of the QAngaroo WIKIHOP dataset (Welbl et al., 2018) and followed its basic setting, in which masked version used specific tokens such as MASK1 to replace original candidates tokens in documents. There are 43,738, 5,129 and 2,451 examples in the training set, the development set and the test set respectively, and test set is not public.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "In the implementation 1 , we used standard ELMo with a 1024 dimension representation. Besides, 300-dimension GLoVe pre-trained embeddings from 840B Web crawl data were used as token-level features. We used spaCy to provide additional 8-dimension NER and POS features. The dimension of the 1-layer linear network for nodes in multi-level feature module was 512 with tanh as activation function. A 2-layer Bi-LSTM was employed for queries whose hidden state size is 256. Then the feature dimension is d = 512 + 8 + 8 = 528. The GCN layer number L was set as 5. And the unit number of intermediate layers in output layer was 256.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "In addition, the number of nodes and the query length were truncated as 500 and 25 respectively for normalized computation. Dropout with rate 0.2 was applied before GCN layer. Adam optimizer is employed with initial learning rate 2 \u00d7 10 \u22124 , which will be halved for every 5 epochs, With batch size 32. It took about 14 hours for 50-epoch training on two GTX1080Ti GPUs using pre-built and pre-processed graph data generated from original corpus, which can significantly decrease the training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We consider the following baseline models: FastQA (Weissenborn et al., 2017) , BiDAF (Seo et al., 2016) , Coref-GRU (Dhingra et al., 2018) , MHQA-GRN (Song et al., 2018) , Entity-GCN (De Cao et al., 2018) . Former three models are RNN-based models, while coreference relationship is involved in Coref-GRU. The last two models are graph-based models specially designed for multi-hop QA tasks.",
"cite_spans": [
{
"start": 50,
"end": 76,
"text": "(Weissenborn et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 85,
"end": 103,
"text": "(Seo et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 116,
"end": 138,
"text": "(Dhingra et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 150,
"end": 169,
"text": "(Song et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 172,
"end": 204,
"text": "Entity-GCN (De Cao et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "As shown in Table 1 , we collected three kinds of results. The dev and test results stand for the original validation and test sets respectively, noting that the test set is not public. In addition, we divide the original validation set of masked version into two parts evenly, one as a split validation set for tuning model and the other one as a split test set. The test 1 results are for the split test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "Our BAG model achieves state-of-the art per-formance on both unmasked and masked data 2 , with accuracy 69.0% on the test set, which is 1.4% higher in value than previous best model Entity-GCN. It is significant superior than FastQA and BiDAF due to leveraging of relationship information given by the graph and abandoning some distracting context in multiple documents. Although Coref-GRU extends GRU with coreference relationships, it is still not enough for multi-hop because hop relationships are not limited to coreference, entities with the same strings also existed across documents which can be used for reasoning. Both MHQA-GRN and Entity-GCN utilize graph networks to resolve relations among entities in documents. However, the lack of attention and complementary features limits their performance. Therefore our BAG model achieves the best performance under all data configurations. It is noticed that BAG only gets a small promotion on masked data. We argue that the reason is the attention between masks and queries generating less useful information compared to unmasked ones. Moreover, ablation experimental results on unmasked version of the WIKIHOP dev set are given in Table 2 . Once we remove the bi-directional attention and put the concatenation of nodes and queries directly into the output layer, it shows significant performance drop with more than 3%, proving the necessity of attention for reasoning in multi-hop QA. If we use linear-transformationbased single attention a = h n W a f q given in (Luong et al., 2015) instead of our bi-directional attention, the accuracy drops with 2%, which means attention bi-directionality also contributes to the performance improvement. The similar condition will appear if we remove GCN, but use raw nodes as input for the attention layer.",
"cite_spans": [
{
"start": 1522,
"end": 1542,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1187,
"end": 1194,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "In addition, if edge types are no longer considered, which makes R-GCN degraded to vanilla GCN, noticeable accuracy loss about 2% appears. The absence of multi-level features will also cause degradation. The removal of semantic-level features causes slight decline on the performance, including NER and POS features. Further removal of ELMo feature will causes a dramatical drop, which reflects the insufficiency of only using word embeddings as features for nodes and that contextual information is very important. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "We propose a Bi-directional Attention entity Graph convolutional network (BAG) for multihop reasoning QA tasks. Regarding task characteristics, graph convolutional networks (GCNs) are efficient to handle relationships among entities in documents. We demonstrate that both bidirectional attention between nodes and queries and multi-level features are necessary for such tasks. The former one aims to obtain query-aware node representation for answering, while the latter one provides contextual comprehension of isolated nodes in graphs. Our experimental results not only demonstrate the effectiveness of two proposed modules, but also show BAG achieves stateof-the-art performance on the WIKIHOP dataset. Our future work will be making use of more complex relations between entities and building graphs in more general way without candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Source code is available on https://github.com/ caoyu1991/BAG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The paper was written on early Dec. 2018, during that time Entity-GCN is the best public model, and only one anonymous model is better than it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by Australian Research Council Projects under grants FL-170100117, DP-280103424.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reading wikipedia to answer open-domain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.00051"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and An- toine Bordes. 2017. Reading wikipedia to an- swer open-domain questions. arXiv preprint arXiv:1704.00051.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Question answering by reasoning across documents with graph convolutional networks",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "De Cao",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.09920"
]
},
"num": null,
"urls": [],
"raw_text": "Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural models for reasoning over multiple mentions using coreference",
"authors": [
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.05922"
]
},
"num": null,
"urls": [],
"raw_text": "Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. arXiv preprint arXiv:1804.05922.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In Advances in Neu- ral Information Processing Systems, pages 1693- 1701.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03551"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The narrativeqa reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1bor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1bor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. Transactions of the Association of Computational Linguistics, 6:317-328.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Stochastic answer networks for machine reading comprehension",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.03556"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Yelong Shen, Kevin Duh, and Jian- feng Gao. 2017. Stochastic answer networks for machine reading comprehension. arXiv preprint arXiv:1712.03556.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Squad: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.05250"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A simple neural network module for relational reasoning",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Raposo",
"suffix": ""
},
{
"first": "David",
"middle": [
"G"
],
"last": "Barrett",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Battaglia",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4967--4976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, David Raposo, David G Barrett, Ma- teusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural net- work module for relational reasoning. In Advances in neural information processing systems, pages 4967-4976.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Modeling relational data with graph convolutional networks",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bloem",
"suffix": ""
},
{
"first": "Rianne",
"middle": [],
"last": "van den Berg",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2018,
"venue": "European Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "593--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolu- tional networks. In European Semantic Web Confer- ence, pages 593-607. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01603"
]
},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.02040"
]
},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018. Exploring graph-structured passage representation for multi- hop reading comprehension with graph neural net- works. arXiv preprint arXiv:1809.02040.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440-2448.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Fastqa: A simple and efficient neural architecture for question answering",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Wiese",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Seiffe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.04816"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Fastqa: A simple and efficient neu- ral architecture for question answering. CoRR, abs/1703.04816.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Constructing datasets for multi-hop reading comprehension across documents",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "287--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transac- tions of the Association of Computational Linguis- tics, 6:287-302.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic coattention networks for question answering",
"authors": [
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.01604"
]
},
"num": null,
"urls": [],
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">Unmasked</td><td colspan=\"2\">Masked</td></tr><tr><td>dev</td><td>test</td><td>dev</td><td>test</td></tr><tr><td>FastQA</td><td>27.2*</td><td>25.7</td><td>38.0*</td><td>48.3</td></tr><tr><td>BiDAF</td><td>49.7*</td><td>42.9</td><td>59.8*</td><td>57.5</td></tr><tr><td>Coref-GRU\u2020</td><td>56.0*</td><td>59.3</td><td>-</td><td>-</td></tr><tr><td>MHQA-GRN\u2021</td><td>62.8</td><td>65.4</td><td>-</td><td>-</td></tr><tr><td>Entity-GCN</td><td>64.8*</td><td>67.6</td><td>70.5*</td><td>68.1</td></tr><tr><td>BAG</td><td>66.5</td><td>69.0</td><td>70.9</td><td>68.9</td></tr></table>",
"text": "Table 1: The performance of different models on both the masked and unmasked versions of the WIKIHOP dataset. ([*] Results reported in the original papers; others are obtained with the official code. [\u2020] Masked data is not suitable for coreference parsing. [\u2021] Some results are missing due to unavailability of source code.)"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Models</td><td>Unmasked</td></tr><tr><td>Without Attention</td><td>63.1</td></tr><tr><td>Using Single Attention</td><td>64.5</td></tr><tr><td>Without GCN</td><td>63.3</td></tr><tr><td>Without edge type</td><td>63.9</td></tr><tr><td>Without NER, POS</td><td>66.0</td></tr><tr><td>+Without ELMo</td><td>60.5</td></tr><tr><td>Full Model</td><td>66.5</td></tr></table>",
"text": "Table 2: Ablation test results of the BAG model on the unmasked validation set of the WIKIHOP dataset."
}
}
}
}