{
"paper_id": "P19-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:23:41.312190Z"
},
"title": "Attention Guided Graph Convolutional Networks for Relation Extraction",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "StatNLP Research Group",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "zhijiangguo@mymail.sutd.edu.sg"
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "StatNLP Research Group",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "yanzhang@mymail.sutd.edu.sg"
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "StatNLP Research Group",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "luwei@sutd.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.",
"pdf_parse": {
"paper_id": "P19-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation extraction aims to detect relations among entities in the text. It plays a significant role in a variety of natural language processing applications including biomedical knowledge discovery , knowledge base population (Zhang et al., 2017) and question answering (Yu et al., 2017) . Figure 1 shows an example about expressing a relation sensitivity among three entities L858E, EGFR and gefitinib in two sentences.",
"cite_spans": [
{
"start": 227,
"end": 247,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 271,
"end": 288,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most existing relation extraction models can be categorized into two classes: sequence-based and dependency-based. Sequence-based models operate only on the word sequences (Zeng et al., 2014; Wang et al., 2016) , whereas dependencybased models incorporate dependency trees into the models (Bunescu and Mooney, 2005; Peng et al., 2017) . Compared to sequence-based models, dependency-based models are able to capture non-local syntactic relations that are obscure from the surface form alone (Zhang et al., 2018) . Various pruning strategies are also proposed to distill the dependency information in order to further improve the performance. Xu et al. (2015b,c) apply neural networks only on the shortest dependency path between the entities in the full tree. Miwa and Bansal (2016) reduce the full tree to the subtree below the lowest common ancestor (LCA) of the entities. Zhang et al. (2018) apply graph convolutional networks (GCNs) (Kipf and Welling, 2017) model over a pruned tree. This tree includes tokens that are up to distance K away from the dependency path in the LCA subtree.",
"cite_spans": [
{
"start": 172,
"end": 191,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF39"
},
{
"start": 192,
"end": 210,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 289,
"end": 315,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF3"
},
{
"start": 316,
"end": 334,
"text": "Peng et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 491,
"end": 511,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 642,
"end": 661,
"text": "Xu et al. (2015b,c)",
"ref_id": null
},
{
"start": 760,
"end": 782,
"text": "Miwa and Bansal (2016)",
"ref_id": "BIBREF17"
},
{
"start": 875,
"end": 894,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, rule-based pruning strategies might eliminate some important information in the full tree. Figure 1 shows an example in cross-sentence n-ary relation extraction that the key tokens partial response would be excluded if the model only takes the pruned tree into consideration. Ideally, the model should be able to learn how to maintain a balance between including and excluding information in the full tree. In this paper, we propose the novel Attention Guided Graph Convolutional Networks (AGGCNs), which operate directly on the full tree. Intuitively, we develop a \"soft pruning\" strategy that transforms the original dependency tree into a fully connected edgeweighted graph. These weights can be viewed as the strength of relatedness between nodes, which can be learned in an end-to-end fashion by using self-attention mechanism (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 841,
"end": 863,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to encode a large fully connected graph, we next introduce dense connections (Huang et al., 2017) to the GCN model following (Guo et al., AUXPASS The deletion mutation on exon-19 of EGFR gene was present in 16 patients, while the L858E point mutation on exon-21 was noted.",
"cite_spans": [
{
"start": 86,
"end": 106,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 134,
"end": 154,
"text": "(Guo et al., AUXPASS",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All patients were treated response. with gefitinib and showed a partial Figure 1 : An example dependency tree for two sentences expressing a relation (sensitivity) among three entities. The shortest dependency path between these entities is highlighted in bold (edges and tokens). The root node of the LCA subtree of entities is present. The dotted edges indicate tokens K=1 away from the subtree. Note that tokens partial response off these paths (shortest dependency path, LCA subtree, pruned tree when K=1).",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2019). For GCNs, L layers will be needed in order to capture neighborhood information that is L hops away. A shallow GCN model may not be able to capture non-local interactions of large graphs. Interestingly, while deeper GCNs can capture richer neighborhood information of a graph, empirically it has been observed that the best performance is achieved with a 2-layer model (Xu et al., 2018) . With the help of dense connections, we are able to train the AGGCN model with a large depth, allowing rich local and non-local dependency information to be captured. Experiments show that our model is able to achieve better performance for various tasks. For the cross-sentence relation extraction task, our model surpasses the current state-of-theart models on multi-class ternary and binary relation extraction by 8% and 6% in terms of accuracy respectively.",
"cite_spans": [
{
"start": 375,
"end": 392,
"text": "(Xu et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the largescale sentence-level extraction task (TACRED dataset), our model is also consistently better than others, showing the effectiveness of the model on a large training set. Our code is available at http://www.statnlp.org/ research/information-extraction 1 Our contributions are summarized as follows:",
"cite_spans": [
{
"start": 264,
"end": 265,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose the novel AGGCNs that learn a \"soft pruning\" strategy in an end-to-end fashion, which learns how to select and discard information. Combining with dense connections, our AGGCN model is able to learn a better graph representation. \u2022 Our model achieves new state-of-the-art results without additional computational over-1 Implementation is based on Pytorch (Paszke et al., 2017) .",
"cite_spans": [
{
"start": 368,
"end": 389,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "head when compared with previous GCNs. 2 Unlike tree-structured models (e.g., Tree-LSTM (Tai et al., 2015) ), it can be efficiently applied over dependency trees in parallel.",
"cite_spans": [
{
"start": 88,
"end": 106,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we will present the basic components used for constructing our AGGCN model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Guided GCNs",
"sec_num": "2"
},
{
"text": "GCNs are neural networks that operate directly on graph structures (Kipf and Welling, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "Here we mathematically illustrate how multi-layer GCNs work on a graph. Given a graph with n nodes, we can represent the graph with an n \u00d7 n adjacency matrix A. extend GCNs for encoding dependency trees by incorporating directionality of edges into the model. They add a self-loop for each node in the tree. Opposite direction of a dependency arc is also included, which means A ij = 1 and A ji = 1 if there is an edge going from node i to node j, otherwise A ij = 0 and A ji = 0. The convolution computation for node i at the l-th layer, which takes the input feature representation h (l\u22121) as input and outputs the induced representation h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "i , can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (l) i = \u03c1 n j=1 A ij W (l) h (l\u22121) j + b (l)",
"eq_num": "(1)"
}
],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "where W (l) is the weight matrix, b (l) is the bias vector, and \u03c1 is an activation function (e.g., RELU). h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "i is the initial input x i , where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
{
"text": "x i \u2208 R d and d is the input feature dimension. 0.3 0.1 0.1 0.2 0.6 0.2 0.1 0.1 0.7 0.1 0.2 0.6 0.2 0.2 0.1 0.3 0.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GCNs",
"sec_num": "2.1"
},
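The per-layer GCN computation in Eq. (1) can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions, with ReLU as \u03c1 and random weights; it is not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, H, W, b):
    """One GCN layer: h_i^(l) = rho(sum_j A_ij W h_j^(l-1) + b).

    A: (n, n) adjacency matrix (self-loops and reversed arcs included),
    H: (n, d_in) node representations from the previous layer,
    W: (d_in, d_out) weight matrix, b: (d_out,) bias vector.
    """
    Z = A @ H @ W + b          # aggregate neighbor features, then project
    return np.maximum(Z, 0.0)  # rho = ReLU (illustrative choice)

# Tiny dependency tree over 3 tokens: edges 0->1 and 0->2, plus
# self-loops and reversed arcs as described in the text.
A = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))   # n=3 tokens, d=4 features (assumed)
W = rng.standard_normal((4, 4))
b = np.zeros(4)
H1 = gcn_layer(A, H, W, b)
print(H1.shape)  # (3, 4)
```

Stacking L such layers propagates information L hops along the tree, which is why shallow GCNs miss non-local interactions.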
{
"text": "[Figure 2: The attention guided layer transforms the original adjacency matrix (example sentence \"The winery includes gardens\", nodes V1-V4) into N attention guided adjacency matrices \u00c3(1), ..., \u00c3(N) via multi-head attention, each corresponding to a fully connected edge-weighted graph G(1), ..., G(N).]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": null
},
{
"text": "The AGGCN model is composed of M identical blocks as shown in Figure 2 . Each block consists of three types of layers: attention guided layer, densely connected layer and linear combination layer. We first introduce the attention guided layer of the AGGCN model. As we discuss in Section 1, most existing pruning strategies are predefined. They prune the full tree into a subtree, based on which the adjacency matrix is constructed. In fact, such strategies can also be viewed as a form of hard attention (Xu et al., 2015a) , where edges that connect nodes not on the resulting subtree will be directly assigned zero weights (not attended). Such strategies might eliminate relevant information from the original dependency tree. Instead of using rule-based pruning, we develop a \"soft pruning\" strategy in the attention guided layer, which assigns weights to all edges. These weights can be learned by the model in an end-to-end fashion.",
"cite_spans": [
{
"start": 505,
"end": 523,
"text": "(Xu et al., 2015a)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": "2.2"
},
{
"text": "In the attention guided layer, we transform the original dependency tree into a fully connected edge-weighted graph by constructing an attention guided adjacency matrix\u00c3. Each\u00c3 corresponds to a certain fully connected graph and each entr\u1ef9 A ij is the weight of the edge going from node i to node j. As shown in Figure 2 ,\u00c3 (1) represents a fully connected graph G (1) .\u00c3 can be constructed by using self-attention mechanism (Cheng et al., 2016) , which is an attention mechanism (Bahdanau et al., 2015) that captures the interactions between two arbitrary positions of a single sequence. Once we get\u00c3, we can use it as the input for the computation of the later graph convolutional layer. Note that the size of\u00c3 is the same as the original adjacency matrix A (n \u00d7 n). Therefore, no additional computational overhead is involved. The key idea behind the attention guided layer is to use attention for inducing relations between nodes, especially for those connected by indirect, multi-hop paths. These soft relations can be captured by differentiable functions in the model.",
"cite_spans": [
{
"start": 364,
"end": 367,
"text": "(1)",
"ref_id": null
},
{
"start": 424,
"end": 444,
"text": "(Cheng et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 311,
"end": 319,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": "2.2"
},
{
"text": "Here we compute\u00c3 by using multi-head attention (Vaswani et al., 2017) , which allows the model to jointly attend to information from different representation subspaces. The calculation involves a query and a set of key-value pairs. The output is computed as a weighted sum of the values, where the weight is computed by a function of the query with the corresponding key.",
"cite_spans": [
{
"start": 47,
"end": 69,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": "2.2"
},
{
"text": "A (t) = sof tmax( QW Q i \u00d7 (KW K i ) T \u221a d )V (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": "2.2"
},
{
"text": "where Q and K are both equal to the collective representation h (l\u22121) at layer l \u2212 1 of the AG-GCN model. The projections are parameter matrices W Q i \u2208 R d\u00d7d and W K i \u2208 R d\u00d7d .\u00c3 (t) is the t-th attention guided adjacency matrix corresponding to the t-th head. Up to N matrices are constructed, where N is a hyper-parameter. Figure 2 shows an example that the original adjacency matrix is transformed into multiple attention guided adjacency matrices. Accordingly, the input dependency tree is converted into multiple fully connected edge-weighted graphs. In practice, we treat the original adjacency matrix as an initialization so that the dependency information can be captured in the node representations for later attention calculation. The attention guided layer is included starting from the second block.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attention Guided Layer",
"sec_num": "2.2"
},
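The construction of the attention guided adjacency matrices in Eq. (2) can be sketched as follows. This is a simplified NumPy illustration: the head-specific projections W_i^Q and W_i^K are randomly initialized, and the trailing V term is omitted so the output is just the n \u00d7 n edge-weight matrix; these are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_adjacency(H, WQ, WK):
    """One attention guided adjacency matrix:
    A~ = softmax(Q WQ (K WK)^T / sqrt(d)), with Q = K = H."""
    d = H.shape[1]
    scores = (H @ WQ) @ (H @ WK).T / np.sqrt(d)
    return softmax(scores, axis=-1)   # each row sums to 1: soft edge weights

rng = np.random.default_rng(1)
n, d, N = 4, 8, 2                     # 4 tokens, 8 dims, N=2 heads (assumed)
H = rng.standard_normal((n, d))
adjs = [attention_adjacency(H, rng.standard_normal((d, d)),
                            rng.standard_normal((d, d))) for _ in range(N)]
print(adjs[0].shape)  # (4, 4): same size as the original adjacency matrix
```

Each of the N heads yields one fully connected edge-weighted graph, matching the multiple \u00c3^{(t)} matrices described in the text.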
{
"text": "Unlike previous pruning strategies, which lead to a resulting structure that is smaller than the original structure, our attention guided layer outputs a larger fully connected graph. Following (Guo et al., 2019) , we introduce dense connections (Huang et al., 2017) into the AGGCN model in order to capture more structural information on large graphs. With the help of dense connections, we are able to train a deeper model, allowing rich local and non-local information to be captured for learning a better graph representation.",
"cite_spans": [
{
"start": 194,
"end": 212,
"text": "(Guo et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 246,
"end": 266,
"text": "(Huang et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "Dense connectivity is shown in Figure 2 . Direct connections are introduced from any layer to all its preceding layers. Mathematically, we first define g",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "(l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "j as the concatenation of the initial node representation and the node representations produced in layers 1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "\u2022 \u2022 \u2022 , l \u2212 1: g (l) j = [x j ; h (1) j ; ...; h (l\u22121) j ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "In practice, each densely connected layer has L sub-layers. The dimensions of these sub-layers d hidden are decided by L and the input feature dimension d. In AGGCNs, we use d hidden = d/L. For example, if the densely connected layer has 3 sub-layers and the input dimension is 300, the hidden dimension of each sub-layer will be d hidden = d/L = 300/3 = 100. Then we concatenate the output of each sub-layer to form the new representation. Therefore, the output dimension is 300 (3 \u00d7 100). Different from the GCN model whose hidden dimension is larger than or equal to the input dimension, the AGGCN model shrinks the hidden dimension as the number of layers increases in order to improves the parameter efficiency similar to DenseNets (Huang et al., 2017 ).",
"cite_spans": [
{
"start": 737,
"end": 756,
"text": "(Huang et al., 2017",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "Since we have N different attention guided adjacency matrices, N separate densely connected layers are required. Accordingly, we modify the computation of each layer as follows (for the t-th matrix\u00c3 (t) ):",
"cite_spans": [
{
"start": 199,
"end": 202,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "h (l) t i = \u03c1 n j=1\u00c3 (t) ij W (l) t g (l) j + b (l) t (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "where t = 1, ..., N and t selects the weight matrix and bias term associated with the attention guided adjacency matrix\u00c3 (t) . The column dimension of the weight matrix increases by d hidden per sub-layer, i.e., W",
"cite_spans": [
{
"start": 121,
"end": 124,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
{
"text": "(l) t \u2208 R d hidden \u00d7d (l) , where d (l) = d + d hidden \u00d7 (l \u2212 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected Layer",
"sec_num": "2.3"
},
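Combining Eqs. (3) and (4), each sub-layer convolves over the concatenation of the input and all earlier sub-layer outputs. The following NumPy sketch uses hypothetical small dimensions (d=6, L=3, so d_hidden=2), random weights, and ReLU for \u03c1; it illustrates the dimension bookkeeping only, not the authors' implementation.

```python
import numpy as np

def dense_gcn_block(A, X, Ws, bs):
    """Densely connected GCN block.
    Sub-layer l sees g^(l) = [X; h^(1); ...; h^(l-1)]   (Eq. 3),
    and computes h^(l) = rho(A g^(l) W_l + b_l)         (Eq. 4).
    Ws[l] has shape (d + l*d_hidden, d_hidden)."""
    outs = []
    for W, b in zip(Ws, bs):
        g = np.concatenate([X] + outs, axis=1)       # Eq. (3): dense input
        outs.append(np.maximum(A @ g @ W + b, 0.0))  # Eq. (4), rho = ReLU
    return np.concatenate(outs, axis=1)              # (n, L*d_hidden) = (n, d)

rng = np.random.default_rng(2)
n, d, L = 5, 6, 3
d_hidden = d // L                                    # 6 / 3 = 2, as in the text
A = rng.random((n, n))                               # attention guided adjacency
X = rng.standard_normal((n, d))
Ws = [rng.standard_normal((d + l * d_hidden, d_hidden)) for l in range(L)]
bs = [np.zeros(d_hidden) for _ in range(L)]
out = dense_gcn_block(A, X, Ws, bs)
print(out.shape)  # (5, 6): output dimension equals the input dimension d
```

Note how the weight matrix of sub-layer l grows by d_hidden columns of input per sub-layer, matching the d^{(l)} = d + d_hidden \u00d7 (l \u2212 1) bookkeeping above.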
{
"text": "The AGGCN model includes a linear combination layer to integrate representations from N different densely connected layers. Formally, the output of the linear combination layer is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination Layer",
"sec_num": "2.4"
},
{
"text": "h comb = W comb h out + b comb (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination Layer",
"sec_num": "2.4"
},
{
"text": "where h out is the output by concatenating outputs from N separate densely connected layers, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination Layer",
"sec_num": "2.4"
},
{
"text": "h out = [h (1) ; ...; h (N ) ] \u2208 R d\u00d7N . W comb \u2208 R (d\u00d7N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination Layer",
"sec_num": "2.4"
},
{
"text": "\u00d7d is a weight matrix and b comb is a bias vector for the linear transformation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear Combination Layer",
"sec_num": "2.4"
},
{
"text": "After applying the AGGCN model over the dependency tree, we obtain hidden representations of all tokens. Given these representations, the goal of relation extraction is to predict a relation among entities. Following (Zhang et al., 2018) , we concatenate the sentence representation and entity representations to get the final representation for classification. First we need to obtain the sentence representation h sent . It can be computed as:",
"cite_spans": [
{
"start": 217,
"end": 237,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "h sent = f (h mask ) = f (AGGCN(x)) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "where h mask represents the masked collective hidden representations. Masked here means we only select representations of tokens that are not entity tokens in the sentence. f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "R d\u00d7n \u2192 R d\u00d71",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "is a max pooling function that maps from n output vectors to 1 sentence vector. Similarly, we can obtain the entity representations. For the i-th entity, its representation h e i can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h e i = f (h e i )",
"eq_num": "(7)"
}
],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "where h e i indicates the hidden representation corresponding to the i-th entity. 3 Entity representations will be concatenated with sentence representation to form a new representation. Following (Zhang et al., 2018) , we apply a feed-forward neural network (FFNN) over the concatenated representations inspired by relational reasoning works (Santoro et al., 2017; Lee et al., 2017) :",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 343,
"end": 365,
"text": "(Santoro et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 366,
"end": 383,
"text": "Lee et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h f inal = FFNN([h sent ; h e 1 ; ...h e i ])",
"eq_num": "(8)"
}
],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "where h f inal will be taken as inputs to a logistic regression classifier to make a prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
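The pooling and classification steps of Eqs. (6)-(8) can be sketched as follows. The two-entity setup, the pooling over token positions, and the single-layer FFNN shape are illustrative assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def max_pool(H):
    """f: R^{n x d} -> R^d, max pooling over token positions."""
    return H.max(axis=0)

rng = np.random.default_rng(3)
n, d, n_rel = 6, 4, 5
H = rng.standard_normal((n, d))        # AGGCN output, one row per token
is_entity = np.array([1, 1, 0, 0, 1, 0], dtype=bool)  # tokens of two entities

h_sent = max_pool(H[~is_entity])       # Eq. (6): pool non-entity tokens
h_e1 = max_pool(H[0:2])                # Eq. (7): pool tokens of entity 1
h_e2 = max_pool(H[4:5])                # Eq. (7): pool tokens of entity 2

h_cat = np.concatenate([h_sent, h_e1, h_e2])       # input to Eq. (8)
W_ffnn = rng.standard_normal((h_cat.size, n_rel))  # single-layer FFNN sketch
logits = h_cat @ W_ffnn                            # h_final -> classifier
pred = int(np.argmax(logits))          # logistic-regression-style prediction
print(h_cat.shape, 0 <= pred < n_rel)  # (12,) True
```

The concatenated vector grows with the number of entities, which is why the FFNN input dimension depends on the task (binary vs. ternary relations).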
{
"text": "3 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AGGCNs for Relation Extraction",
"sec_num": "2.5"
},
{
"text": "We evaluate the performance of our model on two tasks, namely, cross-sentence n-ary relation extraction and sentence-level relation extraction. For the cross-sentence n-ary relation extraction task, we use the dataset introduced in (Peng et al., 2017) , which contains 6,987 ternary relation instances and 6,087 binary relation instances extracted from PubMed. 4 Most instances contain multiple sentences and each instance is assigned with one of the five labels, including \"resistance or nonresponse\", \"sensitivity\", \"response\", \"resistance\" and \"none\". We consider two specific tasks for evaluation, i,e., binary-class n-ary relation extraction and multi-class n-ary relation extraction. For binary-class n-ary relation extraction, we follow (Peng et al., 2017) to binarize multi-class labels by grouping the four relation classes as \"yes\" and treating \"none\" as \"no\".",
"cite_spans": [
{
"start": 232,
"end": 251,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 361,
"end": 362,
"text": "4",
"ref_id": null
},
{
"start": 744,
"end": 763,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "For the sentence-level relation extraction task, we follow the experimental settings in (Zhang et al., 2018) to evaluate our model on the TACRED dataset (Zhang et al., 2017) and Semeval-10 Task 8 (Hendrickx et al., 2010) . With over 106K instances, the TACRED dataset introduces 41 relation types and a special \"no relation\" type to describe the relations between the mention pairs in instances. Subject mentions are categorized into \"person\" and \"organization\", while object mentions are categorized into 16 fine-grained types, including \"date\", \"location\", etc. Semeval-10 Task 8 is a public dataset, which contains 10,717 instances with 9 relations and a special \"other\" class.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 153,
"end": 173,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 196,
"end": 220,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "We tune the hyper-parameters according to results on the development sets. For the cross-sentence nary relation extraction task, we use the same data split used in (Song et al., 2018b) 4 , while for the sentence-level relation extraction task, we use the same development set from (Zhang et al., 2018) 5 .",
"cite_spans": [
{
"start": 164,
"end": 186,
"text": "(Song et al., 2018b) 4",
"ref_id": null
},
{
"start": 281,
"end": 301,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 302,
"end": 303,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "We choose the number of heads N for attention guided layer from {1, 2, 3, 4}, the block number M from {1, 2, 3}, the number of sublayers L in each densely connected layer from {2, 3, 4, 5, 6}. Through preliminary experiments on the development sets, we find that the com-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "binations (N =2, M =2, L=5, d hidden =340) and (N =3, M =2, L=5, d hidden =300)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "give the best results on cross-sentence n-ary relation extraction and sentence-level relation extraction, respectively. GloVe (Pennington et al., 2014) 6 vectors are used as the initialization for word embeddings.",
"cite_spans": [
{
"start": 126,
"end": 151,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "Models are evaluated using the same metrics as previous work (Song et al., 2018b; Zhang et al., 2018) . We report the test accuracy averaged over five cross validation folds (Song et al., 2018b) for the cross-sentence n-ary relation extraction task. For the sentence-level relation extraction task, we report the micro-averaged F1 scores for the TA-CRED dataset and the macro-averaged F1 scores for the SemEval dataset (Zhang et al., 2018) .",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "(Song et al., 2018b;",
"ref_id": "BIBREF26"
},
{
"start": 82,
"end": 101,
"text": "Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 174,
"end": 194,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 419,
"end": 439,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "3.2"
},
{
"text": "For cross-sentence n-ary relation extraction task, we consider three kinds of models as baselines: 1) a feature-based classifier based on shortest dependency paths between all entity pairs, 2) Graph-structured LSTM methods, including Graph LSTM (Peng et al., 2017) , bidirectional DAG LSTM (Bidir DAG LSTM) (Song et al., 2018b) and Graph State LSTM (GS GLSTM) (Song et al., 2018b) . These methods extend LSTM to encode graphs constructed from input sentences with dependency edges, 3) Graph convolutional networks (GCN) with pruned trees, which have shown efficacy on the relation extraction task (Zhang et al., 2018) 7 . Addition-5 https://nlp.stanford.edu/projects/ tacred/ 6 We use the 300-dimensional Glove word vectors trained on the Common Crawl corpus https://nlp. stanford.edu/projects/glove/ 7 The results are produced by the open implementation of Zhang et al. (2018) .",
"cite_spans": [
{
"start": 245,
"end": 264,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 307,
"end": 327,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 360,
"end": 380,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 597,
"end": 619,
"text": "(Zhang et al., 2018) 7",
"ref_id": null
},
{
"start": 676,
"end": 677,
"text": "6",
"ref_id": null
},
{
"start": 858,
"end": 877,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Cross-Sentence n-ary Relation Extraction",
"sec_num": "3.3"
},
{
"text": "Multi-class",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary-class",
"sec_num": null
},
{
"text": "Feature-Based 74.7 77.7 73.9 75.2 - - SPTree (Miwa and Bansal, 2016) - - 75.9 75.9 - - Graph LSTM-EMBED (Peng et al., 2017) 76.5 80.6 74.3 76.5 - - Graph LSTM-FULL (Peng et al., 2017) 77.9 80.7 75.6 76.7 - - + multi-task - 82.0 - 78.5 - - Bidir DAG LSTM (Song et al., 2018b) 75.6 77.3 76.9 76.4 51.7 50.7 GS GLSTM (Song et al., 2018b) 80 Table 1: Average test accuracies in five-fold validation for binary-class n-ary relation extraction and multi-class n-ary relation extraction. \"T\" and \"B\" denote ternary drug-gene-mutation interactions and binary drug-mutation interactions, respectively. Single means that we report the accuracy on instances within single sentences, while Cross means the accuracy on all instances. K in the GCN models means that the preprocessed pruned trees include tokens up to distance K away from the dependency path in the LCA subtree.",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 98,
"end": 122,
"text": "(Peng et al., 2017) 76.5",
"ref_id": null
},
{
"start": 156,
"end": 175,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 258,
"end": 278,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 318,
"end": 338,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "T B T B Single Cross Single Cross Cross Cross",
"sec_num": null
},
{
"text": "Additionally, we follow (Song et al., 2018b) to consider the tree-structured LSTM method (SPTree) (Miwa and Bansal, 2016) on drug-mutation binary relation extraction. Main results are shown in Table 1.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 90,
"end": 113,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "T B T B Single Cross Single Cross Cross Cross",
"sec_num": null
},
{
"text": "We first focus on the binary-class n-ary relation extraction task. For ternary relation extraction (first two columns in Table 1), our AGGCN model achieves accuracies of 87.1 and 87.0 on instances within single sentences (Single) and on all instances (Cross), respectively, outperforming all the baselines. More specifically, our AGGCN model surpasses the state-of-the-art graph-structured LSTM model (GS GLSTM) by 6.8 and 3.8 points under the Single and Cross settings, respectively. Compared to GCN models, our model scores 1.3 and 1.2 points higher than the best-performing model with pruned trees (K=1). For binary relation extraction (third and fourth columns in Table 1), AGGCN consistently outperforms GS GLSTM and GCN as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 1",
"ref_id": null
},
{
"start": 670,
"end": 677,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "T B T B Single Cross Single Cross Cross Cross",
"sec_num": null
},
{
"text": "These results suggest that, compared to previous full-tree-based methods, e.g., GS GLSTM, AGGCN is able to extract more information from the underlying graph structure to learn a more expressive representation through graph convolutions. AGGCN also performs better than GCNs, even though their performance can be boosted via pruned trees. We believe this is because of the combination of the densely connected layers and the attention guided layers. The dense connections facilitate information propagation in large graphs, enabling AGGCN to efficiently learn from long-distance dependencies without pruning techniques. Meanwhile, the attention guided layers can further distill relevant information and filter out noise from the representation learned by the densely connected layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T B T B Single Cross Single Cross Cross Cross",
"sec_num": null
},
{
"text": "We next show the results on the multi-class classification task (last two columns in Table 1). We follow (Song et al., 2018b) to evaluate our model on all instances for both ternary and binary relations. This fine-grained classification task is much harder than the coarse-grained one. As a result, the performance of all models degrades considerably. However, our AGGCN model still scores 8.0 and 5.7 points higher than the GS GLSTM model for ternary and binary relations, respectively. We also notice that our AGGCN achieves a better test accuracy than all GCN models, which further demonstrates its ability to learn better representations from full trees.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "T B T B Single Cross Single Cross Cross Cross",
"sec_num": null
},
{
"text": "We now report the results on the TACRED dataset for the sentence-level relation extraction task in Table 2. We compare our model against two kinds of models: 1) dependency-based models, and 2) sequence-based models.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results on Sentence-level Relation Extraction",
"sec_num": "3.4"
},
{
"text": "Model P R F1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Sentence-level Relation Extraction",
"sec_num": "3.4"
},
{
"text": "LR (Zhang et al., 2017) 73.5 49.9 59.4 SDP-LSTM (Xu et al., 2015c)* 66.3 52.7 58.7 Tree-LSTM (Tai et al., 2015)** 66.0 59.2 62.4 PA-LSTM (Zhang et al., 2017) 65.7 64.5 65.1 GCN (Zhang et al., 2018) 69.8 59.0 64.0 C-GCN (Zhang et al., 2018) 69.9 63.3 66.4 AGGCN (ours) 69.9 60.9 65.1 C-AGGCN (ours) 71.8 66.4 69.0 Model F1",
"cite_spans": [
{
"start": 3,
"end": 23,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 48,
"end": 65,
"text": "(Xu et al., 2015c",
"ref_id": "BIBREF36"
},
{
"start": 94,
"end": 111,
"text": "(Tai et al., 2015",
"ref_id": "BIBREF27"
},
{
"start": 139,
"end": 159,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 179,
"end": 199,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 221,
"end": 241,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 288,
"end": 300,
"text": "AGGCN (ours)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Sentence-level Relation Extraction",
"sec_num": "3.4"
},
{
"text": "SVM (Rink and Harabagiu, 2010) 82.2 SDP-LSTM (Xu et al., 2015c) 83.7 SPTree (Miwa and Bansal, 2016) 84.4 PA-LSTM (Zhang et al., 2017) 82.7 C-GCN (Zhang et al., 2018) 84.8 C-AGGCN (ours) 85.7 Dependency-based models include the logistic regression classifier (LR) (Zhang et al., 2017), Shortest Path LSTM (SDP-LSTM) (Xu et al., 2015c), the tree-structured neural model (Tree-LSTM) (Tai et al., 2015), GCN and Contextualized GCN (C-GCN) (Zhang et al., 2018). Both GCN and C-GCN models use pruned trees. For sequence-based models, we consider the state-of-the-art Position Aware LSTM (PA-LSTM) (Zhang et al., 2017). As shown in Table 2, the logistic regression classifier (LR) obtains the highest precision score. We hypothesize that this is due to the data imbalance issue: this feature-based method tends to predict a highly frequent label as the relation (e.g., \"per:title\"). Therefore, it has a high precision but a relatively low recall. On the other hand, the neural models are able to better balance the precision and recall scores.",
"cite_spans": [
{
"start": 4,
"end": 30,
"text": "(Rink and Harabagiu, 2010)",
"ref_id": "BIBREF23"
},
{
"start": 45,
"end": 63,
"text": "(Xu et al., 2015c)",
"ref_id": "BIBREF36"
},
{
"start": 76,
"end": 99,
"text": "(Miwa and Bansal, 2016)",
"ref_id": "BIBREF17"
},
{
"start": 113,
"end": 133,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 145,
"end": 165,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 286,
"end": 306,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 339,
"end": 357,
"text": "(Xu et al., 2015c)",
"ref_id": "BIBREF36"
},
{
"start": 401,
"end": 419,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 457,
"end": 477,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 617,
"end": 637,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 652,
"end": 659,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results on Sentence-level Relation Extraction",
"sec_num": "3.4"
},
{
"text": "Since GCN and C-GCN already show their superiority over other dependency-based models and PA-LSTM, we mainly compare our AGGCN model with them. We can observe that AGGCN outperforms GCN by 1.1 F1 points. We speculate Model F1 C-AGGCN 69.0 - Attention-guided layer (AG) 67.1 - Dense connected layer (DC) 67.3 - AG, DC 66.7 - Feed-Forward layer (FF) 67.8 Table 4: An ablation study for the C-AGGCN model.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results on Sentence-level Relation Extraction",
"sec_num": "3.4"
},
{
"text": "C-AGGCN (Full tree) 69.0 C-AGGCN (K=2) 67.5 C-AGGCN (K=1) 67.9 C-AGGCN (K=0) 67.0 that the limited improvement is due to the lack of contextual information about word order or disambiguation. Similar to C-GCN (Zhang et al., 2018), we extend our AGGCN model with a bidirectional LSTM network to capture the contextual representations, which are subsequently fed into AGGCN layers. We term the modified model C-AGGCN. Our C-AGGCN model achieves an F1 score of 69.0, which outperforms the state-of-the-art C-GCN model by 2.6 points. We also notice that AGGCN and C-AGGCN achieve better precision and recall scores than GCN and C-GCN, respectively. The performance gap between GCNs with pruned trees and AGGCNs with full trees empirically shows that the AGGCN model is better at distinguishing relevant from irrelevant information for learning a better graph representation.",
"cite_spans": [
{
"start": 206,
"end": 226,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model F1",
"sec_num": null
},
{
"text": "We also evaluate our model on the SemEval dataset under the same settings as (Zhang et al., 2018). Results are shown in Table 3. This dataset is much smaller than TACRED (only 1/10 of TACRED in terms of the number of instances). Our C-AGGCN model (85.7) consistently outperforms the C-GCN model (84.8), showing its good generalizability.",
"cite_spans": [
{
"start": 77,
"end": 97,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model F1",
"sec_num": null
},
{
"text": "Ablation Study. We examine the contributions of the two main components, namely, densely connected layers and attention guided layers, using the best-performing C-AGGCN model on the TACRED dataset. Figure 3: Comparison of C-AGGCN and C-GCN against different training data sizes. The results of C-GCN are reproduced from (Zhang et al., 2018). Adding either attention guided layers or densely connected layers improves the performance of the model. This suggests that both layers can assist GCNs to learn better information aggregations, producing better representations for graphs, where the attention guided layer seems to play a more significant role. We also notice that the feed-forward layer is effective in our model. Without the feed-forward layer, the result drops to an F1 score of 67.8.",
"cite_spans": [
{
"start": 318,
"end": 338,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 195,
"end": 203,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "3.5"
},
{
"text": "Performance with Pruned Trees. Table 5 shows the performance of the C-AGGCN model with pruned trees, where K means that the pruned trees include tokens that are up to distance K away from the dependency path in the LCA subtree.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "3.5"
},
{
"text": "We can observe that all the C-AGGCN models with varied values of K are able to outperform the state-of-the-art C-GCN model (Zhang et al., 2018) (reported in Table 2). Specifically, under the same setting of K=1, C-AGGCN surpasses C-GCN by 1.5 F1 points. This demonstrates that, with the combination of densely connected layers and attention guided layers, C-AGGCN can learn better representations of graphs than C-GCN for downstream tasks. In addition, we notice that C-AGGCN with full trees outperforms all C-AGGCN models with pruned trees. These results further show the superiority of the \"soft pruning\" strategy over hard pruning strategies in utilizing full tree information.",
"cite_spans": [
{
"start": 123,
"end": 143,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "3.5"
},
{
"text": "Performance against Sentence Length. Figure 4 shows the F1 scores of three models under different sentence lengths. We partition the sentence lengths into five classes (< 20, [20, 30), [30, 40), [40, 50), \u226550). In general, C-AGGCN with full trees outperforms C-AGGCN with pruned trees and C-GCN across various sentence lengths. We also notice that C-AGGCN with pruned trees performs better than C-GCN in most cases. Moreover, the improvement achieved by C-AGGCN with pruned trees decays when the sentence length increases. Such performance degradation can be avoided by using full trees, which provide more information about the underlying graph structures. Intuitively, as the sentence length increases, the dependency graph becomes larger since more nodes are included. This suggests that C-AGGCN benefits more from larger graphs (full trees).",
"cite_spans": [
{
"start": 167,
"end": 182,
"text": "(< 20, [20, 30)",
"ref_id": null
},
{
"start": 185,
"end": 189,
"text": "[30,",
"ref_id": null
},
{
"start": 190,
"end": 193,
"text": "40)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "3.5"
},
{
"text": "Performance against Training Data Size. Figure 3 shows the performance of C-AGGCN and C-GCN with different amounts of training data. We consider five training settings (20%, 40%, 60%, 80% and 100% of the training data). C-AGGCN consistently outperforms C-GCN under the same amount of training data. As the size of the training data increases, the performance gap becomes more obvious. Specifically, using 80% of the training data, the C-AGGCN model is able to achieve an F1 score of 66.5, higher than C-GCN trained on the complete training set. These results demonstrate that our model is more effective in terms of using training resources.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis and Discussion",
"sec_num": "3.5"
},
{
"text": "Our work builds on a rich line of recent efforts on relation extraction models and graph convolutional networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Relation Extraction. Early research efforts are based on statistical methods. Tree-based kernels (Zelenko et al., 2002) and dependency path-based kernels (Bunescu and Mooney, 2005) are explored to extract relations. McDonald et al. (2005) construct maximal cliques of entities to predict relations. Mintz et al. (2009) include syntactic features in a statistical classifier. Recently, sequence-based models have leveraged different neural networks to extract relations, including convolutional neural networks (Zeng et al., 2014; Nguyen and Grishman, 2015; Wang et al., 2016), recurrent neural networks (Zhou et al., 2016; Zhang et al., 2017), the combination of both (Vu et al., 2016) and the transformer (Verga et al., 2018). Dependency-based approaches also try to incorporate structural information into the neural models. Peng et al. (2017) first split the dependency graph into two DAGs, then extend the tree LSTM model (Tai et al., 2015) over these two graphs for n-ary relation extraction. Closest to our work, Song et al. (2018b) use graph recurrent networks (Song et al., 2018a) to directly encode the whole dependency graph without breaking it. The contrast between our model and theirs is reminiscent of the contrast between CNN and RNN. Various pruning strategies have also been proposed to distill the dependency information in order to further improve the performance. Xu et al. (2015b,c) adapt neural models to encode the shortest dependency path. Miwa and Bansal (2016) apply an LSTM model over the LCA subtree of two entities. Liu et al. (2015) combine the shortest dependency path and the dependency subtree. Zhang et al. (2018) adopt a path-centric pruning strategy. Unlike these strategies that remove edges in preprocessing, our model learns to assign each edge a different weight in an end-to-end fashion.",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Zelenko et al., 2002)",
"ref_id": "BIBREF38"
},
{
"start": 154,
"end": 180,
"text": "(Bunescu and Mooney, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 219,
"end": 241,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF15"
},
{
"start": 302,
"end": 321,
"text": "Mintz et al. (2009)",
"ref_id": "BIBREF16"
},
{
"start": 507,
"end": 526,
"text": "(Zeng et al., 2014;",
"ref_id": "BIBREF39"
},
{
"start": 527,
"end": 553,
"text": "Nguyen and Grishman, 2015;",
"ref_id": "BIBREF18"
},
{
"start": 554,
"end": 572,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 601,
"end": 620,
"text": "(Zhou et al., 2016;",
"ref_id": "BIBREF42"
},
{
"start": 621,
"end": 640,
"text": "Zhang et al., 2017)",
"ref_id": "BIBREF41"
},
{
"start": 665,
"end": 682,
"text": "(Vu et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 699,
"end": 719,
"text": "(Verga et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 821,
"end": 839,
"text": "Peng et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 920,
"end": 937,
"text": "(Tai et al., 2015",
"ref_id": "BIBREF27"
},
{
"start": 1014,
"end": 1033,
"text": "Song et al. (2018b)",
"ref_id": "BIBREF26"
},
{
"start": 1063,
"end": 1083,
"text": "(Song et al., 2018a)",
"ref_id": "BIBREF25"
},
{
"start": 1379,
"end": 1398,
"text": "Xu et al. (2015b,c)",
"ref_id": null
},
{
"start": 1459,
"end": 1481,
"text": "Miwa and Bansal (2016)",
"ref_id": "BIBREF17"
},
{
"start": 1537,
"end": 1554,
"text": "Liu et al. (2015)",
"ref_id": "BIBREF13"
},
{
"start": 1620,
"end": 1639,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Graph Convolutional Networks. Early efforts to extend neural networks to deal with arbitrarily structured graphs were introduced by Gori et al. (2005); Bruna (2014). Subsequent efforts improve computational efficiency with local spectral convolution techniques (Henaff et al., 2015; Defferrard et al., 2016). Our approach is closely related to GCNs (Kipf and Welling, 2017), which restrict the filters to operate on a first-order neighborhood around each node.",
"cite_spans": [
{
"start": 142,
"end": 160,
"text": "Gori et al. (2005)",
"ref_id": "BIBREF6"
},
{
"start": 163,
"end": 175,
"text": "Bruna (2014)",
"ref_id": "BIBREF2"
},
{
"start": 277,
"end": 298,
"text": "(Henaff et al., 2015;",
"ref_id": "BIBREF8"
},
{
"start": 299,
"end": 323,
"text": "Defferrard et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "More recently, Velickovic et al. (2018) proposed graph attention networks (GATs) to summarize neighborhood states by using masked self-attentional layers (Vaswani et al., 2017). Compared to our work, their motivations and network structures are different. In particular, each node only attends to its neighbors in GATs, whereas AGGCNs measure the relatedness among all nodes. Moreover, the network topology in GATs remains the same, while fully connected graphs are built in AGGCNs to capture long-range semantic interactions.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Velickovic et al. (2018)",
"ref_id": "BIBREF29"
},
{
"start": 153,
"end": 175,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We introduce the novel Attention Guided Graph Convolutional Networks (AGGCNs). Experimental results show that AGGCNs achieve state-of-the-art results on various relation extraction tasks. Unlike previous approaches, AGGCNs operate directly on the full tree and learn to distill the useful information from it in an end-to-end fashion. There are multiple avenues for future work. One natural question we would like to ask is how to make use of the proposed framework to perform improved graph representation learning for graph-related tasks (Bastings et al., 2017).",
"cite_spans": [
{
"start": 538,
"end": 561,
"text": "(Bastings et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The size of the adjacency matrix representing the fully connected graph is the same as that of the original tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The number of entities is fixed in the n-ary relation extraction task: it is 3 for the first dataset and 2 for the second. 4 The dataset is available at https://github.com/freesunshine0316/nary-grn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable and constructive comments on this work. We would also like to thank Zhiyang Teng, Linfeng Song, Yuhao Zhang and Chenxi Liu for their helpful suggestions. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. This work is also partially supported by SUTD project PIE-SGP-AI-2018-01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Graph convolutional encoders for syntax-aware neural machine translation",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural ma- chine translation. In Proc. of EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spectral networks and deep locally connected networks on graphs",
"authors": [
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joan Bruna. 2014. Spectral networks and deep locally connected networks on graphs. In Proc. of ICLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Razvan",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proc. of EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Long short-term memory-networks for machine reading",
"authors": [
{
"first": "Jianpeng",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. In Proc. of EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convolutional neural networks on graphs with fast localized spectral filtering",
"authors": [
{
"first": "Micha\u00ebl",
"middle": [],
"last": "Defferrard",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Bresson",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Vandergheynst",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Van- dergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Proc. of NeurIPS.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new model for learning in graph domains",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Gori",
"suffix": ""
},
{
"first": "Gabriele",
"middle": [],
"last": "Monfardini",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Scarselli",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IJCNN",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A new model for learning in graph domains. In Proc. of IJCNN.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Densely connected graph convolutional networks for graph-to-sequence learning",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional net- works for graph-to-sequence learning. Transactions of the Association of Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep convolutional networks on graph-structured data",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Henaff",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv preprint.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "SemEval@ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In SemEval@ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Densely connected convolutional networks",
"authors": [
{
"first": "Gao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In Proc. of CVPR.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In Proc. of ICLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proc. of EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A dependency-based neural network for relation classification",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng Wang. 2015. A dependency-based neural network for relation classification. In Proc. of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Encoding sentences with graph convolutional networks for semantic role labeling",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. In Proc. of EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Simple algorithms for complex relation extraction with applications to biomedical ie",
"authors": [
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
},
{
"first": "R",
"middle": [
"Scott"
],
"last": "Winters",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"S"
],
"last": "White",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan T. McDonald, Fernando Pereira, Seth Kulick, R. Scott Winters, Yang Jin, and Peter S. White. 2005. Simple algorithms for complex relation ex- traction with applications to biomedical ie. In Proc. of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju- rafsky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proc. of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proc. of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Relation extraction: Perspective from convolutional neural networks",
"authors": [
{
"first": "Thien Huu",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of VS@NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thien Huu Nguyen and Ralph Grishman. 2015. Rela- tion extraction: Perspective from convolutional neu- ral networks. In Proc. of VS@NAACL-HLT.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of workshop on NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, and Adam Lerer. 2017. Au- tomatic differentiation in pytorch. In Proc. of work- shop on NeurIPS.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-sentence n-ary relation extraction with graph lstms",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "101--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. Transac- tions of the Association for Computational Linguis- tics, 5:101-115.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distant supervision for relation extraction beyond the sentence boundary",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk and Hoifung Poon. 2017. Distant super- vision for relation extraction beyond the sentence boundary. In Proc. of EACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Utd: Classifying semantic relations by combining lexical and semantic resources",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Rink",
"suffix": ""
},
{
"first": "Sanda",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "SemEval@ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan Rink and Sanda M. Harabagiu. 2010. Utd: Clas- sifying semantic relations by combining lexical and semantic resources. In SemEval@ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A simple neural network module for relational reasoning",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Santoro",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Raposo",
"suffix": ""
},
{
"first": "David",
"middle": [
"G",
"T"
],
"last": "Barrett",
"suffix": ""
},
{
"first": "Mateusz",
"middle": [],
"last": "Malinowski",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Battaglia",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"P"
],
"last": "Lillicrap",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Santoro, David Raposo, David G. T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter W. Battaglia, and Timothy P. Lillicrap. 2017. A sim- ple neural network module for relational reasoning. In Proc. of NeurIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A graph-to-sequence model for amrto-text generation",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018a. A graph-to-sequence model for amr- to-text generation. In Proc. of ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "N-ary relation extraction using graph state lstm",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. N-ary relation extraction using graph state lstm. In Proc. of EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proc. of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Graph attention networks",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Velickovic",
"suffix": ""
},
{
"first": "Guillem",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "Arantxa",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Li\u00f2",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In Proc. of ICLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Simultaneously self-attending to all mentions for full-abstract biological relation extraction",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Combining recurrent and convolutional neural networks for relation classification",
"authors": [
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016. Combining recurrent and convo- lutional neural networks for relation classification. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Relation classification via multi-level attention cnns",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level at- tention cnns. In Proc. of ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015a. Show, attend and tell: Neural image caption genera- tion with visual attention. In Proc. of ICML.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Representation learning on graphs with jumping knowledge networks",
"authors": [
{
"first": "Keyulu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chengtao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yonglong",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Tomohiro",
"middle": [],
"last": "Sonobe",
"suffix": ""
},
{
"first": "Ken-ichi",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Jegelka",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keyulu Xu, Chengtao Li, Yonglong Tian, Tomo- hiro Sonobe, Ken ichi Kawarabayashi, and Stefanie Jegelka. 2018. Representation learning on graphs with jumping knowledge networks. In Proc. of ICML.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semantic relation classification via convolutional neural networks with simple negative sampling",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015b. Semantic relation classifica- tion via convolutional neural networks with simple negative sampling. In Proc. of EMNLP.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Classifying relations via long short term memory networks along shortest dependency paths",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015c. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proc. of EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improved neural relation detection for knowledge base question answering",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Kazi",
"middle": [
"Saidul"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "C\u00edcero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, C\u00edcero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proc. of ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation ex- traction. In Proc. of EMNLP.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jian Zhao. 2014. Relation classification via con- volutional deep neural network. In Proc. of COL- ING.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Graph convolution over pruned dependency trees improves relation extraction",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Man- ning. 2018. Graph convolution over pruned depen- dency trees improves relation extraction. In Proc. of EMNLP.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Positionaware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In Proc. of EMNLP.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proc. of ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Comparison of C-AGGCN and C-GCN against different sentence lengths. The results of C-GCN are reproduced from(Zhang et al., 2018)."
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Results on the TACRED dataset. Model with * indicates that the results are reported in Zhang et al. (2017), while model with ** indicates the results are reported in Zhang et al. (2018).",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Results on the SemEval dataset.",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Results of C-AGGCN with pruned trees.",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "",
"num": null
}
}
}
}