| { |
| "paper_id": "P16-1047", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:56:08.817265Z" |
| }, |
| "title": "Neural Networks For Negation Scope Detection", |
| "authors": [ |
| { |
| "first": "Federico", |
| "middle": [], |
| "last": "Fancellu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Automatic negation scope detection is a task that has been tackled using different classifiers and heuristics. However, most systems are 1) highly engineered, 2) English-specific, and 3) only tested on the same genre they were trained on. We start by addressing 1) and 2) using a neural network architecture. Results obtained on data from the *SEM2012 shared task on negation scope detection show that even a simple feed-forward neural network using word-embedding features alone performs on par with earlier classifiers, with a bi-directional LSTM outperforming all of them. We then address 3) by means of a specially designed synthetic test set; in doing so, we explore the problem of negation scope detection in more depth and show that performance suffers from genre effects and differs with the type of negation considered.", |
| "pdf_parse": { |
| "paper_id": "P16-1047", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Automatic negation scope detection is a task that has been tackled using different classifiers and heuristics. However, most systems are 1) highly engineered, 2) English-specific, and 3) only tested on the same genre they were trained on. We start by addressing 1) and 2) using a neural network architecture. Results obtained on data from the *SEM2012 shared task on negation scope detection show that even a simple feed-forward neural network using word-embedding features alone performs on par with earlier classifiers, with a bi-directional LSTM outperforming all of them. We then address 3) by means of a specially designed synthetic test set; in doing so, we explore the problem of negation scope detection in more depth and show that performance suffers from genre effects and differs with the type of negation considered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Amongst the different extra-propositional aspects of meaning, negation is one that has received a lot of attention in the NLP community. Previous work has focused in particular on automatically detecting the scope of negation, that is, given a negative instance, identifying which tokens are affected by negation ( \u00a72). As shown in (1), only the first clause is negated, and we therefore mark he and the car, along with the predicate was driving, as inside the scope, while leaving the other tokens outside.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) He was not driving the car and she left to go home.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the biomedical domain there is a long line of research on this topic (e.g. Prabhakaran and Boguraev (2015)),", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 116, |
| "text": "Prabhakaran and Boguraev (2015)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "given the importance of recognizing negation for information extraction from medical records. In more general domains, efforts have been more limited and most of the work has centered around the *SEM2012 shared task on automatically detecting negation ( \u00a73), despite recent interest from other areas such as machine translation (Wetzel and Bond, 2012; Fancellu and Webber, 2014; Fancellu and Webber, 2015). The systems submitted for this shared task, although reaching good overall performance, are highly feature-engineered, with some relying on English-specific heuristics or on tools that are available for only a limited number of languages (e.g. Basile et al. (2012), Packard et al. (2014)), which makes them hard to port across languages. Moreover, the performance of these systems was only assessed on data of the same genre (stories from Conan Doyle's Sherlock Holmes), with no attempt to test the approach on data of a different genre.", |
| "cite_spans": [ |
| { |
| "start": 309, |
| "end": 332, |
| "text": "(Wetzel and Bond, 2012;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 333, |
| "end": 359, |
| "text": "Fancellu and Webber, 2014;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 360, |
| "end": 386, |
| "text": "Fancellu and Webber, 2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 630, |
| "end": 650, |
| "text": "Basile et al. (2012)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 653, |
| "end": 674, |
| "text": "Packard et al. (2014)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Given these shortcomings, we investigate whether neural-network-based sequence-to-sequence models ( \u00a7 4) are a valid alternative. The first advantage of neural-network-based methods for NLP is that we can perform classification by means of unsupervised word-embedding features alone, under the assumption that these also encode the structural information that previous systems had to represent explicitly as features. If this assumption holds, another advantage of continuous representations is that, by using a bilingual word-embedding space, we would be able to transfer the model cross-lingually, obviating the problem of the lack of annotated data in other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper makes the following contributions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. Comparable or better performance: We show that neural networks perform on par with previously developed classifiers, with a bi-directional LSTM outperforming them when tested on data from the same genre.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We analyze in more detail the difficulty of detecting negation scope by testing on data of a different genre, and find that the performance of word-embedding features is comparable to that of more fine-grained syntactic features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Better understanding of the problem:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We create a synthetic test set of negative sentences extracted from Simple English Wikipedia ( \u00a7 5) and annotated according to the guidelines released for the *SEM2012 shared task (Morante et al., 2011), which we hope will guide future work in the field.", |
| "cite_spans": [ |
| { |
| "start": 183, |
| "end": 204, |
| "text": "(Morante et al., 2011", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Creation of additional resources:", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Before formalizing the task, we begin by giving some definitions. A negative sentence n is defined as a vector of words w_1, w_2 ... w_n containing one or more negation cues, where a cue can be a word (e.g. not), a morpheme (e.g. im-patient) or a multi-word expression (e.g. by no means, no longer) inherently expressing negation. A word is a scope token if it is included in the scope of a negation cue. Following Blanco and Moldovan (2011), in the *SEM2012 shared task the negation scope is understood as part of a knowledge representation focused around a negated event along with its related semantic roles and adjuncts (or its head in the case of a nominal event). This is exemplified in (2) (from Blanco and Moldovan (2011)), where the scope includes the negated event eat along with the subject the cow, the object grass and the PP with a fork.", |
| "cite_spans": [ |
| { |
| "start": 414, |
| "end": 440, |
| "text": "Blanco and Moldovan (2011)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(2) The cow did n't eat grass with a fork.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each cue defines its own negation instance, here defined as a tuple I(n, c), where c \u2208 {0,1}^|n| is a vector of length |n| s.t. c_i = 1 if w_i is part of the cue and 0 otherwise. Given I, the goal of automatic scope detection is to predict a vector s \u2208 {O,I}^|n| s.t. s_i = I (inside the scope) if w_i is in the scope of the cue and s_i = O (outside) otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
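| { |
| "text": "As an illustrative sketch (ours, not the authors'), the encoding for example (2) would be: n = [The, cow, did, n't, eat, grass, with, a, fork], c = [0, 0, 0, 1, 0, 0, 0, 0, 0] marking the cue n't, and, assuming the cue token itself is tagged O, the gold output s = [I, I, I, O, I, I, I, I, I], since under the *SEM2012 guidelines the whole clause falls in the scope of the negated event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |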
| { |
| "text": "In (3) for instance, there are two cues, not and no longer, each one defining a separate negation instance, I1(n, c1) and I2(n, c2), and each with its own scope, s1 and s2. In both (3a) and (3b), n = [I, do, not, love, you, and, you, are, no, longer, invited]; in (3a), the vector c1 is 1 only at index 3 (w_3 = 'not'), while in (3b) c2 is 1 at positions 9 and 10 (where w_9 w_10 = 'no longer'); finally, the vectors s1 and s2 are I only at the indices of the words underlined and O everywhere else.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 257, |
| "text": "[I, do, not, love, you, and, you, are, no, longer, invited]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(3) a. I do not love you and you are no longer invited b. I do not love you and you are no longer invited", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "There are two main challenges involved in detecting the scope of negation: 1) a sentence can contain multiple instances of negation, sometimes nested, and 2) a scope can be discontinuous. As for 1), the classifier must correctly classify each word as being inside or outside the scope and assign each word to the correct scope; in (4) for instance, there are two negation cues and therefore two scopes, one spanning the entire sentence (4a) and the other only the subordinate clause (4b), with the latter nested in the former (given that, according to the guidelines, if we negate the event in the main clause, we also negate its cause).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(4) a. I did not drive to school because my wife was not feeling well. b. I did not drive to school because my wife was not feeling well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In (5), the classifier should instead be able to capture the long-range dependency between the subject and its negated predicate, while excluding the positive VP in the middle.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(5) Naomi went to visit her parents to give them a special gift for their anniversary but never came back.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the original task, the performance of the classifier is assessed in terms of precision, recall and F1 measure over the number of words correctly classified as part of the scope (scope tokens) and over the number of predicted scopes that exactly match the gold scopes (exact scope match). As for the latter, recall is a measure of accuracy, since we score how many scopes we fully predict (true positives) over the total number of scopes in our test set (true positives and false negatives); precision instead takes into consideration false positives, that is, those negation instances that are predicted as having a scope but in reality have none. This is the case for the interjection No (e.g. 'No, leave her alone'), which never takes scope. (One might object that the scope in (4) only spans the subordinate, given that it is the part of the scope most likely to be interpreted as false: It is not the case that I drove to school because my wife was not at home, but for other reasons. In the *SEM2012 shared task, however, this is defined separately as the focus of negation and considered part of the scope. One reason to distinguish the two is the high ambiguity of the focus: one can imagine, for instance, that if the speaker stresses the words to school, these will most likely be considered the focus and the statement interpreted as It is not the case that I drove to school because my wife was not feeling well (but I drove to the hospital instead).) Table 1 summarizes the performance of systems previously developed to resolve the scope of negation in non-biomedical texts.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1457, |
| "end": 1464, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The task", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In general, supervised classifiers perform better than rule-based systems, although it is a combination of hand-crafted heuristics and SVM rankers that achieves the best performance. Regardless of the approach used, the syntactic structure (either constituent- or dependency-based) of the sentence is often used to detect the scope of negation. This is because the position of the cue in the tree, along with the projection of its parent/governor, is a strong indicator of scope boundaries. Moreover, given that during training we essentially learn which syntactic patterns scopes are likely to span, one might hypothesize that such systems should scale well to other genres/domains, as long as a parse is available for the sentence; this, however, was never confirmed empirically. Although informative, these systems suffer from three main shortcomings: 1) they are highly engineered, with syntactic features added on top of PoS, word and lemma n-gram features, 2) they rely on the parser producing a correct parse and 3) they are English-specific.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Other systems (Basile et al., 2012; Packard et al., 2014) instead traverse a semantic representation. Packard et al. (2014) achieves the best results so far, using hand-crafted heuristics to traverse the MRS (Minimal Recursion Semantics) structures of negative sentences. If the semantic parser cannot create a reliable representation for a sentence, the system 'backs off' to a hybrid model that uses syntactic information instead. This system, however, suffers from the same shortcomings mentioned above, particularly since MRS representations can only be built for a small set of languages.", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 35, |
| "text": "(Basile et al., 2012;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 36, |
| "end": 57, |
| "text": "Packard et al., 2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 111, |
| "end": 132, |
| "text": "Packard et al. (2014)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this paper, we experiment with two different neural network architectures: a one-hidden-layer feed-forward neural network and a bi-directional LSTM (Long Short-Term Memory; BiLSTM below) model. We chose to 'start simple' with a feed-forward network to investigate whether even a simple model can reach good performance using word-embedding features only. We then turned to a BiLSTM because it is a better fit for the task. BiLSTMs are sequential models that operate in both forward and backward fashion; the backward pass is especially important in the case of negation scope detection, given that a scope token can appear in the string before the cue, so it is important to have seen the latter in order to classify the former. We opted for LSTM over plain RNN cells given that their inner composition is better able to retain useful information when backpropagating the error. 4 Both networks take as input a single negative instance I(n, c). We represent each word w_i \u2208 n as a d-dimensional word-embedding vector x \u2208 R^d (d = 50). In order to encode information about the cue, each word is also represented by a cue-embedding vector c \u2208 R^d of the same dimensionality as x. c can take only two representations, cue if c_i = 1, and notcue otherwise. We also define E_w \u2208 R^{v\u00d7d} as the word-embedding matrix, where v is the vocabulary size, and E_c \u2208 R^{2\u00d7d} as the cue-embedding matrix.", |
| "cite_spans": [ |
| { |
| "start": 886, |
| "end": 887, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In the case of the feed-forward neural network, the input for each word w_i \u2208 n is the concatenation of its representation with those of its neighboring words in a context window of length l. This is because feed-forward networks treat the input units as separate, so information about how words are arranged in sequences must be explicitly encoded in the input. We define these concatenations x_conc and c_conc as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "x_{w_i\u2212l} ... x_{w_i\u22121}; x_{w_i}; x_{w_i+1} ... x_{w_i+l} and c_{w_i\u2212l} ... c_{w_i\u22121}; c_{w_i}; c_{w_i+1} ... c_{w_i+l} respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We chose the value of l after analyzing the negation scopes in the dev set. We found that although the furthest scope tokens are 23 and 31 positions away from the cue on the left and the right respectively, 95% of the scope tokens fall in a window of 9 tokens to the left and 15 to the right, these two values being the window sizes we consider for our input. The probability of a given input is then computed as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h = \u03c3(W_x x_conc + W_c c_conc + b); y = g(W_y h + b_y)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where W and b are the weight and bias matrices, h the hidden-layer representation, \u03c3 the sigmoid activation function and g the softmax operation (g(z_m) = e^{z_m} / \u03a3_k e^{z_k}), used to assign the input a probability of belonging to either the inside (I) or outside (O) class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
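| { |
| "text": "In pseudocode (our sketch, not the authors' implementation), the feed-forward pass for one word w_i is: x_conc = concat(E_w[w_{i-l}] ... E_w[w_{i+l}]); c_conc = concat(E_c[c_{i-l}] ... E_c[c_{i+l}]); h = sigmoid(W_x x_conc + W_c c_conc + b); y = softmax(W_y h + b_y); predict I if y[I] > y[O], else O.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |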
| { |
| "text": "In the BiLSTM, no concatenation is performed, given that the structure of the network is already sequential. The input to the network for each word w_i is the word-embedding vector x_{w_i} and the cue-embedding vector c_{w_i}, where w_i constitutes a time step. The computation of the hidden layer at time t and of the output can be represented as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "i_t = \u03c3(W_x^{(i)} x + W_c^{(i)} c + W_h^{(i)} h_{t\u22121} + b^{(i)}) \nf_t = \u03c3(W_x^{(f)} x + W_c^{(f)} c + W_h^{(f)} h_{t\u22121} + b^{(f)}) \no_t = \u03c3(W_x^{(o)} x + W_c^{(o)} c + W_h^{(o)} h_{t\u22121} + b^{(o)}) \nc\u0303_t = tanh(W_x^{(c)} x + W_c^{(c)} c + W_h^{(c)} h_{t\u22121} + b^{(c)}) \nc_t = f_t \u2022 c_{t\u22121} + i_t \u2022 c\u0303_t \nh_{back/forw} = o_t \u2022 tanh(c_t) \ny_t = g(W_y (h_back; h_forw) + b_y)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) where the W are the weight matrices, h_{t\u22121} the hidden-layer state at time t\u22121, i_t, f_t and o_t the input, forget and output gates at time t, c\u0303_t the candidate cell state, and (h_back; h_forw) the concatenation of the backward and forward hidden layers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, in both networks our training objective is to minimise, for each negative instance, the negative log likelihood J(W, b) of the correct predictions against the gold labels:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "J(W, b) = \u2212(1/l) \u03a3_{i=1}^{l} [y^{(w_i)} log h_\u03b8(x^{(w_i)}) + (1 \u2212 y^{(w_i)}) log(1 \u2212 h_\u03b8(x^{(w_i)}))]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where l is the length of the sentence n \u2208 I, h_\u03b8(x^{(w_i)}) the predicted probability that the word w_i belongs to the I or O class and y^{(w_i)} its gold label. An overview of both architectures is shown in Figure 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 207, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |
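| { |
| "text": "Putting the pieces together, a pseudocode sketch of one training step (ours, not the authors') is: for each negative instance I(n, c), look up x_{w_i} = E_w[w_i] and c_{w_i} = E_c[c_i] at every time step i; run the forward and backward LSTM passes to obtain h_forw and h_back at each step; compute y_i = g(W_y (h_back; h_forw) + b_y); accumulate J(W, b) over the l time steps against the gold labels; update the W, b, E_w and E_c parameters by backpropagation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scope detection using Neural Networks", |
| "sec_num": "4" |
| }, |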
| { |
| "text": "The training, development and test sets are a collection of stories from Conan Doyle's Sherlock Holmes, annotated for cue and scope of negation and released in conjunction with the *SEM2012 shared task. 5 For each word, the corresponding lemma, POS tag and the constituent subtree it belongs to are also annotated. If a sentence contains multiple instances of negation, each is annotated separately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Both training and testing are done on negative sentences only, i.e. those sentences with at least one annotated cue. The training and test sets contain 848 and 235 sentences respectively. If a sentence contains multiple negation instances, we create as many copies of it as there are instances. If the sentence contains a morphological cue (e.g. impatient), we split it into affix (im-) and root (patient), and consider the former as the cue and the latter as part of the scope.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Both neural network architectures are implemented using TensorFlow (Abadi et al., 2015) with a 200-unit hidden layer (400 in total for the two concatenated hidden layers in the BiLSTM), the Adam optimizer (Kingma and Ba, 2014) with a starting learning rate of 0.0001, learning-rate decay after 10 iterations without improvement, and early stopping. In both cases we experimented with different settings:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "1. Simple baseline: In order to understand how hard the task of negation scope detection is, we created a simple baseline by tagging as part of the scope all tokens from 4 words to the left to 6 words to the right of the cue; these values were found to be the average span of the scope in either direction in the training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The word-embedding matrix is randomly initialised and updated relying on the training data only. Information about the cue is fed through a separate set of embedding vectors, as described in \u00a74. This resembles the 'Closed track' of the *SEM2012 shared task, since no external resources are used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cue info (C):", |
| "sec_num": "2." |
| }, |
| { |
| "text": "3. Cue info + external embeddings (E): This is the same as setting (2) except that the embeddings are pre-trained using external data. We experimented with both keeping the word-embedding matrix fixed and updating it during training, but found little or no difference between the two settings. For pre-training, we built a word-embedding matrix using Word2Vec (Mikolov et al., 2013) on 770 million tokens (for a total of 30 million sentences and 791,028 types) from the 'One Billion Words Language Modelling' dataset 6 combined with the Sherlock Holmes data (5,520 sentences). The dataset was tokenized and morphological cues were split into negation affix and root to match the Conan Doyle data. In order to perform this split, we matched each word against a hand-crafted list of words containing affixal negation 7 ; this method has an accuracy of 0.93 on the Conan Doyle test data.", |
| "cite_spans": [ |
| { |
| "start": 355, |
| "end": 377, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cue info (C):", |
| "sec_num": "2." |
| }, |
| { |
| "text": "This setting was mainly intended to assess whether we could obtain further improvement by adding additional information. For each of the settings above, we also add an extra embedding input vector for the POS or universal POS tag of each word w_i. As with the word and cue embeddings, PoS-embedding information is fed to the hidden layer through a separate weight matrix. When pre-trained, the training data for the external PoS-embedding matrix is the same as that used for building the word-embedding representation, except that in this case we feed the PoS / universal PoS tag of each word. As in (3), we experimented with both updating the tag-embedding matrix and keeping it fixed, but again found little or no difference between the two settings. In order to maintain consistency with the original data, we perform PoS tagging using the GENIA tagger (Tsuruoka et al., 2005) 8 and then map the resulting tags to universal POS tags. 9", |
| "cite_spans": [ |
| { |
| "start": 824, |
| "end": 847, |
| "text": "(Tsuruoka et al., 2005)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adding PoS / Universal PoS information (PoS/uni PoS):", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The results for the scope detection task are shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 61, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Results for both architectures when word-embedding features only are used (C and C + E) show that neural networks are a valid alternative for scope detection, with the bi-directional LSTM outperforming all previously developed classifiers on both scope token recognition and exact scope matching. Moreover, the bi-directional LSTM shows performance similar to the hybrid system of Packard et al. (2014) (rule-based + SVM as a back-off) in the absence of any hand-crafted heuristics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "It is also worth noting that although pre-training the word-embedding and PoS-embedding matrices on external data leads to a slight improvement in performance, the performance of the systems using internal data only is already competitive; this is a particularly positive result considering that the training data set is relatively small.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, adding universal-POS information leads to better performance in most cases. The fact that the best system is built using language-independent features only is an important result when considering the portability of the model across different languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In order to understand the kind of errors our best classifier makes, we performed an error analysis on the held-out set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "First, we investigated whether per-instance prediction accuracy correlates with scope-related variables (length of the scope to the left, to the right and combined; maximum length of the gap in a discontinuous scope) and cue-related variables (type of cue: one-word, prefixal, suffixal, multi-word). We also checked whether the neural network is biased towards the words it has seen in training (for instance, if it has always seen a token labeled as O, it will then classify it as O). For our best BiLSTM system, we found only weak to moderate negative correlations with the following variables:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 length of the gap, if the scope is discontinuous (r=-0.1783, p = 0.004);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 overall scope length (r=-0.3529, p < 0.001);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 scope length to the left and to the right (r=-0.3251 and -0.2659 respectively with p < 0.001)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 presence of a prefixal cue (r=-0.1781, p = 0.004)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 presence of a multi-word cue (r=-0.1868, p = 0.0023). These correlations are weak, meaning that the variables considered are not strong enough to count as error patterns. For this reason, we also manually analyzed the 96 negation scopes that the best BiLSTM system predicted incorrectly and noticed several error patterns:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
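The correlations above are plain Pearson coefficients between a per-instance correctness indicator and each variable. As a minimal sketch of that computation (toy data, not the paper's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: longer scopes tend to be predicted less accurately,
# which yields a negative r, as in the analysis above.
scope_len = [2, 4, 6, 8, 10, 12]
accuracy  = [1.0, 0.9, 0.8, 0.8, 0.6, 0.5]
r = pearson_r(scope_len, accuracy)
```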
| { |
| "text": "\u2022 in 5 cases, the scope should span only the subordinate clause but ends up including elements from the main clause. In (6), for instance, where the system prediction is reported in curly brackets, the BiLSTM includes the main predicate and its subject in the scope.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(6) You felt so strongly about it that {I knew you could} not {think of Beecher without thinking of that also} .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 in 5 cases, the system makes an incorrect prediction in the presence of syntactic inversion, where a subordinate clause appears before the main clause; in (7), for instance, the system extends the prediction to the main clause when the scope should instead span only the subordinate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(7) But {if she does} not {wish to shield him she would give his name}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 in 8 cases, where two VPs, one positive and one negative, are coordinated, the system ends up including the positive VP in the scope as well, as shown in (8). We hypothesize this is due to the lack of such examples in the training set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(8) Ah, {you do} n't {know Sarah 's temper or you would wonder no more} .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As in Packard et al. (2014) , we also noticed that in 15 cases the gold annotations do not follow the guidelines; in the case of a negated adverb in particular, as shown in (9a) and (9b), the annotations do not seem to agree on whether to consider as scope only the adverb or the entire clause around it. Table 2 : Results for the scope detection task on the held-out set. Results are plotted against the simple baseline, the best system so far (Packard et al., 2014) and the system with the highest F1 for scope-token classification amongst the ones submitted for the *SEM2012 shared task. We also report the number of gold scope tokens, true positives (tp), false positives (fp) and false negatives (fn). 5 Evaluation on synthetic data set", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 27, |
| "text": "Packard et al. (2014)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 442, |
| "end": 464, |
| "text": "(Packard et al., 2014)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 302, |
| "end": 309, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error analysis", |
| "sec_num": "4.3" |
| }, |
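The tp/fp/fn counts reported in Table 2 feed the standard token-level precision/recall/F1 used for scope-token classification. A minimal sketch of that computation over simplified in-scope/out-of-scope labels ("I"/"O" are our own shorthand, not the shared task's file format):

```python
def token_prf(gold, pred):
    """Token-level precision, recall and F1 for in-scope ('I') tokens."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == "I")   # true positives
    fp = sum(1 for g, p in zip(gold, pred) if g == "O" and p == "I")
    fn = sum(1 for g, p in zip(gold, pred) if g == "I" and p == "O")
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = ["O", "I", "I", "I", "O"]
pred = ["O", "I", "I", "O", "O"]   # misses one gold scope token
p, r, f = token_prf(gold, pred)    # tp=2, fp=0, fn=1
```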
| { |
| "text": "One question left unanswered by previous work is whether the performance of scope detection classifiers is robust against data of a different genre and whether different types of negation lead to differences in performance. To answer this, we compare two of our systems with the only original submission to the *SEM2012 shared task we found available (White, 2012) 10 . We decided to use both our best system, BiLSTM+C+UniPoS+E, and a sub-optimal system, BiLSTM+C+E, to also assess the robustness of non-English-specific features. The synthetic test set used here is built on sentences extracted from Simple Wikipedia and manually annotated for cue and scope according to the annotation guidelines released in conjunction with the *SEM2012 shared task (Morante et al., 2011) . We created 7 different subsets to test different types of negative sentences:", |
| "cite_spans": [ |
| { |
| "start": 739, |
| "end": 761, |
| "text": "(Morante et al., 2011)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Simple: we randomly picked 50 positive sentences, containing only one predicate, no dates and no named entities, and made them negative by adding a negation cue (do-support or minor morphological changes were added when required). If more than one lexical negation cue fit the context, we used them all, creating more than one negative counterpart, as shown in (10). The sentences were picked to contain different kinds of predicates (verbal, existential, nominal, adjectival) . Lexical: we randomly picked 10 sentences 11 for each lexical (i.e. one-word) cue in the training data (these are not, no, none, nobody, never, without). Prefixal: we randomly picked 10 sentences for each prefixal cue in the training data (un-, im-, in-, dis-, ir-).", |
| "cite_spans": [ |
| { |
| "start": 439, |
| "end": 481, |
| "text": "(verbal, existential, nominal, adjectival)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Suffixal: we randomly picked 10 sentences for the suffixal cue -less.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Multi-word: we randomly picked 10 sentences for each multi-word cue (neither...nor,no longer,by no means).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.1" |
| }, |
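The Simple subset's construction, adding a cue with do-support where required, can be sketched as a toy rule for 3rd-person-singular present tense (the `negate` helper and its de-inflection rule are our own naive illustration; the actual test-set edits were done by hand):

```python
# Naive sketch: negate a simple present-tense sentence by inserting a cue,
# with crude do-support for lexical verbs (3rd person singular only).
def negate(tokens, verb_index, cue="not"):
    verb = tokens[verb_index]
    if verb in {"is", "are", "was", "were"}:           # copula: no do-support
        return tokens[:verb_index + 1] + [cue] + tokens[verb_index + 1:]
    stem = verb[:-1] if verb.endswith("s") else verb   # crude de-inflection
    return tokens[:verb_index] + ["does", cue, stem] + tokens[verb_index + 1:]

negate(["She", "likes", "tea"], 1)       # ['She', 'does', 'not', 'like', 'tea']
negate(["The", "sky", "is", "blue"], 2)  # ['The', 'sky', 'is', 'not', 'blue']
```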
| { |
| "text": "Unseen: we include 10 sentences for each of the negative prefixes a-(e.g. a-cyclic), ab-(e.g. ab-normal) and non-(e.g. non-Communist) that are not annotated as cues in the Conan Doyle corpus, to test whether the system can generalise the classification to unseen cues. 12 Table 3 shows the results of the comparison on the synthetic test set. The first thing worth noticing is that using word-embedding features only, it is possible to reach performance comparable to a classifier using syntactic features, with universal PoS tags generally contributing to better performance; this is particularly evident in the multi-word and lexical subsets. In general, genre effects hinder both systems; however, considering that the training data contains fewer than 1000 sentences, the results are relatively good. Performance gets worse when dealing with morphological cues and in particular, in the case of our classifier, with suffixal cues; on closer inspection, however, such poor performance is attributable to a discrepancy between the annotation guidelines and the training data, already noted in \u00a74.4. The guidelines in fact state that 'If the negated affix is attached to an adverb that is a complement of a verb, the negation scopes over the entire clause' (Morante et al., 2011, p. 21) , and we annotated suffixal negation in this way. However, 3 out of 4 examples of suffixal negation in adverbs in the training data (e.g. 9a) mark the scope on the adverbial root only, and that is what our classifiers learn to do.", |
| "cite_spans": [ |
| { |
| "start": 1259, |
| "end": 1288, |
| "text": "(Morante et al., 2011, p. 21)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 267, |
| "end": 274, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Finally, it can be noticed that our system does worse at exact scope matching than the CRF classifier. This is because White (2012)'s CRF model is built on constituency-based features and predicts scope tokens based on constituent boundaries (which, as we said, are good indicators of scope boundaries), while neural networks, basing the prediction only on word-embedding information, may extend the prediction over these boundaries or leave 'gaps' within.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
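The divergence between token-level and exact-match scores noted here can be made concrete: a single 'gap' inside a predicted scope leaves token overlap high while exact match fails outright. A toy illustration with our own simplified I/O labels:

```python
def exact_match(gold, pred):
    """Exact scope match: every token label must agree."""
    return gold == pred

def token_recall(gold, pred):
    """Fraction of gold in-scope ('I') tokens recovered by the prediction."""
    hits = sum(1 for g, p in zip(gold, pred) if g == p == "I")
    total = sum(1 for g in gold if g == "I")
    return hits / total if total else 1.0

gold = ["I", "I", "I", "I", "O"]
pred = ["I", "I", "O", "I", "O"]   # one gap inside an otherwise correct scope
```

Here `token_recall(gold, pred)` is 0.75 even though `exact_match` fails, which is exactly the failure mode a constituent-bounded predictor avoids.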
| { |
| "text": "In this work, we investigated and confirmed that neural network sequence-to-sequence models are a valid alternative for the task of detecting the scope of negation. In doing so, we offer a detailed analysis of their performance on data of different genres containing different types of negation, also in comparison with previous classifiers, and found that non-English-specific continuous representations can perform better than or on par with more fine-grained structural features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Future work can be directed towards answering two main questions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Can we improve the performance of our classifier? To do this, we are going to explore whether adding language-independent structural information (e.g. universal dependency information) can help performance on exact scope matching.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Can we transfer our model to other languages? Most importantly, we are going to test the model using word-embedding features extracted from a bilingual embedding space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In the *SEM2012 shared task, negation is not considered as a downward monotone function and definite expressions are included in its scope.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For more details on LSTMs and the related mathematical formulations, we refer the reader to Hochreiter and Schmidhuber (1997).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For statistics regarding the data, we refer the reader to Morante and Blanco (2012).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Available at https://code.google.com/archive/p/word2vec/ 7 The list was courtesy of Ulf Hermjakob and Nathan Schneider. 8 https://github.com/saffsd/geniatagger 9 Mapping available at https://github.com/slavpetrov/universal-pos-tags", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In order for the results to be comparable, we feed White's system the cues from the gold standard instead of detecting them automatically.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In some cases, we ended up with more than 10 examples for some cues, given that some of the sentences we picked contained more than one negation instance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The data, along with the code, is freely available at https://github.com/ffancellu/NegNN", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This project was also funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 644402 (HimL). The authors would like to thank Naomi Saphra, Nathan Schneider and Clara Vania for their valuable suggestions, and the three anonymous reviewers for their comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Tensorflow: Large-scale machine learning on heterogeneous systems", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "References M Abadi", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Barham", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brevdo", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Citro", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gs Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "References M Abadi, A Agarwal, P Barham, E Brevdo, Z Chen, C Citro, GS Corrado, A Davis, J Dean, M Devin, et al. 2015. Tensorflow: Large-scale machine learn- ing on heterogeneous systems. White paper, Google Research.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Umichigan: A conditional random field model for resolving the scope of negation", |
| "authors": [ |
| { |
| "first": "Amjad", |
| "middle": [], |
| "last": "Abu", |
| "suffix": "" |
| }, |
| { |
| "first": "-Jbara", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Dragomir", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "328--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amjad Abu-Jbara and Dragomir Radev. 2012. Umichigan: A conditional random field model for resolving the scope of negation. In Proceedings of the First Joint Conference on Lexical and Computa- tional Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation, pages 328-334. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Ucm-2: a rule-based approach to infer the scope of negation via dependency parsing", |
| "authors": [ |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "D\u00edaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Virginia", |
| "middle": [], |
| "last": "Francisco", |
| "suffix": "" |
| }, |
| { |
| "first": "Pablo", |
| "middle": [], |
| "last": "Gerv\u00e1s", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Carrillo De Albornoz", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Plaza", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "288--293", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miguel Ballesteros, Alberto D\u00edaz, Virginia Francisco, Pablo Gerv\u00e1s, Jorge Carrillo De Albornoz, and Laura Plaza. 2012. Ucm-2: a rule-based approach to infer the scope of negation via dependency pars- ing. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation, pages 288-293. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Ugroningen: Negation detection with discourse representation structures", |
| "authors": [ |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Bos", |
| "suffix": "" |
| }, |
| { |
| "first": "Kilian", |
| "middle": [], |
| "last": "Evang", |
| "suffix": "" |
| }, |
| { |
| "first": "Noortje", |
| "middle": [], |
| "last": "Venhuizen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "301--309", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valerio Basile, Johan Bos, Kilian Evang, and Noortje Venhuizen. 2012. Ugroningen: Negation detection with discourse representation structures. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 301-309. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Some issues on detecting negation from text", |
| "authors": [ |
| { |
| "first": "Eduardo", |
| "middle": [], |
| "last": "Blanco", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [ |
| "I" |
| ], |
| "last": "Moldovan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "FLAIRS Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "228--233", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduardo Blanco and Dan I Moldovan. 2011. Some issues on detecting negation from text. In FLAIRS Conference, pages 228-233. Citeseer.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Fbk: Exploiting phrasal and contextual clues for negation scope detection", |
| "authors": [ |
| { |
| "first": "Md", |
| "middle": [], |
| "last": "Chowdhury", |
| "suffix": "" |
| }, |
| { |
| "first": "Faisal", |
| "middle": [], |
| "last": "Mahbub", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "340--346", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Md Chowdhury and Faisal Mahbub. 2012. Fbk: Exploiting phrasal and contextual clues for nega- tion scope detection. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main con- ference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation, pages 340-346. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Ucm-i: A rule-based syntactic approach for resolving the scope of negation", |
| "authors": [ |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Carrillo De Albornoz", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Plaza", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "D\u00edaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "282--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jorge Carrillo de Albornoz, Laura Plaza, Alberto D\u00edaz, and Miguel Ballesteros. 2012. Ucm-i: A rule-based syntactic approach for resolving the scope of nega- tion. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation, pages 282-287. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Applying the semantics of negation to smt through nbest list re-ranking", |
| "authors": [ |
| { |
| "first": "Federico", |
| "middle": [], |
| "last": "Fancellu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "L" |
| ], |
| "last": "Webber", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "598--606", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Federico Fancellu and Bonnie L Webber. 2014. Ap- plying the semantics of negation to smt through n- best list re-ranking. In EACL, pages 598-606.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Translating negation: A manual error analysis", |
| "authors": [ |
| { |
| "first": "Federico", |
| "middle": [], |
| "last": "Fancellu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Federico Fancellu and Bonnie Webber. 2015. Trans- lating negation: A manual error analysis. ExProM 2015, page 1.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Uabcoral: a preliminary study for resolving the scope of negation", |
| "authors": [ |
| { |
| "first": "Binod", |
| "middle": [], |
| "last": "Gyawali", |
| "suffix": "" |
| }, |
| { |
| "first": "Thamar", |
| "middle": [], |
| "last": "Solorio", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation", |
| "volume": "1", |
| "issue": "", |
| "pages": "275--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Binod Gyawali and Thamar Solorio. 2012. Uabco- ral: a preliminary study for resolving the scope of negation. In Proceedings of the First Joint Con- ference on Lexical and Computational Semantics- Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evalua- tion, pages 275-281. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.6980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Uio 2: sequence-labeling negation using dependency features", |
| "authors": [ |
| { |
| "first": "Emanuele", |
| "middle": [], |
| "last": "Lapponi", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathon", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "319--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emanuele Lapponi, Erik Velldal, Lilja \u00d8vrelid, and Jonathon Read. 2012. Uio 2: sequence-labeling negation using dependency features. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 319-327. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "* sem 2012 shared task: Resolving the scope and focus of negation", |
| "authors": [ |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Morante", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduardo", |
| "middle": [], |
| "last": "Blanco", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "265--274", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roser Morante and Eduardo Blanco. 2012. * sem 2012 shared task: Resolving the scope and focus of nega- tion. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation, pages 265-274. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Annotation of negation cues and their scope: Guidelines v1. Computational linguistics and psycholinguistics technical report series", |
| "authors": [ |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Morante", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Schrauwen", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roser Morante, Sarah Schrauwen, and Walter Daele- mans. 2011. Annotation of negation cues and their scope: Guidelines v1. Computational linguistics and psycholinguistics technical report series, CTRS- 003.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem", |
| "authors": [ |
| { |
| "first": "Woodley", |
| "middle": [], |
| "last": "Packard", |
| "suffix": "" |
| }, |
| { |
| "first": "Emily", |
| "middle": [ |
| "M" |
| ], |
| "last": "Bender", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathon", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Dridan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "69--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Woodley Packard, Emily M Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Sim- ple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In ACL (1), pages 69-78.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning structures of negations from flat annotations", |
| "authors": [ |
| { |
| "first": "Vinodkumar", |
| "middle": [], |
| "last": "Prabhakaran", |
| "suffix": "" |
| }, |
| { |
| "first": "Branimir", |
| "middle": [], |
| "last": "Boguraev", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Lexical and Computational Semantics (* SEM 2015)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinodkumar Prabhakaran and Branimir Boguraev. 2015. Learning structures of negations from flat an- notations. Lexical and Computational Semantics (* SEM 2015), page 71.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Uio 1: Constituent-based discriminative ranking for negation resolution", |
| "authors": [ |
| { |
| "first": "Jonathon", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| }, |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "310--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathon Read, Erik Velldal, Lilja \u00d8vrelid, and Stephan Oepen. 2012. Uio 1: Constituent-based discriminative ranking for negation resolution. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 310-318. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Developing a robust partof-speech tagger for biomedical text", |
| "authors": [ |
| { |
| "first": "Yoshimasa", |
| "middle": [], |
| "last": "Tsuruoka", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuka", |
| "middle": [], |
| "last": "Tateishi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jin-Dong", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Mcnaught", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophia", |
| "middle": [], |
| "last": "Ananiadou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Advances in informatics", |
| "volume": "", |
| "issue": "", |
| "pages": "382--392", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshimasa Tsuruoka, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun'ichi Tsujii. 2005. Developing a robust part- of-speech tagger for biomedical text. Advances in informatics, pages 382-392.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Speculation and negation: Rules, rankers, and the role of syntax", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [], |
| "last": "Velldal", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathon", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Computational linguistics", |
| "volume": "38", |
| "issue": "", |
| "pages": "369--410", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik Velldal, Lilja \u00d8vrelid, Jonathon Read, and Stephan Oepen. 2012. Speculation and negation: Rules, rankers, and the role of syntax. Computa- tional linguistics, 38(2):369-410.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Enriching parallel corpora for statistical machine translation with semantic negation rephrasing", |
| "authors": [ |
| { |
| "first": "Dominikus", |
| "middle": [], |
| "last": "Wetzel", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Bond", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "20--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dominikus Wetzel and Francis Bond. 2012. Enrich- ing parallel corpora for statistical machine transla- tion with semantic negation rephrasing. In Proceed- ings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 20- 29. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Uwashington: Negation resolution using machine learning methods", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Paul White", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
| "volume": "1", |
| "issue": "", |
| "pages": "335--339", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Paul White. 2012. Uwashington: Negation res- olution using machine learning methods. In Pro- ceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceed- ings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 335-339. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "An example of scope detection using feed-forward and BiLSTM for the tokens 'you are no longer invited' in the instance in ex. (3b).", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "10) a. Many people disagree on the topic b. Many people do not disagree on the topic c. Many people never disagree on the topic", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td/><td/><td/><td colspan=\"3\">Scope tokens 3</td><td colspan=\"2\">Exact scope match</td></tr><tr><td/><td/><td>Method</td><td colspan=\"2\">Prec. Rec.</td><td>F 1</td><td colspan=\"2\">Prec. Rec.</td><td>F 1</td></tr><tr><td>*SEM2012</td><td>Closed track UiO1 (Open track UiO2 (Lapponi et al., 2012) UGroningen (Basile et al., 2012) UCM-1 (de Albornoz et al., 2012) UCM-2 (Ballesteros et al., 2012)</td><td>CRF rule-based rule-based rule-based</td><td colspan=\"5\">82.25 82.16 82.20 85.71 62.65 72.39 69.20 82.27 75.15 76.12 40.96 53.26 85.37 68.53 76.03 82.86 46.59 59.64 58.30 67.70 62.65 67.13 38.55 48.98</td></tr><tr><td/><td>Packard et al. (2014)</td><td colspan=\"2\">heuristics + SVM 86.1</td><td>90.4</td><td>88.2</td><td>98.8</td><td>65.5</td><td>78.7</td></tr></table>", |
| "html": null, |
| "text": "heuristics+ SVM 81.99 88.81 85.26 87.43 61.45 72.17 UiO2(Lapponi et al., 2012) CRF 86.03 81.55 83.73 85.71 62.65 72.39 FBK (Chowdhury and Mahbub, 2012) CRF 81.53 82.44 81.89 88.96 58.23 70.39 UWashington (White, 2012) CRF 83.26 83.77 83.51 82.72 63.45 71.81 UMichigan (Abu-Jbara and Radev, 2012) CRF 84.85 80.66 82.70 90.00 50.60 64.78 UABCoRAL (Gyawali and Solorio, 2012) SVM 85.37 68.86 76.23 79.04 53.01 63.46", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "text": "Summary of previous work on automatic detection of negation scope.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">Scope tokens</td><td/><td/><td colspan=\"3\">Exact scope match</td></tr><tr><td/><td>Data</td><td>gold</td><td>tp</td><td>fp</td><td>fn</td><td>Prec.</td><td>Rec.</td><td>F 1</td><td>Prec.</td><td>Rec.</td><td>F 1</td></tr><tr><td>White (2012)</td><td colspan=\"6\">simple 20 100suffixal 850 830 0 100 78 7</td><td/><td/><td/><td/></tr></table>", |
| "html": null, |
| "text": ".00 97.65 98.81 100.00 93.98 96.90 lexical 814 652 101 162 86.59 80.10 83.22 100.00 58.41 73.75 prefixal 316 232 103 83 68.98 73.40 71.12 100.00 32.76 49.35", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "html": null, |
| "text": "Results for the scope detection task on the synthetic test set.", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |