{
"paper_id": "S12-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:15.832808Z"
},
"title": "UiO 1 : Constituent-Based Discriminative Ranking for Negation Resolution",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "jread@ifi.uio.no"
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "erikve@ifi.uio.no"
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": "liljao@ifi.uio.no"
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oslo",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the first of two systems submitted from the University of Oslo (UiO) to the 2012 *SEM Shared Task on resolving negation. Our submission is an adaption of the negation system of Velldal et al. (2012), which combines SVM cue classification with SVM-based ranking of syntactic constituents for scope resolution. The approach further extends our prior work in that we also identify factual negated events. While submitted for the closed track, the system was the top performer in the shared task overall.",
"pdf_parse": {
"paper_id": "S12-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the first of two systems submitted from the University of Oslo (UiO) to the 2012 *SEM Shared Task on resolving negation. Our submission is an adaption of the negation system of Velldal et al. (2012), which combines SVM cue classification with SVM-based ranking of syntactic constituents for scope resolution. The approach further extends our prior work in that we also identify factual negated events. While submitted for the closed track, the system was the top performer in the shared task overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The First Joint Conference on Lexical and Computational Semantics (*SEM 2012) hosts a shared task on resolving negation (Morante and Blanco, 2012) . This involves the subtasks of (i) identifying negation cues, (ii) identifying the in-sentence scope of these cues, and (iii) identifying negated (and factual) events. This paper describes a system submitted by the Language Technology Group at the University of Oslo (UiO). Our starting point is the negation system developed by Velldal et al. (2012) for the domain of biomedical texts, an SVM-based system for classifying cues and ranking syntactic constituents to resolve cue scopes. However, we extend and adapt this system in several important respects, such as in terms of the underlying linguistic formalisms that are used, the textual domain, handling of morphological cues and discontinuous scopes, and in that the current system also identifies negated events.",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Morante and Blanco, 2012)",
"ref_id": "BIBREF3"
},
{
"start": 477,
"end": 498,
"text": "Velldal et al. (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data sets used for the shared task include the following, all based on negation-annotated Conan Doyle (CD) stories (Morante and Daelemans, 2012) : a training set of 3644 sentences (hereafter referred to as CDT), a development set of 787 sentences (CDD), and a held-out evaluation set of 1089 sentences (CDE). We will refer to the combination of CDT and CDD as CDTD. An example of an annotated sentence is shown in (1) below, where the cue is marked in bold, the scope is underlined, and the event marked in italics.",
"cite_spans": [
{
"start": 119,
"end": 148,
"text": "(Morante and Daelemans, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) There was no answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe two different system configurations, both of which were submitted for the closed track (hence we can only make use of the data provided by the task organizers). The systems only differ with respect to how they were optimized. In the first configuration, (hereafter I), all components in the pipeline had their parameters tuned by 10-fold cross-validation across CDTD. The second configuration (II) is tuned against CDD using CDT for training. The rationale for this strategy is to guard against possible overfitting effects that could result from either optimization scheme, given the limited size of the data sets. For the held-out testing all models are estimated on the entire CDTD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unless otherwise noted, all reported scores are generated using the evaluation script provided by the organizers, which breaks down performance with respect to cues, events, scope tokens, and two variants of scope-level exact match (one requiring exact match of cues and the other only partial cue match). The latter two scores are identical for our system hence are not duplicated in this paper. Furthermore, as we did not optimize for the scope tokens measure this is only reported for the final evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Note also that the evaluation actually includes two variants of the metrics mentioned above; a set of primary measures with precision computed as P = T P/(T P + F P ) and a set of so-called B measures that instead uses P = T P/S, where S is the total number of predictions made by the system. The reason why S is not identical with T P + F P is that partial matches are only counted as FNs (and not FPs) in order to avoid double penalties. We do not report the B measures for development testing as they were only introduced for the final evaluation and hence were not considered in our system optimization. We note though, that the relativeranking of participating systems for the primary and B measures is identical, and that the correlation between the paired lists of scores is nearly perfect (r = 0.997).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
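The relationship between the two precision variants can be sketched as follows (a minimal illustration with hypothetical counts; the variable names are our own, not taken from the shared-task evaluation script):

```python
def precision_primary(tp: int, fp: int) -> float:
    """Primary measure: P = TP / (TP + FP)."""
    return tp / (tp + fp)

def precision_b(tp: int, s: int) -> float:
    """B measure: P = TP / S, where S is the total number of system
    predictions. S can exceed TP + FP because partial matches are
    counted only as FNs, not as FPs."""
    return tp / s

# Hypothetical counts: 80 exact matches, 5 clear false positives,
# and 90 predictions in total (5 partial matches penalized as FNs only).
p_primary = precision_primary(80, 5)   # 80/85
p_b = precision_b(80, 90)              # 80/90
```

Since S >= TP + FP, the B precision is never higher than the primary precision, which is consistent with the near-perfect rank correlation reported above.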
{
"text": "The paper is structured according to the components of our system. Section 2 details the process of identifying instances of negation through the disambiguation of known cue words and affixes. Section 3 describes our hybrid approach to scope resolution, which utilizes both heuristic and data-driven methods to select syntactic constituents. Section 4 discusses our event detection component, which first applies a classifier to filter out non-factual events and then uses a learned ranking function to select events among in-scope tokens. End-to-end results are presented in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cue identification is based on the light-weight classification scheme presented by Velldal et al. (2012) . By treating the set of cue words as a closed class, Velldal et al. (2012) showed that one could greatly reduce the number of examples presented to the learner, and correspondingly the number of features, while at the same time improving performance. This means that the classifier only attempts to 'disambiguate' known cue words, while ignoring any words not observed as cues in the training data.",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "Velldal et al. (2012)",
"ref_id": "BIBREF7"
},
{
"start": 159,
"end": 180,
"text": "Velldal et al. (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "2"
},
{
"text": "The classifier applied in the current submission is extended to also handle morphological or affixal negation cues, such as the prefix cue in impatience, the infix in carelessness, and the suffix of colourless. The negation affixes observed in CDTD are; the prefixes un, dis, ir, im, and in; the infix less (we internally treat this as the suffixes lessly and lessness); and the suffix less. Of the total set of 1157 cues in the training and development data, 192 are affixal. There are, however, a total of 1127 tokens matching one of the affix patterns above, and while we main-tain the closed class assumption also for the affixes, the classifier will need to consider their status as a cue or non-cue when attaching to any such token, as in image, recklessness, and bless.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "2"
},
{
"text": "In the initial formulation of Velldal (2011) , an SVM classifier was applied using simple n-gram features over words, both full forms and lemmas, to the left and right of the candidate cues. In addition to these token-level features, the classifier we apply here includes features specifically targeting affixal cues. The first such feature records character ngrams from both the beginning and end of the base that an affix attaches to (up to five positions). For a context like impossible we would record n-grams such as {possi, poss, . . . } and {sible, ible, . . . }, and combine this with information about the affix itself (im) and the token part-of-speech (\"JJ\").",
"cite_spans": [
{
"start": 30,
"end": 44,
"text": "Velldal (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.1"
},
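A rough sketch of this feature template (the feature names and exact encoding are our own illustration, not the system's actual representation):

```python
def affix_ngram_features(token: str, affix: str, pos: str, max_n: int = 5):
    """Character n-gram features for a candidate affixal cue: n-grams
    from the start and end of the base the affix attaches to (up to
    five characters), combined with the affix itself and the token PoS."""
    base = token[len(affix):] if token.startswith(affix) else token
    feats = {f"affix={affix}", f"pos={pos}"}
    for n in range(2, max_n + 1):
        feats.add(f"start={base[:n]}")   # e.g. possi, poss, ...
        feats.add(f"end={base[-n:]}")    # e.g. sible, ible, ...
    return feats

feats = affix_ngram_features("impossible", "im", "JJ")
```

For impossible this yields, among others, start=possi and end=sible, matching the n-grams listed in the text.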
{
"text": "For the second type of affix-specific features, we try to emulate the effect of a lexicon look-up of the remaining substring that an affix attaches to, checking its status as an independent base form and its part-of-speech. In order to take advantage of such information while staying within the confines of the closed track, we automatically generate a lexicon from the training data, counting the instances of each PoS tagged lemma in addition to n-grams of wordinitial characters (again recording up to five positions). For a given match of an affix pattern, a feature will then record these counts for the substring it attaches to. The rationale for this feature is that the occurrence of a substring such as un in a token such as underlying should be less likely as a cue given that the first part of the remaining string (e.g., derly) would be an unlikely way to begin a word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.1"
},
{
"text": "It is also possible for a negation cue to span multiple tokens, such as the (discontinuous) pair neither / nor or fixed expressions like on the contrary. There are, however, only 16 instances of such multiword cues (MWCs) in the entire CDTD. Rather than letting the classifier be sensitive to these corner cases, we cover such MWC patterns using a small set of simple post-processing heuristics. A small stop-list is used for filtering out the relevant words from the examples presented to the classifier (on, the, etc.). Note that, in terms of training the final classifiers, CDTD provides us with a total of 1162 positive and Table 1 : Detecting negation cues using the two classifiers and the majority-usage baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "2.1"
},
{
"text": "1100 negative training examples, given our closedclass treatment of cues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "2.1"
},
{
"text": "Before we turn to the results, note that the difference between the two submitted versions of the classifier (I and II) only concerns the orders of the n-grams used for the token-level features. 1 Table 1 presents the results for our cue classifier. As an informed baseline, we also tried classifying each word based on its most frequent use as a cue or noncue in the training data. (Affixal cue occurrences are counted by looking at both the affix-pattern and the base it attaches to, basically treating the entire token as a cue. Tokens that end up being classified as cues are then matched against the affix patterns observed during training in order to correctly delimit the annotation of the cue.) This simple majority-usage approach actually provides a fairly strong baseline, yielding an F 1 of 90.34 on CDTD. Compare this to the F 1 of 95.03 obtained by the classifier on the same data set. However, when applying the models to the held-out set, with models estimated over the entire CDTD, the classifier suffers a slight drop in performance, leaving the baseline even more competitive: While our best performing final cue classifier (I) achieves F 1 =92.10, the baseline achieves F 1 =89.51, and even outperforms four of the ten cue detection systems submitted for the shared task (three of the 12 shared task submissions use the same classifier).",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "2.1"
},
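The majority-usage baseline amounts to a per-word frequency comparison over the training data; a simplified sketch with toy data (ignoring the affixal delimitation step described above):

```python
from collections import Counter, defaultdict

def train_majority_baseline(annotated_tokens):
    """Majority-usage cue baseline: a word is predicted as a cue iff
    it was annotated as a cue in the majority of its training
    occurrences. `annotated_tokens` is an iterable of (word, is_cue)
    pairs; returns the set of word forms labeled as cues."""
    counts = defaultdict(Counter)
    for word, is_cue in annotated_tokens:
        counts[word.lower()][is_cue] += 1
    return {w for w, c in counts.items() if c[True] > c[False]}

# Toy training data (hypothetical counts).
train = [("not", True), ("not", True), ("no", True), ("no", False),
         ("never", True), ("answer", False)]
cues = train_majority_baseline(train)
```

Here "not" and "never" are predicted as cues, while the tied word "no" and the non-cue "answer" are not; the real baseline operates over the CDTD annotations.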
{
"text": "1 Classifier I records the lemma and full form of the target token, and lemmas two positions left/right. Classifier II records the lemma, form, and PoS of the target, full forms three positions to the left and one to the right, PoS one position right/left, and lemmas three positions to the right. The affixal-specific features are the same for both configurations as described above. Inspecting the predictions of the classifier on CDD, which comprises a total of 173 gold annotated cues, we find that Classifier I mislabels 11 false positives (FPs) and seven false negatives (FNs). Of the FPs, we find five so-called false negation cues (Morante et al., 2011) , including three instances of the fixed expression none the less. The others are affixal cues, of which two are clearly wrong (underworked, universal) while others might arguably be due to annotation errors (insuperable, unhappily, endless, listlessly). Among the FNs, two are due to MWCs not covered by our heuristics (e.g., no more), with the remainder concerning affixes.",
"cite_spans": [
{
"start": 639,
"end": 661,
"text": "(Morante et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "2.2"
},
{
"text": "During the development of our scope resolution system we have pursued both a rule-based and datadriven approach. Both are rooted in the assumption that the scope of negations corresponds to a syntactically meaningful unit. Our starting point here will be the syntactic analyses provided by the task organizers (see Figure 1) , generated using the reranking parser of Charniak and Johnson (2005) . However, as alignment between scope annotations and syntactic units is not straightforward for all cases, we apply several exception rules that 'slacken' the requirements for alignment, as discussed in Section 3.1. In Sections 3.2 and 3.3 we detail our rule-based and data-driven approaches, respectively. Note that the predictions of the rule-based component will be incorporated as features in the learned model, similarly to the set-up described by Read et al. (2011) . Section 3.4 details the post-processing we apply to handle cases of discontinuous scope, be-fore Section 3.5 finally presents development results together with a brief error analysis.",
"cite_spans": [
{
"start": 367,
"end": 394,
"text": "Charniak and Johnson (2005)",
"ref_id": "BIBREF0"
},
{
"start": 849,
"end": 867,
"text": "Read et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 315,
"end": 324,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Constituent-Based Scope Resolution",
"sec_num": "3"
},
{
"text": "In order to test our initial assumption that syntactic units correspond to scope annotations, we quantify the alignment of scopes with constituents in CDT, excluding 97 negations that do not have a scope. We find that the initial alignment is rather low at 52.42%. We therefore formulate a set of slackening heuristics, designed to improve on this alignment by removing certain constituents at the beginning or end of a scope. First of all, removing constituentinitial and -final punctuation improves alignment to 72.83%. We then apply the following slackening rules, with examples indicating the resulting scope following slackening (not showing events):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "-Remove coordination (CC) and following conjuncts if the coordination is a rightwards sibling of an ancestor of the cue and it is not directly dominated by an NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "(2) Since we have been so unfortunate as to miss him and have no notion [. . . ] -Remove S* to the right of cue, if delimited by punctuation.",
"cite_spans": [
{
"start": 72,
"end": 80,
"text": "[. . . ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "(3) \"There is no other claimant, I presume ?\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "-Remove constituent-initial SBAR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "(4) If it concerned no one but myself I would not try to keep it from you.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "-Remove punctuation-delimited NPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "(5) \"But I can't forget them, Miss Stapleton,\" said I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "-Remove constituent-initial RB, CC, UH, ADVP or INTJ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "(6) And yet it was not quite the last.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "The slackening rules are based on a few observations. First, scope rarely crosses coordination boundaries (with the exception of nominal coordination). Second, scope usually does not cross clause boundaries (indicated by S/SBAR). Furthermore, titles and other nominals of address are not included in the scope. Finally, sentence and discourse adverbials are often excluded from the scope. Since these express semantic distinctions, we approximate this notion syntactically using parts-of-speech and constituent category labels expressing adverbials (RB), coordinations (CC), various types of interjections (UH, INTJ) and adverbial phrases (ADVP). We may note here that syntactic categories are not always sufficient to express semantic distinctions. Prepositional phrases, for instance, are often used to express the same type of discourse adverbials, but can also express a range of other distinctions (e.g., temporal or locative adverbials), which are included in the scope. So a slackening rule removing initial PPs was tried but not found to improve overall alignment. After applying the above slackening rules the alignment rate for CDT improves to 86.13%. This also represents an upper-bound on our performance, as we will not be able to correctly predict a scope that does not align with a (slackened) constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Alignment and Slackening",
"sec_num": "3.1"
},
{
"text": "The alignment of constituents and scopes reveal consistent patterns and we therefore formulate a set of heuristic rules over constituents. These are based on frequencies of paths from the cue to the scopealigned constituent for the annotations in CDT, as well as the annotation guidelines (Morante et al., 2011) . The rules are formulated as paths over constituent trees and are presented in Figure 2 . The path syntax is based on LPath (Lai and Bird, 2010) . The rules are listed in order of execution, showing how more specific rules are consulted before more general ones. We furthermore allow for some additional functionality in the interpretation of rules by enabling simple constraints that are applied to the candidate constituent. For example, the rule RB//VP/SBAR if SBAR\\WH * will be activated when the cue is an adverb having some ancestor VP which has a parent SBAR, where the SBAR must contain a WH-phrase among its children.",
"cite_spans": [
{
"start": 289,
"end": 311,
"text": "(Morante et al., 2011)",
"ref_id": null
},
{
"start": 437,
"end": 457,
"text": "(Lai and Bird, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 392,
"end": 400,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Heuristics Operating over Constituents",
"sec_num": "3.2"
},
{
"text": "In cases where no rule is activated we use a default scope prediction, which expands the scope to both the left and the right of the cue until either the sentence boundary or a punctuation mark is reached. The rules are evaluated individually in Section 3.5 below and the rule predictions are furthermore employed as features for the ranker described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics Operating over Constituents",
"sec_num": "3.2"
},
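The default scope prediction can be sketched as a simple expansion over the token sequence (the punctuation set is illustrative; note that in the CD annotation the cue token itself is not part of the scope, while this simplified sketch returns the contiguous span including the cue position):

```python
PUNCT = {".", ",", ";", ":", "!", "?", '"'}

def default_scope(tokens, cue_index):
    """Fallback scope prediction: expand left and right from the cue
    until a punctuation token or the sentence boundary is reached.
    Returns the indices of the resulting contiguous span."""
    left = cue_index
    while left - 1 >= 0 and tokens[left - 1] not in PUNCT:
        left -= 1
    right = cue_index
    while right + 1 < len(tokens) and tokens[right + 1] not in PUNCT:
        right += 1
    return list(range(left, right + 1))

scope = default_scope(["There", "was", "no", "answer", "."], 2)
```

For example (1) this predicts the span "There was no answer", stopping before the sentence-final period.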
{
"text": "Our data-driven approach to scope resolution involves learning a ranking function over candidate syntactic constituents. The approach has similarities to discriminative parse selection, except that we here rank subtrees rather than full parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Ranking",
"sec_num": "3.3"
},
{
"text": "When defining the training data, we begin by selecting negations for which the parse tree contains a constituent that (after slackening) aligns with the gold scope. We then select an initial candidate by selecting the smallest constituent that spans all the words in the cue, and then generate subsequent candidates by traversing the path to the root of the tree (see Figure 1) . This results in a mean ambiguity of 4.9 candidate constituents per negation (in CDTD). Candidates whose projection corresponds to the gold scope are labeled as correct; all others are labeled as incorrect. Experimenting with a variety of feature types (listed in Table 2 ), we use the implementation of ordinal ranking in the SVM light toolkit (Joachims, 2002) to learn a linear scoring function for preferring correct candidate scopes.",
"cite_spans": [
{
"start": 724,
"end": 740,
"text": "(Joachims, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 368,
"end": 377,
"text": "Figure 1)",
"ref_id": "FIGREF0"
},
{
"start": 643,
"end": 650,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Constituent Ranking",
"sec_num": "3.3"
},
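The candidate-generation step can be sketched over a minimal tree representation (the Node class is a hypothetical stand-in for the parse trees provided by the organizers):

```python
class Node:
    """Minimal constituent node: label, token span, and parent link."""
    def __init__(self, label, start, end, parent=None):
        self.label, self.start, self.end, self.parent = label, start, end, parent

def candidate_constituents(cue_node):
    """Generate scope candidates as described above: start from the
    smallest constituent spanning the cue, then follow the chain of
    parents up to the root of the tree."""
    candidates, node = [], cue_node
    while node is not None:
        candidates.append(node)
        node = node.parent
    return candidates

# Toy chain S > VP > NP, with the cue inside the NP.
s = Node("S", 0, 5)
vp = Node("VP", 1, 5, parent=s)
np = Node("NP", 2, 4, parent=vp)
labels = [n.label for n in candidate_constituents(np)]
```

Each candidate's token span ("projection") is then compared against the gold scope to produce the correct/incorrect labels for the ranker.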
{
"text": "The most informative feature type is the LPath from cue, which in addition to recording the full path from the cue to the candidate constituent (e.g., the path to the correct candidate in Figure 1 is no/DT/NP/VP/S), also includes delexicalized (./DT/NP/VP/S), generalized (no/DT//S), and generalized delexicalized versions (./DT//S).",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 194,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constituent Ranking",
"sec_num": "3.3"
},
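The four variants of the path feature can be sketched as follows (a minimal illustration assuming the slash-separated path notation used above):

```python
def lpath_variants(path: str):
    """Return the four variants of the cue-to-candidate path feature:
    full, delexicalized (cue word replaced by '.'), generalized
    (intermediate nodes collapsed to '//'), and generalized
    delexicalized."""
    parts = path.split("/")              # e.g. ["no", "DT", "NP", "VP", "S"]
    word, pos, top = parts[0], parts[1], parts[-1]
    delex = "/".join(["."] + parts[1:])  # ./DT/NP/VP/S
    gen = f"{word}/{pos}//{top}"         # no/DT//S
    gen_delex = f"./{pos}//{top}"        # ./DT//S
    return [path, delex, gen, gen_delex]

variants = lpath_variants("no/DT/NP/VP/S")
```

For the example path no/DT/NP/VP/S this reproduces exactly the four feature values listed in the text.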
{
"text": "Note that the rule prediction feature facilitates a hybrid approach by recording whether the candidate matches the boundaries of the scope predicted by the rules of Section 3.2, as well as the degree of overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent Ranking",
"sec_num": "3.3"
},
{
"text": "10.3% of the scopes in the training data are what (Morante et al., 2011) refer to as discontinuous. This means that the scope contains two or more parts which are bridged by tokens other than the cue. The sentence in (7) exemplifies a common cause of scopal discontinuity in the data, namely ellipsis (Morante et al., 2011) . Almost all of these are cases of coordination, as in example 7where the cue is found in the final conjunct (did not return [. . . ] ) and the scope excludes the preceding conjunct(s) (therefore spent the day at my club). There are also some cases of adverbs that are excluded from the scope, causing discontinuity, as in (8), where the adverb certainly is excluded from the scope. In order to deal with discontinuous scopes we formulate two simple post-processing heuristics, which are applied after rules/ranking: (1) If the cue is in a conjoined phrase, remove the previous conjuncts from the scope, and (2) remove sentential adverbs from the scope (where a list of sentential adverbs was compiled from the training data).",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Morante et al., 2011)",
"ref_id": null
},
{
"start": 301,
"end": 323,
"text": "(Morante et al., 2011)",
"ref_id": null
},
{
"start": 449,
"end": 457,
"text": "[. . . ]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Handling Discontinuous Scope",
"sec_num": "3.4"
},
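Heuristic (2) above is essentially a filtered copy of the predicted scope; a minimal sketch (the adverb list here is illustrative, whereas the system compiled its list from the training data):

```python
# Hypothetical list; the actual list was compiled from CDTD.
SENTENTIAL_ADVERBS = {"certainly", "however", "perhaps"}

def remove_sentential_adverbs(scope_tokens):
    """Post-processing heuristic (2): drop sentential adverbs from a
    predicted scope, which may leave the scope discontinuous."""
    return [t for t in scope_tokens if t.lower() not in SENTENTIAL_ADVERBS]

filtered = remove_sentential_adverbs(["I", "certainly", "did", "not", "return"])
```

Removing an in-scope adverb in this way yields exactly the kind of discontinuity exemplified in (8).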
{
"text": "Our development procedure evaluated all permutations of feature combinations, searching for optimal parameters using gold-standard cues. Table 2 indicates which features are included in our two ranker configurations, i.e., tuning by 10-fold crossvalidation on CDTD (I) vs. a train/test-split for CDT/CDD(II). Table 3 lists the results of our scope resolution approaches applied to gold cues. As a baseline, all Table 3 : Scope resolution for gold cues using the two versions of the ranker, also listing the performance of the rule-based approach in isolation.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 309,
"end": 316,
"text": "Table 3",
"ref_id": null
},
{
"start": 411,
"end": 418,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "cases are assigned the default scope prediction of the rule-based approach. On CDTD this results in an F 1 of 49.61 (P=98.31, R=33.18); compare to the ranker in Configuration I on the same data set (F 1 =84.76, P=100.00, R=73.55). We note that our different optimization procedures do not appear to have made much difference to the learned ranking functions as both perform similarly on the held-out data, though suffering a slight drop in performance compared to the development results. We also evaluate the rules and observe that this approach achieves similar heldout results. This is particularly note-worthy given that there are only fourteen rules plus the default scope baseline. Note that, as the rankers performed better than the rules in isolation on both CDTD and CDD during development, our final system submissions are based on rankers I and II from Table 3 . We performed a manual error analysis of our scope resolution system (Ranker II ) on the basis of CDD (using gold cues). First, we may note that parse errors are a common sources of scope resolution errors. It is well-known that coordination presents a difficult construction for syntactic parsers, and we often find incorrectly parsed coordinate structures among the system errors. Since coordination is used both in the slackening rules and the analysis of discontinuous scopes, these errors have clear effects on system performance. We may further note that discourse-level adverbials, such as in the second place in example (9) below, are often included in the scope assigned by our system, which they should not be according to the gold annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 864,
"end": 871,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "(9) But, in the second place, why did you not come at once?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "There are also quite a few errors related to the scope of affixal cues, which the ranker often erroneously assigns a scope that is larger than simply the base which the affix attaches to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "Our event detection component implements two stages: First we apply a factuality classifier, and then we identify negated events 2 for those contexts that have been labeled as factual. We detail the two stages in order below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event Detection",
"sec_num": "4"
},
{
"text": "The annotation guidelines of Morante et al. (2011) specify that events should only be annotated for negations that have a scope and that occur in factual statements. This means that we can view the *SEM data sets to implicitly annotate factuality and non-factuality, and take advantage of this to train an SVM factuality classifier. We take positive examples to correspond to negations annotated with both a scope and an event, while negative examples correspond to scope negations with no event. For CDTD, this strategy gives 738 positive and 317 negative examples, spread over a total of 930 sentences. Note that we do not have any explicit annotation of cue words for these examples. All we have are instances of negation that we know to be within a factual or non-factual context, but the indication of factuality may typically be well outside the annotated negation scope. For our experiments here, we therefore use the negation cue itself as a place-holder for the abstract notion of context that we are really classifying. Given the limited amount of data, we only optimize our factuality classifier by 10-fold crossvalidation on CDTD (i.e., the same configuration is used for submissions I and II).",
"cite_spans": [
{
"start": 29,
"end": 50,
"text": "Morante et al. (2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Factuality Detection",
"sec_num": "4.1"
},
{
"text": "The feature types we use are all variations over bag-of-words (BoW) features. We include left-and right-oriented BoW features centered on the negation cue, recording forms, lemmas, and PoS, and using both unigrams and bigrams. The features are ex- Table 4 : Results for factuality detection (using gold negation cues and scopes). Due to the limited training data for factuality, the classifier is only optimized by 10-fold cross-validation on CDTD.",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Factuality Detection",
"sec_num": "4.1"
},
{
"text": "tracted from the sentence as a whole, as well as from a local window of six tokens to each side of the cue. Table 4 provides results for factuality classification using gold-standard cues and scopes. 3 We also include results for a baseline approach that simply considers all cases to be factual, i.e., the majority class. In this case precision is identical to accuracy and recall is 100%. For precision and accuracy we see that the classifier improves substantially over the baseline on both data sets, although there is a bit of a drop in performance when going from the 10-fold to held-out results. There also seem to be some signs of overfitting, given that roughly 70% of the training examples end up as support vectors.",
"cite_spans": [
{
"start": 200,
"end": 201,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Factuality Detection",
"sec_num": "4.1"
},
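The left- and right-oriented BoW extraction described above can be sketched as follows. This is a minimal illustration under our own naming scheme (the feature-string format is invented); the same function would be applied to lemmas and PoS tags as well as forms, and with `window=len(tokens)` to approximate whole-sentence extraction.

```python
# Unigram and bigram bag-of-words features drawn from a window of
# tokens to each side of the negation cue (window=6 for the local
# variant described in the paper).

def bow_features(tokens, cue_idx, window=6, prefix="form"):
    feats = set()
    left = tokens[max(0, cue_idx - window):cue_idx]
    right = tokens[cue_idx + 1:cue_idx + 1 + window]
    for side, seq in (("L", left), ("R", right)):
        for i, tok in enumerate(seq):
            feats.add(f"{prefix}:{side}:uni:{tok}")
            if i + 1 < len(seq):
                feats.add(f"{prefix}:{side}:bi:{tok}_{seq[i + 1]}")
    return feats

feats = bow_features(["It", "is", "not", "a", "big", "problem"], cue_idx=2)
```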
{
"text": "Having filtered out non-factual contexts, events are identified by applying a similar approach to that of the scope-resolving ranker described in Section 3.3. In this case, however, we rank tokens as candidates for events. For simplicity in this first round of development we make the assumption that all events are single words. Thus, the system will be unable to correctly predict the event in the 6.94% of instances in CDTD that are multi-word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Events",
"sec_num": "4.2"
},
{
"text": "We select candidate words from all those marked as being in the scope (including substrings of tokens with affixal cues). This gives a mean ambiguity of 7.8 candidate events per negation (in CDTD). Then, discarding multi-word training examples, we use SVM light to learn a ranking function for identifying events among the candidates. Table 5 shows the features employed, with in- 3 As this is not singled out as a separate subtask in the shared task itself, these are the only scores in the paper not computed using the script provided by the organizers. dications as to their presence in our two configurations (after an exhaustive search of feature combinations). The most important feature was LPath to scope constituent. For example, in Figure 1 the scope constituent is the S root of the tree; the path that describes the correct candidate is answer/NN/NP/VP/S. As discussed in Section 3.3, we also record generalized, delexicalized and generalized delexicalized paths. Table 6 lists the results of the event ranker applied to gold-standard cues, scopes, and factuality. For a comparative baseline, we implemented a keywordbased approach that simply searches in-scope words for instances of events previously observed in the training set, sorted according to descending frequency. This baseline achieves F 1 =29.44 on CDD. For comparison, the ranker (II) achieves F 1 =91.70 on the same data set, as seen in Table 6 . We also see that Configuration II appears to generalize best, with over 1.2 points improvement over the F 1 of I.",
"cite_spans": [
{
"start": 381,
"end": 382,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 742,
"end": 750,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 976,
"end": 983,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 1414,
"end": 1421,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Ranking Events",
"sec_num": "4.2"
},
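The LPath feature described above can be sketched as follows. Trees are represented here as nested tuples of (label, child, ...), with pre-terminals as (PoS, word); the toy tree is our own construction, chosen so that the path for "answer" reproduces the paper's example, answer/NN/NP/VP/S. This is an illustrative reimplementation, not the authors' code.

```python
# Compute the label path from a candidate event token up to the scope
# constituent, joined with '/', bottom-up.

def lpath(node, word):
    """Return the word and node labels from `word` up to `node`, or
    None if `word` is not dominated by `node`."""
    label, children = node[0], node[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return f"{children[0]}/{label}" if children[0] == word else None
    for child in children:
        sub = lpath(child, word)
        if sub is not None:
            return f"{sub}/{label}"
    return None

scope = ("S",
         ("NP", ("PRP", "I")),
         ("VP", ("VBD", "had"),
                ("NP", ("DT", "no"), ("NN", "answer"))))
print(lpath(scope, "answer"))  # answer/NN/NP/VP/S
```

A delexicalized variant of the path, as mentioned in Section 3.3, would simply drop the leading word, e.g. `lpath(scope, "answer").split("/", 1)[1]` giving "NN/NP/VP/S".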
{
"text": "An analysis of the event predictions for CDD indicates that the most frequent errors (41.2%) are instances where the ranker correctly predicts part of the event but our single word assumption is invalid. Another apparent error is that the system fails to Table 7 : End-to-end results on the held-out data.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking Events",
"sec_num": "4.2"
},
{
"text": "predict a main verb for the event, and instead predicts nouns (17.7% of all errors), modals (17.7%) or prepositions (11.8%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Events",
"sec_num": "4.2"
},
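The frequency-sorted keyword baseline of Section 4.2 can be sketched from its description as follows: collect the events seen in training, rank them by descending frequency, and at test time return the first such keyword found among the in-scope words. Function and variable names here are our own invention.

```python
from collections import Counter

def keyword_baseline(training_events, scope_tokens):
    """Return the most frequent previously-seen event word that occurs
    in the scope, or None if no known event word is in scope."""
    ranked = [w for w, _ in Counter(training_events).most_common()]
    for keyword in ranked:
        if keyword in scope_tokens:
            return keyword
    return None

events_seen = ["go", "go", "go", "know", "know", "answer"]
print(keyword_baseline(events_seen, ["I", "did", "not", "know"]))  # know
```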
{
"text": "5 Held-Out Evaluation Table 7 presents our final results for both system configurations on the held-out evaluation data (also including the B measures, as discussed in the introduction). Comparing submission I and II, we find that the latter has slightly better scores end-to-end. However, as seen throughout the paper, the picture is less clear-cut when considering the isolated performance of each component. When ranked according to the Full Negation measures, our submissions were placed first and second (out of seven submissions in the closed track, and twelve submissions total). It is difficult to compare system performance on subtasks, however, as each component will be affected by the performance of the previous.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ranking Events",
"sec_num": "4.2"
},
{
"text": "This paper has presented two closed-track submissions for the *SEM 2012 shared task on negation resolution. The systems were ranked first and second overall in the shared task end-to-end evaluation, and the submissions only differ with respect to the data sets used for parameter tuning. There are four components in the pipeline: (i) An SVM classifier for identifying negation cue words and affixes, (ii) an SVM-based ranker that combines empirical evidence and manually-crafted rules to resolve the insentence scope of negation, (iii) a classifier for determining whether a negation is in a factual or non-factual context, and (iv) a ranker that determines (factual) negated events among in-scope tokens. For future work we would like to try training separate classifiers for affixal and token-level cues, given that largely separate sets of features are effective for the two cases. The system might also benefit from sources of information that would place it in the open track. These include drawing information from other parsers and formalisms, generating cue features from an external lexicon, and using additional training data for factuality detection, e.g., FactBank (Saur\u00ed and Pustejovsky, 2009) .",
"cite_spans": [
{
"start": 1178,
"end": 1207,
"text": "(Saur\u00ed and Pustejovsky, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "From observations on CDTD we note that approximately 14% of scopes will be unresolvable as they are not aligned with constituents (see Section 3.1). This can perhaps be tackled by ranking tokens as candidates for left and right scope boundaries (similar to the event ranker in the current work). This would improve the upper-bound to 100% at the expense of greatly increasing the number of candidates. However, the strong discriminative power of our current approach can still be incorporated using constituent-based features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Note that the annotation guidelines use the term event rather broadly as referring to a process, action, state, or property(Morante et al., 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Roser Morante and Eduardo Blanco for their work in organizing this shared task and commitment to producing quality data. We also thank the anonymous reviewers for their feedback. Largescale experimentation was carried out with the TI-TAN HPC facilities at the University of Oslo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Coarse-tofine n-best parsing and MaxEnt discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Forty-Third Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and MaxEnt discriminative rerank- ing. In Proceedings of the Forty-Third Annual Meeting of the Association for Computational Linguistics, Ann Arbor, MI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Eighth ACM International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM International Conference on Knowledge Discov- ery and Data Mining, Alberta.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Querying linguistic trees",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Logic, Language and Information",
"volume": "19",
"issue": "",
"pages": "53--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Lai and Steven Bird. 2010. Querying linguis- tic trees. Journal of Logic, Language and Information, 19:53-73.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "*SEM 2012 shared task: Resolving the scope and focus of negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics, Montreal. Roser Morante and Walter Daelemans",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Eduardo Blanco. 2012. *SEM 2012 shared task: Resolving the scope and focus of nega- tion. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, Montreal. Roser Morante and Walter Daelemans. 2012.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Annotation of negation cues and their scope: Guidelines v1.0",
"authors": [
{
"first": "",
"middle": [],
"last": "Conandoyle-Neg",
"suffix": ""
}
],
"year": 2011,
"venue": "University of Antwerp. CLIPS: Computational Linguistics & Psycholinguistics technical report series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ConanDoyle-neg: Annotation of negation in Conan Doyle stories. In Proceedings of the Eighth Interna- tional Conference on Language Resources and Evalu- ation, Istanbul. Roser Morante, Sarah Schrauwen, and Walter Daele- mans. 2011. Annotation of negation cues and their scope: Guidelines v1.0. Technical report, Univer- sity of Antwerp. CLIPS: Computational Linguistics & Psycholinguistics technical report series.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Resolving speculation and negation scope in biomedical articles using a syntactic constituent ranker",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fourth International Symposium on Languages in Biology and Medicine",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathon Read, Erik Velldal, Stephan Oepen, and Lilja \u00d8vrelid. 2011. Resolving speculation and negation scope in biomedical articles using a syntactic con- stituent ranker. In Proceedings of the Fourth Inter- national Symposium on Languages in Biology and Medicine, Singapore.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Factbank: a corpus annotated with event factuality",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Language Resources and Evaluation",
"volume": "43",
"issue": "3",
"pages": "227--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Saur\u00ed and James Pustejovsky. 2009. Factbank: a corpus annotated with event factuality. Language Resources and Evaluation, 43(3):227-268.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Speculation and negation: Rules, rankers and the role of syntax",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Velldal, Lilja \u00d8vrelid, Jonathon Read, and Stephan Oepen. 2012. Speculation and negation: Rules, rankers and the role of syntax. Computational Lin- guistics, 38(2).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting speculation: A simple disambiguation approach to hedge detection in biomedical literature",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Biomedical Semantics",
"volume": "2",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Velldal. 2011. Predicting speculation: A simple dis- ambiguation approach to hedge detection in biomedi- cal literature. Journal of Biomedical Semantics, 2(5).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Example parse tree provided in the data, highlighting our candidate scope constituents.",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Scope resolution heuristics.",
"uris": null,
"num": null
},
"TABREF3": {
"text": "Features used to describe candidate constituents for scope resolution, with indications of presence in our two system configurations.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Features used to describe candidates for event detection, with indications of presence in our two system configurations.",
"content": "<table><tr><td>Data set</td><td>Model</td><td>Prec</td><td>Rec</td><td>F1</td></tr><tr><td>CDTD</td><td>RankerI</td><td colspan=\"3\">91.49 90.83 91.16</td></tr><tr><td>CDD</td><td colspan=\"4\">RankerII 92.11 91.30 91.70</td></tr><tr><td>CDE</td><td colspan=\"4\">RankerI RankerII 84.94 84.95 84.94 83.73 83.73 83.73</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF8": {
"text": "Event detection for gold scopes and gold factuality information.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}