{
"paper_id": "S12-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:23:44.953789Z"
},
"title": "UWashington: Negation Resolution using Machine Learning Methods",
"authors": [
{
"first": "James",
"middle": [
"Paul"
],
"last": "White",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"postBox": "Box 354340",
"postCode": "98195",
"settlement": "Seattle",
"region": "WA",
"country": "USA"
}
},
"email": "jimwhite@uw.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports on a simple system for resolving the scope of negation in the closed track of the *SEM 2012 Shared Task. Cue detection is performed using regular expression rules extracted from the training data. Both scope tokens and negated event tokens are resolved using a Conditional Random Field (CRF) sequence tagger, namely the SimpleTagger library in the MALLET machine learning toolkit. The full negation F1 score obtained for the task evaluation is 48.09% (P=74.02%, R=35.61%), which ranks this system fourth among the six submitted for the closed track.",
"pdf_parse": {
"paper_id": "S12-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports on a simple system for resolving the scope of negation in the closed track of the *SEM 2012 Shared Task. Cue detection is performed using regular expression rules extracted from the training data. Both scope tokens and negated event tokens are resolved using a Conditional Random Field (CRF) sequence tagger, namely the SimpleTagger library in the MALLET machine learning toolkit. The full negation F1 score obtained for the task evaluation is 48.09% (P=74.02%, R=35.61%), which ranks this system fourth among the six submitted for the closed track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Resolving the scope of negation is an interesting area of research for Natural Language Processing (NLP) systems because many such systems have used methods that are insensitive to polarity. As a result, it is fairly common to have a system that treats \"X does Y\" and \"X does not Y\" as having the same, or very nearly the same, meaning 1 . A few application areas that have been addressing this issue recently are sentiment analysis, biomedical NLP, and recognition of textual entailment. Sentiment analysis systems are frequently used in corporate and product marketing, call center quality control, and within \"recommender\" systems, which are all contexts where it is important to recognize that \"X does like Y\" is contrary to \"X does not like Y\". Similarly, in biomedical text such as research papers and abstracts, diagnostic procedure reports, and medical records, it is important to differentiate between statements about what is the case and what is not the case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The *SEM 2012 Shared Task is actually two related tasks run in parallel. The one this system was developed for is the identification of three features of negation: the cue, the scope, and the factual negated event (if any). The other task is concerned with the focus of negation. A detailed description of both subtasks, including definitions of the relevant concepts and terminology (negation, cue, scope, event, and focus), appears in this volume (Morante and Blanco, 2012) . Roser Morante and Eduardo Blanco describe the corpora provided to participants with numbers and examples, describe the methods used to process the data, and briefly describe each participant's system and analyze the overall results.",
"cite_spans": [
{
"start": 446,
"end": 472,
"text": "(Morante and Blanco, 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Annotation of the corpus was undertaken at the University of Antwerp and was performed on several Sherlock Holmes works of fiction written by Sir Arthur Conan Doyle. The corpus includes all sentences from the original text, not just those employing negation. Roser Morante and Walter Daelemans provide a thorough explanation of the gold annotations of negation cue, scope, and negated event (if any) (Morante and Daelemans, 2012) . Their paper explains the motivations for the particular annotation decisions and describes the guidelines in detail, including many examples.",
"cite_spans": [
{
"start": 404,
"end": 433,
"text": "(Morante and Daelemans, 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recognition of phrases containing negation, particularly in the medical domain, using regular expressions has been described using several different approaches. Systems such as Negfinder (Mutalik et al., 2001) and NegEx (Chapman et al., 2001) use manually constructed rules to extract phrases from text and classify them as to whether they contain an expression of negation. Rokach et al. evaluate several methods and achieve their highest level of performance (an F1 of 95.9 \u00b1 1.9%) by using cascaded decision trees of regular expressions learned from labelled narrative medical reports (Rokach et al., 2008).",
"cite_spans": [
{
"start": 190,
"end": 210,
"text": "(Mutalik et al, 2001",
"ref_id": "BIBREF6"
},
{
"start": 223,
"end": 243,
"text": "(Chapman et al, 2001",
"ref_id": "BIBREF1"
},
{
"start": 589,
"end": 609,
"text": "(Rokach et al, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Those systems, however, perform a different function than is required for this task. They classify phrases extracted from plain text as to whether they contain negation or not, while the requirement of this shared task for negation cue detection is to identify the particular token(s), or part of a token, that signals the presence of negation. Furthermore, those systems only identify the scope of negation at the level of phrasal constituents, which differs from what is required for this task, in which the scopes are not necessarily contiguous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Conditional Random Field (CRF) sequence taggers have been successfully applied to many scope resolution problems, including those of negation. The NegScope system (Agarwal and Yu, 2010) trains a CRF sequence tagger on labelled data to identify both the cue and scope of negation. However, that system only recognizes a whole word as a cue and does not recognize or generalize negation cues which are affixes. There are also systems that use CRF sequence taggers for detection of hedge scopes (Tang et al., 2010; Zhao et al., 2010). Morante and Daelemans describe a method for improving resolution of the scope of negation by combining IGTREE, CRF, and Support Vector Machines (SVM) (Morante and Daelemans, 2009).",
"cite_spans": [
{
"start": 164,
"end": 186,
"text": "(Agarwal and Yu, 2010)",
"ref_id": "BIBREF0"
},
{
"start": 496,
"end": 513,
"text": "(Tang et al, 2010",
"ref_id": "BIBREF8"
},
{
"start": 514,
"end": 532,
"text": ", Zhao et al, 2010",
"ref_id": "BIBREF9"
},
{
"start": 687,
"end": 716,
"text": "(Morante and Daelemans, 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This system is implemented as a three-stage cascade, with the output from each of the first two stages included as input to the subsequent stage. The stages are ordered as cue detection, scope detection, and finally negated event detection. The inputs and outputs for each stage use the shared task's CoNLL-style file format, which simplifies the use of the supplied gold-standard data for training each stage separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "Because this system was designed for the closed track of the shared task, it makes minimal language-specific assumptions and learns (nearly) all language-specific rules from the gold-labelled training data (which includes the development set for the final system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "The CRF sequence tagger used by the system is that implemented in the SimpleTagger class of the MALLET toolkit, which is a Java library distributed under the Common Public License 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "The system is implemented in the Groovy programming language, an agile and dynamic language for the Java Virtual Machine 3 . The source code is available under the GNU Public License on GitHub 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3"
},
{
"text": "Cues are recognized by four different regular expression rule patterns: affixes (partial token), single (whole) token, contiguous multiple token, and gappy (discontiguous) multiple token. The rules are learned by a two-pass process. In the first pass, for each positive example of a negation cue in the training data, a rule that matches that example is added to the prospective rule set. Then, in the second pass, the rules are applied to the training data and the counts of correct and incorrect matches are accumulated. Rules that are wrong more often than they are right are removed from the set used by the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "A further filtering of the prospective rules is done in which gappy multiple token rules that match the same word type more than once are removed. Those prospective rules are created to match cases in the supplied training data where a repetition has occurred and was then encoded by the annotators as a single cue (and thus scope) of negation 5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "The single token and multiple token rules match both the word string feature (ignoring case) and the part-of-speech (POS) feature of each token. Because a single token rule might also match a cue that belongs to a multiple token rule, multiple token rules are checked first.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "Affix rules are of two types: prefix cues and non-prefix cues. The distinction is that while prefix cues must match starting at the beginning of the word string, non-prefix cues may have a suffix following them in the word string that is not part of the cue. Affix rules only match against the word string feature of the tokens and are insensitive to the POS feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "In order to generalize the affix rules, sets are accumulated of both base word strings (the substring following a prefix cue or the substring preceding a non-prefix cue) and suffixes (the substring following non-prefix cues, if any). In addition, all other word strings and lemma strings in the training corpus that are at least four characters long are added to the set of possible base word strings 6 . A set of negative word strings is also accumulated in the second pass of the rule training to condition against false positive matches for each affix rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "A prefix cue rule will match a token with a word string that starts with the cue string and is followed by any of the strings in the base word set. Similarly, a suffix cue rule will match a token whose word string contains the cue string preceded by a string in the base word set, where the cue string is either at the end of the word string or is followed by one of the strings in the suffix string set. Affix rules, unlike the other cue-matching rules, also output the string of the matched base word as the value of the scope for the matched token. In any case, if the token's word string is in the negative word string set for the rule then it will not be matched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "Following submission of the system outputs for the shared task, I discovered that a hand-written regular expression rule that filters out the (potential) cues detected for \"(be|have) no doubt\" and \"none the (worse|less)\" was inadvertently included in the system. Although such rules could be learned automatically from the training data (and such was my intention), the system as reported here does not currently do so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Detection",
"sec_num": "3.1"
},
{
"text": "For each cue detected, scope resolution is performed as a ternary classification of each token in the sentence as to whether it is part of a cue, part of a scope, or neither. The classifier is the CRF sequence tagger implemented in the SimpleTagger class of the MALLET toolkit (McCallum, 2002) . Training is performed using the gold-standard data, including the gold cues. The output of the tagger is not used to determine the scope value of a token in those cases where an affix rule in the cue detector has matched the token and therefore has supplied the matched base word string as the value of the scope for the token.",
"cite_spans": [
{
"start": 279,
"end": 295,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Resolution",
"sec_num": "3.2"
},
{
"text": "For features that are computed in terms of the cue token, the first (lowest numbered) token marked as a cue is used when there is more than one cue token for the scope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Resolution",
"sec_num": "3.2"
},
{
"text": "Features used by the scope CRF sequence tagger are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Resolution",
"sec_num": "3.2"
},
{
"text": "\u2022 Of the per-token data: word string in lowercase, lemma string in lowercase, part-of-speech (POS) tag, a binary flag indicating whether the token is a cue, a binary flag indicating whether the token is at the edge of its parent non-terminal node or an internal sibling, a binary flag indicating whether the token is a cue token, and the relative position to the cue token in number of tokens. \u2022 Of the cue token data: word string in lowercase, lemma string in lowercase, and POS tag. \u2022 Of the path through the syntax tree from the cue token: an ordered list of the non-terminal labels of each node up the tree to the lowest common parent, an ordered list of the non-terminal labels of each node down the tree from that lowest common parent, and a path relation value consisting of the label of the lowest common parent node concatenated with an indication of the relative position of the paths to the cue and token in terms of sibling order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Resolution",
"sec_num": "3.2"
},
{
"text": "Detection of the negated event or property is performed using the same CRF sequence tagger and features used for scope detection. The only difference is that the token classification is in terms of whether each token in the sentence is part of a factual negated event for each negation cue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negated Event Resolution",
"sec_num": "3.3"
},
{
"text": "A comparison of the end-to-end performance of this system using several different sets of per-token feature choices for the scope and negated event classifiers is shown in Table 1 . In each case the training and dev data are the entire training and dev sets supplied by the organizers for this shared task. The scores are computed by the evaluation program also supplied by the organizers. The baseline features are those provided in the data, with the exception of the syntactic tree fragment: word string in lowercase, lemma in lowercase, and POS tag. The \"set 1\" features are the remainder of the features described in section 3.2, with the exception of those of the path through the syntax tree from the cue token. The \"set 2\" features are the three baseline features plus the three features of the path through the syntax tree from the cue token: the list of non-terminal labels from the cue up to the lowest common parent, the lowest common parent label concatenated with the relative distance in nodes between the siblings, and the list of non-terminals from the lowest common parent down to the token.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 177,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Feature Set Selection",
"sec_num": "3.4"
},
{
"text": "The \"system\" feature set is the union of set 1 and set 2, and is the one used by the submitted system. The baseline score is an F1 of 31.5% (P=79.1%, R=19.7%) on the dev data. Using either feature set 1 or 2 results in substantially better performance. They achieve nearly the same score on the dev set, with an F1 of 50\u00b10.5% (P=87\u00b10.2%, R=35\u00b10.3%), where the difference amounts to one case of true positive vs. false negative out of 173. The combination of those feature sets is better still, with an F1 of 54.4% (P=88.3%, R=39.3%). Table 2 presents the scores computed for the system output on the held-out evaluation data. The F1 for full negation is 48.1% (P=74%, R=35.6%), which is noticeably lower than that seen for the dev data (54.4%). That reduction is to be expected because the dev data was used for system tuning. There was also evidence of significant overfitting to the training data, because the F1 for that was 76.5% (P=92%, R=65.5%). The largest component of the fall-off in performance is in the recall.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Feature Set Selection",
"sec_num": "3.4"
},
{
"text": "The worst performing component of the system is the negated event detection, which has an F1 of 54.3% (P=58%, R=51%) on the evaluation data. One contributor to low precision for the negated event detector is that the root word of an affix cue is always output as a negated event, bypassing the negated event CRF sequence classifier. In the combined training and dev data there is a total of 1157 gold cues (and scopes), of which 738 (63.8%) are annotated as having a negated event. Of the 1198 cues the system outputs for that data, 188 (15.7%) are affix cues, each of which will also be output as a negated event. Therefore, it would be reasonable to expect that approximately 16 (27.7%) of the false positives for the negated event in the evaluation (60) are due to that behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "This paper describes the system I implemented for the closed track of the *SEM 2012 Shared Task for negation cue, scope, and event resolution. The system's performance on the held-out evaluation data, an F1 of 48.09% (P=74.02%, R=35.61%) for the full negation, ranks fourth among the six teams that participated. The strongest part of this system is the scope resolver, which performs at a level near that of the best-performing systems in this shared task. I think it is likely that the performance on scope resolution would be equivalent to theirs with a better negation cue detector. That is supported by the \"no cue match\" version of the scope resolution evaluation, for which this system has the highest F1 (72.4%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Clearly the weakest link is the negated event detector. Since one obvious source of error is that the root word extracted when an affix cue is detected is always output as a negated event, a promising approach for improvement would be to instead use that as a feature for the negated event's CRF sequence tagger, so that such words have a chance to be filtered out in non-factual contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "A one-token difference between the strings surely indicates at least an inexact match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "2 http://mallet.cs.umass.edu/ 3 http://groovy.codehaus.org/ 4 https://github.com/jimwhite/SEMST2012 5 Such as baskervilles12 174: \"Not a whisper, not a rustle, rose...\", which has a cue annotation of \"Not\" gap \"not\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "6 This \"at least four characters long\" rule was manually created to correct for overgeneralization observed in the training data. If the affix rule learner selected this value using the correct/incorrect counts, as it does with the other rule parameters, then this bit of language-specific tweaking would be unnecessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I want to thank Roser Morante and Eduardo Blanco for organizing this task, the reviewers for their thorough and very helpful suggestions, and Emily Bender for her guidance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Biomedical negation scope detection with conditional random fields",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "17",
"issue": "6",
"pages": "696--701",
"other_ids": {
"DOI": [
"10.1136/jamia.2010.003228"
]
},
"num": null,
"urls": [],
"raw_text": "Shashank Agarwal and Hong Yu. 2010. Biomedical negation scope detection with conditional random fields. Journal of the American Medical Informatics Association, 17(6), 696-701. doi:10.1136/jamia.2010.003228",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple algorithm for identifying negated findings and diseases in discharge summaries",
"authors": [
{
"first": "W",
"middle": [
"W"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Bridewell",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanbury",
"suffix": ""
},
{
"first": "G",
"middle": [
"F"
],
"last": "Cooper",
"suffix": ""
},
{
"first": "B",
"middle": [
"G"
],
"last": "Buchanan",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Biomedical Informatics",
"volume": "34",
"issue": "5",
"pages": "301--310",
"other_ids": {
"DOI": [
"10.1006/jbin.2001.1029"
]
},
"num": null,
"urls": [],
"raw_text": "Chapman, W. W., Bridewell, W., Hanbury, P., Cooper, G. F., & Buchanan, B. G. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics, 34(5), 301-310. doi:10.1006/jbin.2001.1029",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "MALLET: A Machine Learning for Language Toolkit",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. Retrieved from http://mallet.cs.umass.edu",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "*SEM 2012 Shared Task: Resolving the Scope and Focus of Negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics. Presented at the *SEM 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. Proceedings of the First Joint Conference on Lexical and Computational Semantics. Presented at the *SEM 2012, Montreal, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Metalearning Approach to Processing the Scope of Negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Walter Daelemans. 2009. A Metalearning Approach to Processing the Scope of Negation. Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009) (pp. 21-29). Boulder, Colorado: Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ConanDoyle-neg: Annotation of negation in Conan Doyle stories",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Walter Daelemans. 2012. ConanDoyle-neg: Annotation of negation in Conan Doyle stories. Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Use of general-purpose negation detection to augment concept indexing of medical documents: a quantitative study using the UMLS",
"authors": [
{
"first": "Pradeep",
"middle": [
"G"
],
"last": "Mutalik",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Deshpande",
"suffix": ""
},
{
"first": "Prakash",
"middle": [
"M"
],
"last": "Nadkarni",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of the American Medical Informatics Association: JAMIA",
"volume": "8",
"issue": "6",
"pages": "598--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradeep G. Mutalik, Aniruddha Deshpande, and Prakash M. Nadkarni. 2001. Use of general-purpose negation detection to augment concept indexing of medical documents: a quantitative study using the UMLS. Journal of the American Medical Informatics Association: JAMIA, 8(6), 598-609.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Negation recognition in medical narrative reports",
"authors": [
{
"first": "Lior",
"middle": [],
"last": "Rokach",
"suffix": ""
},
{
"first": "Roni",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Oded",
"middle": [],
"last": "Maimon",
"suffix": ""
}
],
"year": 2008,
"venue": "Information Retrieval",
"volume": "11",
"issue": "6",
"pages": "499--538",
"other_ids": {
"DOI": [
"10.1007/s10791-008-9061-0"
]
},
"num": null,
"urls": [],
"raw_text": "Lior Rokach, Roni Romano, and Oded Maimon. 2008. Negation recognition in medical narrative reports. Information Retrieval, 11(6), 499-538. doi:10.1007/s10791-008-9061-0",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Cascade Method for Detecting Hedges and their Scope in Natural Language Text",
"authors": [
{
"first": "Buzhou",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Shixi",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "13--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Buzhou Tang, Xiaolong Wang, Xuan Wang, Bo Yuan, and Shixi Fan. 2010. A Cascade Method for Detecting Hedges and their Scope in Natural Language Text. Proceedings of the Fourteenth Conference on Computational Natural Language Learning (pp. 13-17). Uppsala, Sweden: Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to Detect Hedges and their Scope Using CRF",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chengjie",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Bingquan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Zhao, Chengjie Sun, Bingquan Liu, and Yong Cheng. 2010. Learning to Detect Hedges and their Scope Using CRF. Proceedings of the Fourteenth Conference on Computational Natural Language Learning (pp. 100- 105). Uppsala, Sweden: Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td/><td colspan=\"2\">Gold System</td><td>TP</td><td>FP</td><td colspan=\"3\">FN Precision (%) Recall (%)</td><td>F1 (%)</td></tr><tr><td colspan=\"2\">Baseline (train)</td><td>984</td><td colspan=\"2\">1034 382</td><td colspan=\"2\">56 602</td><td>87.21</td><td>38.82</td><td>53.73</td></tr><tr><td/><td>(dev)</td><td>173</td><td>164</td><td>34</td><td colspan=\"2\">9 139</td><td>79.07</td><td>19.65</td><td>31.48</td></tr><tr><td>Set 1</td><td>(train)</td><td>984</td><td colspan=\"2\">1034 524</td><td colspan=\"2\">56 460</td><td>90.34</td><td>53.25</td><td>67.00</td></tr><tr><td/><td>(dev)</td><td>173</td><td>164</td><td>60</td><td colspan=\"2\">9 113</td><td>86.96</td><td>34.68</td><td>49.59</td></tr><tr><td>Set 2</td><td>(train)</td><td>984</td><td colspan=\"2\">1034 666</td><td colspan=\"2\">56 318</td><td>92.24</td><td>67.68</td><td>78.07</td></tr><tr><td/><td>(dev)</td><td>173</td><td>164</td><td>61</td><td colspan=\"2\">9 112</td><td>87.14</td><td>35.26</td><td>50.21</td></tr><tr><td colspan=\"2\">System (train)</td><td>984</td><td colspan=\"2\">1034 644</td><td colspan=\"2\">56 340</td><td>92.00</td><td>65.45</td><td>76.49</td></tr><tr><td/><td>(dev)</td><td>173</td><td>164</td><td>68</td><td colspan=\"2\">9 105</td><td>88.31</td><td>39.31</td><td>54.40</td></tr></table>",
"text": "Comparison of full negation scores for various feature sets.",
"type_str": "table",
"html": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"2\">Gold System</td><td>TP</td><td colspan=\"5\">FP FN Precision (%) Recall (%) F1 (%)</td></tr><tr><td>Cues</td><td>264</td><td colspan=\"2\">285 243</td><td>33</td><td>21</td><td>88.04</td><td>92.05</td><td>90.00</td></tr><tr><td>Scopes (no cue match)</td><td>249</td><td colspan=\"2\">270 158</td><td>33</td><td>89</td><td>82.90</td><td>64.26</td><td>72.40</td></tr><tr><td>Scope tokens (no cue match)</td><td>1805</td><td colspan=\"4\">1816 1512 304 293</td><td>83.26</td><td>83.77</td><td>83.51</td></tr><tr><td>Negated (no cue match)</td><td>173</td><td>154</td><td>83</td><td>60</td><td>80</td><td>58.04</td><td>50.92</td><td>54.25</td></tr><tr><td>Full negation</td><td>264</td><td>285</td><td>94</td><td colspan=\"2\">33 170</td><td>74.02</td><td>35.61</td><td>48.09</td></tr><tr><td>Cues B</td><td>264</td><td colspan=\"2\">285 243</td><td>33</td><td>21</td><td>85.26</td><td>92.05</td><td>88.52</td></tr><tr><td>Scopes B (no cue match)</td><td>249</td><td colspan=\"2\">270 158</td><td>33</td><td>89</td><td>59.26</td><td>64.26</td><td>61.66</td></tr><tr><td>Negated B (no cue match)</td><td>173</td><td>154</td><td>83</td><td>60</td><td>80</td><td>53.9</td><td>50.92</td><td>52.37</td></tr><tr><td>Full negation B</td><td>264</td><td>285</td><td>94</td><td colspan=\"2\">33 170</td><td>32.98</td><td>35.61</td><td>34.24</td></tr></table>",
"text": "System evaluation on held-out data.",
"type_str": "table",
"html": null,
"num": null
}
}
}
}