| { |
| "paper_id": "U17-1008", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:11:31.113636Z" |
| }, |
| "title": "Automatic Negation and Speculation Detection in Veterinary Clinical Text", |
| "authors": [ |
| { |
| "first": "Katharine", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "katharinec@student.unimelb.edu.au" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Karin", |
| "middle": [], |
| "last": "Verspoor", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "karin.verspoor@unimelb.edu.au" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The automatic detection of negation and speculation in clinical notes is vital when searching for genuine instances of a given phenomenon. This paper describes a new corpus of negation and speculation data, in the veterinary clinical note domain, and describes a series of experiments whereby we port a CRF-based method across from the BioScope corpus to this novel domain.", |
| "pdf_parse": { |
| "paper_id": "U17-1008", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The automatic detection of negation and speculation in clinical notes is vital when searching for genuine instances of a given phenomenon. This paper describes a new corpus of negation and speculation data, in the veterinary clinical note domain, and describes a series of experiments whereby we port a CRF-based method across from the BioScope corpus to this novel domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Negation and speculation are common in clinical texts, yet pose a challenge for natural language processing of these texts. Negation indicates the absence or opposite of something, and is defined within the previously released BioScope corpus (a collection of biomedical and clinical documents annotated for the task of negation/speculation detection) to be the \"implication of the non-existence of something\" (Szarvas et al., 2008) . For example, the statement no abnormalities were found in the patient indicates the absence of abnormalities in the patient. Speculation is used to indicate uncertainty or the possibility of something, and is defined within BioScope to be statements of \"the possible existence of something\". For example, there is possible bacterial infection indicates that an infection might be present, without any certainty that it is. Both are commonly used in clinical texts as a means of ruling out diagnostic possibilities and hypothesising. This paper will discuss a method for detecting negation and speculation over clinical records from the Veterinary Companion Animal Surveillance System (VetCompass) project. 1 The VetCompass project is a database of veterinary clinical records for tracking animal health. The database may be used for research on the effects and usage of a particular drug, or the prevalence and distribution of a disease. Such studies are typically performed by querying for terms relevant to a drug or disease of interest, and analysing the retrieved clinical records. However, results identified using keyword matching are often speculative or negated mentions rather than true occurrences. By automatically detecting negation and speculation, we aim to suppress these results, and provide a higher-utility set of documents to the user.", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 432, |
| "text": "(Szarvas et al., 2008)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1141, |
| "end": 1142, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task of negation/speculation detection is often defined in terms of two subtasks: (1) signal (or cue) detection; and (2) scope detection. Negation/speculation signal (or cue) detection involves determining which words in a sentence indicate that a negation/speculation is occurring. Negation/speculation scope detection involves determining which words in a sentence the negation/speculation applies to, under the constraints that: (a) the cue word is contained within the span of the scope; and (b) the span is contiguous. Consider two examples from the clinical notes subset of the BioScope corpus: The cues here for negation and speculation are not and possibly, respectively, and the words inside the brackets are within the scope of the cues. We apply this task formulation to the veterinary clinical notes of VetCompass. The VetCompass records (which mainly consist of notes from veterinary general practitioners) have a few important differences from the radiology clinical notes of the publicly available BioScope corpus. First, radiology notes are often shared between clinicians treating the same patient, and as such are generally written to be accessible to others. In notes from veterinary general practitioners, it is often the case that a single clinician treats the patient, meaning that clinical notes are largely for personal consumption, and thus are highly idiosyncratic in nature. Second, while radiology clinical notes are often professionally transcribed from an oral account by the clinician, in the veterinary general practice context, notes are authored directly by the clinician as text. Inevitably, this is done under time pressure, meaning that the text is often ungrammatical and lacks punctuation. Examples (3) and (4) exemplify negation and speculation in VetCompass:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Such differences in usage between veterinary clinicians and other medical professionals such as radiologists are a major focus of this work, in adapting the annotation framework from BioScope to this new domain. This paper attempts to address the following research questions: (1) Can the task of negation/speculation detection be applied to veterinary clinical records? (2) Are models trained over the human clinical records of the BioScope corpus applicable to veterinary clinical notes?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper describes the process of annotating negation and speculation in veterinary clinical records. We then demonstrate that the task of negation and speculation detection can be successfully applied to veterinary clinical notes using a simple conditional random field (CRF) model. We additionally show that models trained on a related out-of-domain corpus such as BioScope have utility over veterinary clinical records, in particular for negation detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most work on negation and speculation detection has focused on biomedical documents such as biological research papers and clinical notes, with the latter being most relevant to this research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work in Negation and", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Early approaches to negation detection were primarily rule-based. One of the best-known systems for negation detection is NegEx (Chapman et al., 2001) , which is based on regular expressions containing a negation cue term (such as no or not). Another rule-based negation detection system is NegFinder (Mutalik et al., 2001 ).", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 150, |
| "text": "(Chapman et al., 2001)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 301, |
| "end": 322, |
| "text": "(Mutalik et al., 2001", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work in Negation and", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "More recently, machine learning approaches have become popular. Morante et al. (2008) proposed a machine learning approach that consists of two phases: (1) classification of whether each token in a sentence is a negation cue, and (2) classification of whether each token is part of the negation scope of a given cue. Both phases used a memory-based classifier using features such as the wordform of the token, part-of-speech (POS) tag, and chunk tags of the token and neighbouring tokens. The approach was also applied to speculation detection (Morante and Daelemans, 2009a) , and incorporated into a meta-learning approach to the second phase of negation scope detection (Morante and Daelemans, 2009b) . Other approaches that use machine learning include the work of Agarwal and Yu (2010a,b ) that uses conditional random fields (CRFs) to detect negation and speculation, and Cruz D\u00edaz et al. (2012), who experimented with the use of decision trees and support vector machines.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 85, |
| "text": "Morante et al. (2008)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 548, |
| "end": 578, |
| "text": "(Morante and Daelemans, 2009a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 676, |
| "end": 706, |
| "text": "(Morante and Daelemans, 2009b)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 772, |
| "end": 795, |
| "text": "Agarwal and Yu (2010a,b", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work in Negation and", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Most work on negation and speculation detection has focused on a specific corpus and domain, with some exceptions. Wu et al. (2014) investigated the generalisability of different negation detection methods over different domains, and found that performance often suffers without in-domain training data. Miller et al. (2017) also investigated the use of different unsupervised domain adaptation algorithms for negation detection in the clinical domain, and found that such algorithms only achieved a marginal increase in performance compared to systems that use in-domain training data.", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 131, |
| "text": "Wu et al. (2014)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 304, |
| "end": 324, |
| "text": "Miller et al. (2017)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work in Negation and", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We are only aware of a few papers that have applied natural language processing in the veterinary domain. Ding and Riloff (2015) conducted work on detecting mentions of medication usage in a discussion forum for veterinarians, and categorizing the usage of the medication. A classifier determines whether each word is part of a medication mention using features such as the POS tags and neighbouring words. The output of the medication mention detector is used by another classifier to determine its usage category, such as whether the clinician prescribed the medication or changed it. Text classification is a task that has been previously applied to veterinary clinical records. Anholt et al. (2014) performed classification of a collection of veterinary medical records to identify cases of enteric syndrome. Lam et al. (2007) used clinical records of racing horses to categorise their reason for retirement. Duz et al. (2017) used classification to identify cases of certain conditions and drug use in clinical records from equine veterinary practices. In each of these studies, a dictionary was compiled to identify and detect phrases that indicate a certain category.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 128, |
| "text": "Ding and Riloff (2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 811, |
| "end": 828, |
| "text": "Lam et al. (2007)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 911, |
| "end": 928, |
| "text": "Duz et al. (2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Veterinary NLP", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Currently, there is no publicly available corpus for training models over veterinary clinical notes. However, the BioScope corpus (Szarvas et al., 2008) provides a relevant dataset from which to train out-of-domain models. It is a publicly available collection of biomedical documents that have been annotated for both negation and speculation, in the form of cue words and their scope (see Section 1). BioScope consists of three subcollections: clinical radiology notes, biological papers, and abstracts of biological papers from the GENIA corpus (Collier et al., 1999) .", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 152, |
| "text": "(Szarvas et al., 2008)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 548, |
| "end": 570, |
| "text": "(Collier et al., 1999)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BioScope Corpus", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The VetCompass project is a collection of clinical records of veterinary consultations from several participating practices, to support analysis of animal health trends (McGreevy et al., 2017) . To conduct these studies, clinicians use an information retrieval (IR) front-end to retrieve clinical records related to their particular information need, based on Boolean searches. A major bottleneck for the naive IR setup of returning all matching documents is the prevalence of term occurrences in negated or speculative contexts, which dominate the results for many queries. This is the primary motivation for this research: to improve the quality of the search results by filtering out document matches where the component term only occurs in a negated or speculative context. The major challenge here is that the language used in the veterinary clinical notes of VetCompass differs from that used in related publicly available datasets such as the BioScope radiology clinical notes.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 192, |
| "text": "(McGreevy et al., 2017)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "VetCompass Corpus", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The corpus used in this work was constructed from a random sample of 1 million clinical records from VetCompass UK. 2 VetCompass clinical records contain a wide variety of text. Many records contain free text describing the clinician's observations, hypotheses, and descriptions of treatments and future actions. However, there are also records that contain only billing information, document the weight of the patient, or are reminders to perform certain actions like sending an invoice to the owner of the patient.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Compared to the BioScope radiology clinical notes, VetCompass clinical notes are much more informal, possibly due to the fact that they are largely \"notes to self\" (see Section 1). As such, ad hoc abbreviations and shortenings of terms are common, as shown in Examples (3) and (4). There are certain negation and speculation cue terms that appear only in the VetCompass corpus, such as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(8) Examination: v lively, [[NEG nad on oral exam NEG ]]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "and ghc all fine. The term nad is often used in place of no acute distress or no abnormalities detected, and is an instance of negation. Question marks were often used as speculative cue terms, as in Example (9). The use of domain-specific cue terms presents a challenge for applying models that were trained on a corpus like the BioScope clinical notes. Misspellings, grammatical errors and a lack of punctuation are also common in the text of the veterinary general practice clinical notes, e.g. in Examples (10) and (11). In Example (10), the negation cue without is misspelled. In Example (11), punctuation is missing, making it hard to clearly separate the different statements in the sentence, and suggesting that pure parser-based approaches will struggle over this data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In terms of annotation, while some abbreviations, shorthands, misspellings, and punctuation errors are easy to interpret, others are more difficult to understand:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(12) -other poss: renal diz (given that had low sg + proteinuria,\u02c6BUN/\u02c6Phosp BUT N -creat)/liver diz (given hepatomegally on rads +\u02c6ALP, Bile acids, Cholest, ? low sod/K+ ratio -could be related to kids or addisonian crisis BUT no hx of pu/pd. Symbols like \u02c6 require domain expertise to interpret. The appearance of terms like poss indicates that the sentence contains speculation, but the irregular use of punctuation makes determining the correct boundaries of the speculation scope difficult. In fact, the absence of certain punctuation marks such as full stops can make it difficult for sentence tokenizers to work correctly. In the VetCompass corpus, a single statement of speculation is sometimes expressed using multiple speculation cue terms, e.g.: Here, the clinician is reporting that the owner of the patient (shortened to o) speculated that the patient was stung, as indicated by two cue terms, thinks and poss, presumably to indicate their lack of confidence in the statement. Such instances of \"double hedging\" are very rare in BioScope, presenting an extra point of differentiation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion of VetCompass Corpus", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Here, we outline the annotation guidelines for the VetCompass corpus, which borrow heavily from the BioScope annotation guidelines. As per the BioScope annotation guidelines, sentences from VetCompass are annotated for speculation if they express uncertainty or speculation, and annotated for negation if they express the non-existence of something. The min-max strategy of BioScope annotation is also followed (Szarvas et al., 2008) . Negation/speculation cues are annotated such that the minimal unit that expresses negation/speculation by itself is marked. Scopes are then annotated relative to cue words, to have maximal size or the largest syntactic unit possible. Below, we detail important deviations from the BioScope annotation guidelines, which are motivated in part by the usage of the negation/speculation detection system in an information retrieval context.", |
| "cite_spans": [ |
| { |
| "start": 412, |
| "end": 434, |
| "text": "(Szarvas et al., 2008)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation Guidelines", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The VetCompass annotation guidelines use the same set of cue words as BioScope, with the addition of NAD (a negation cue -see above), question marks (which are potentially speculation cues -see above), and shortened and misspelled variants of cue words (like poss for possible).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "As with BioScope, not all occurrences of a negation or speculation keyword indicate negation or speculation. For instance, occurrences of negation or speculation keywords in descriptions of proposed actions are generally not annotated for negation or speculation. Examples of such cases are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "(14) Advised to not give last onsior due to d+.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "(15) Suggested FNA if increase in size", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "In Example (14), not is not annotated as a negation cue since the sentence is stating a recommendation rather than expressing the absence or opposite of anything. In Example (15), suggested is not annotated since it is being used in the sense of proposing an action rather than hypothesising. These examples are also not annotated because of the utility they might provide for a clinician. If a clinician was researching FNA, the document containing Example (15) would be potentially useful for understanding situations where such a procedure was proposed. However, actions performed in the past that contain negation or speculation are annotated, such as cannot in Example (6), which clearly expresses the opposite of the ability to perform that action.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "Conditionals are another situation where negation or speculation keywords may not always be annotated as cues. If a negation or speculation keyword appears in the clause expressing the condition (the clause containing the if), then it should not be annotated as a cue, as demonstrated in the following examples: Here, there is no clear negation or speculation, but rather the lack of something in the conditional (e.g. consider euthanasia) or consequent (e.g. treatment). While these two sentences may be annotated under the BioScope annotation guidelines, we chose not to do this for the VetCompass clinical records because of the utility they might provide for a clinician. Even if a certain term is negated inside of a conditional, there is usually other information in the clinical record that provides instructions about what to do in non-negated circumstances, which is useful for a clinician. In the case of a term being speculated inside of a conditional, the consequences of the term occurring are certain even if the condition has not occurred.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Cues", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "In many cases, negation and speculation scopes start at cue terms and end at the end of the clause or sentence. However, punctuation is often omitted, meaning that the boundaries of clauses and sentences can be unclear. The annotator must use their own judgement and interpretation of the sentence in order to create a suitable annotation. The following example demonstrates a sentence where an annotator must interpret the sentence to understand where the clause boundaries are: Unlike the BioScope annotations, VetCompass clinical records were not annotated to contain nested speculation scopes, i.e. speculation scopes are never contained within other speculation scopes. This decision was motivated by the expected retrieval usage of the negation/speculation system: such information does not provide additional help in filtering out negated or speculated mentions of certain terms from search results. An example of the implication of this guideline is shown in the following sentence, which is annotated with one negation scope and one speculation scope: The above sentence would have been annotated with three nested speculation scopes under the BioScope annotation guidelines. However, under the VetCompass annotation guidelines, only a single speculation scope is annotated, containing three separate speculation cues. If a user wanted to search for documents mentioning trichobezoars, this sentence would not be retrieved regardless of whether the nested structure is annotated or not. However, nested negation scopes in VetCompass are annotated. Moreover, speculation scopes that are nested within a negation scope, and vice versa, are also annotated. Table 1: Inter-annotator agreement rates. 3.3 Annotation Process: 1041 records were randomly selected for annotation. These were divided into a training set, development set and test set, comprising 624, 208 and 209 records, respectively. The data was single-annotated by the first author using the BRAT annotation tool (Stenetorp et al., 2012), in consultation with the other authors in instances of doubt. 100 records (containing 586 sentences) from the test set were selected and annotated by one of the other authors, following the guidelines in Section 3.1. The agreement between the two annotators was calculated using Cohen's kappa (\u03ba) and F1-score (obtained by treating the annotations made by the main annotator as the gold standard). We measure the extent to which the two annotators agreed that a particular token is a negation/speculation cue or scope. The inter-annotator agreement is reported in Table 1.", |
| "cite_spans": [ |
| { |
| "start": 1986, |
| "end": 2010, |
| "text": "(Stenetorp et al., 2012)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1666, |
| "end": 1673, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 2573, |
| "end": 2580, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation of Scopes", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "The \u03ba values in Table 1 demonstrate a reasonable amount of agreement between the two annotators. However, there is still some subjectivity, particularly for the speculation cues.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 23, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation of Scopes", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "There are several reasons for the discrepancy in annotations between the two annotators: (1) the limited experience in linguistics and text analysis on the part of the main annotator of VetCompass;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Scopes", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "(2) the lack of pre-training for annotating the VetCompass corpus for the other annotator, beyond receiving the annotation guidelines; and (3) the different levels of familiarity with the datasets of BioScope and VetCompass.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation of Scopes", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Sentence tokenization was performed to prepare the corpus for usage, based on the findings of Read et al. (2012) . The output of the sentence tokenizer was converted into the BRAT annotation format so that the output could be manually corrected if needed. However, the correction was not a systematic process. A sentence tokenization output was corrected only if it was clearly incorrect from a quick inspection during the annotation process. Most corrections only occurred when nega- Table 2 provides details of the annotated corpus.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 112, |
| "text": "Read et al. (2012)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 485, |
| "end": 492, |
| "text": "Table 2", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preparation of corpus", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In general, large variations in sentence length can be observed: some sentences are as short as two words (e.g. reporting the patient weight), while others contain long detailed descriptions of the consultation. The annotated VetCompass corpus contains a slightly lower proportion of negated sentences compared to those in the BioScope clinical notes (where 13.55% of the sentences were annotated as negated), and a much lower proportion of speculative sentences (compared to 13.39% in BioScope).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary of Corpus", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "To evaluate whether the task of negation and speculation detection can be applied to the veterinary clinical notes of VetCompass, a simple linear-chain conditional random field (CRF: Lafferty et al. (2001) ) model was trained, in the form of a re-implementation of the negation and speculation detection methods proposed by Agarwal and Yu (2010a,b) . The negation detection system consists of two parts: a cue detection system, and a scope detection system. The cue detection system is a CRF that classifies whether or not a given token is a negation cue. A CRF was used for cue detection to be able to model cue words that appear in both negation and non-negation contexts, and to model multiword cues. The scope detection system is also a CRF, and classifies whether or not a token in a sentence is part of a negation scope.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 204, |
| "text": "Lafferty et al. (2001)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 323, |
| "end": 347, |
| "text": "Agarwal and Yu (2010a,b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The negation cue CRF uses only the words of the sentence as features. For the negation scope CRF, both the words of the sentence and the POS tags were used. When POS tags are used, the words that are part of a negation cue (as detected by the negation cue CRF model) are either retained or replaced with a special CUE tag. The speculation detection system has a similar setup, except the system classifies a token as being inside or outside a speculation cue or scope. The cue detection system is based on the following features: the target word, and the two words to the left and right of the target word. The scope detection system determines if a token is inside or outside a negation or speculation signal using the words and POS tags of the token and of the five tokens to the left and right.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Our experiments are based on the corpus described in Section 3.4. The size of the context window for the CRF model was selected based on preliminary experiments with the development set. The parameter that achieved the best F-score over that set was chosen. NLTK 3 was used to tokenise the sentences and obtain the POS tags. As our CRF learner, we used CRF++ v0.58. 4 In our experiments, CRF models were either trained on the BioScope clinical dataset, VetCompass, or both.", |
| "cite_spans": [ |
| { |
| "start": 365, |
| "end": 366, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Description", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We used the NegEx system and LingScope as baselines. LingScope is a Java implementation of the CRF models developed by Agarwal and Yu (2010a,b) . It contains models that were pretrained using the BioScope clinical data. Though our CRF model and LingScope were based on the same paper, LingScope differs from our models through the use of a different CRF implementation (using the CRF model provided by the Abner tool (Settles, 2005) ), the size of the context window used for the classification, and the POS tagger (the Stanford POS tagger).", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 139, |
| "text": "Agarwal and Yu (2010a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 413, |
| "end": 428, |
| "text": "(Settles, 2005)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We used a Python implementation of NegEx. 5 This version of NegEx detects negation scopes to be between a trigger term/phrase identified by NegEx and either a conjunction, start or end of a sentence (which can be longer than the limit of five tokens in the original version of NegEx by Chapman et al. (2001) ).", |
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 307, |
| "text": "Chapman et al. (2001)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4.2" |
| }, |
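The scope rule of this NegEx variant can be sketched roughly as follows (an illustrative approximation only; the function name and the conjunction list are hypothetical, and the real implementation also handles triggers whose scope extends to the left and pseudo-negation terms):

```python
# Illustrative subset; the real NegEx uses a larger termination-term list.
CONJUNCTIONS = {"but", "however", "although"}

def negex_scope(tokens, trigger_idx):
    """Collect the scope to the right of a negation trigger, stopping at
    a conjunction or the end of the sentence -- with no five-token cap,
    unlike the original NegEx."""
    scope = []
    for tok in tokens[trigger_idx + 1:]:
        if tok.lower() in CONJUNCTIONS:
            break
        scope.append(tok)
    return scope

print(negex_scope("no coughing or sneezing but eating well".split(), 0))
```

Here the scope of *no* ends at the conjunction *but*, so *eating well* is excluded.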
| { |
| "text": "Our experiments were based on the fixed split of the corpus described in Section 3.3. We evaluate both the cue detection and scope detection system using precision (P), recall (R) and microaverage F-score (F). Evaluation was performed on a token-level based on whether it is inside or outside of any negation/speculation cue or scope.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.3" |
| }, |
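Token-level evaluation of this kind can be sketched as follows (a hypothetical helper, treating each token as a binary in/out decision against the gold annotation):

```python
def token_prf(gold, pred):
    """Token-level precision, recall and F-score over binary labels
    (1 = inside a cue/scope, 0 = outside), as in the evaluation above."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

print(token_prf([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

Micro-averaging corresponds to pooling the token-level counts over all test sentences before computing P, R and F.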
| { |
| "text": "We experimented with using different training data to determine whether models trained on outof-domain data such as BioScope clinical data are suitable for veterinary clinical notes. Since Bio-Scope clinical dataset is much larger than Vet-Compass, we also experimented with oversampling of instances from the VetCompass training data when both corpora were used for training (at oversampling rates of 1, 2 and 5). When an oversampling rate of 2 is used, we use two duplicates of each VetCompass training record during the training process, and similarly for oversampling rate of 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.3" |
| }, |
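The oversampling scheme above amounts to simple duplication of the in-domain records (a sketch; the function name is hypothetical):

```python
def combine_training(bioscope, vetcompass, rate=1):
    """Build a combined training set: all BioScope instances plus the
    VetCompass instances duplicated `rate` times, mirroring the
    oversampling rates of 1, 2 and 5 used in the experiments above."""
    return list(bioscope) + list(vetcompass) * rate

combined = combine_training(["b1", "b2", "b3"], ["v1"], rate=2)
print(combined)  # → ['b1', 'b2', 'b3', 'v1', 'v1']
```

Duplication shifts the CRF's effective training distribution towards the in-domain data without discarding any out-of-domain instances.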
| { |
| "text": "Results for negation cue detection and negation scope detection are presented in Table 3 and Table 4, respectively. Results for speculation cue detection and speculation scope detection are presented in Table 5 and Table 6 , respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 81, |
| "end": 88, |
| "text": "Table 3", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 203, |
| "end": 222, |
| "text": "Table 5 and Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "When trained only on BioScope clinical data, the CRF systems (for both cue detection and scope detection) performed worse than their respective baselines. The model only outperforms the baselines when VetCompass training data is used. For negation cue detection and scope detection, incorporating both BioScope clinical data with VetCompass records as training instances helps improves the F-scores for most cases. Further marginal improvements can be achieved with oversampling of the VetCompass training instances as well in most cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "However, for speculation cue and scope detection, the inclusion of BioScope clinical data with VetCompass training data helps improve the recall but reduces the precision, leading to only marginal improvements in F-scores. Oversampling Vet-Compass helps to improve the precision, recall and F-score slightly, but the precision is still lower than when the BioScope clinical data was not included in the training set. In both speculation cue detection and scope detection results, the recall is consistently much lower than the precision. The re- Table 5 : Results for Speculation Cue Detection Training data used for CRF models are either Bio-Scope (BIO) and VetCompass (VC) or both sults achieved for speculation detection are also much lower than those achieved for negation detection.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 546, |
| "end": 553, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Unsurprisingly, when the cue detection system does not incorporate VetCompass data, cues that appear only in VetCompass records were usually not detected. For negation, these cues in- Table 6 : Results for Speculation Scope Detection. Training data used for CRF models are either Bio-Scope (BIO) and VetCompass (VC) or both clude NAD, unable, and contractions such as doesn't. For speculation, these cues include question marks, poss and think. In speculation cue detection, it was particularly important to have indomain training data as there are more domainspecific speculation cues.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 184, |
| "end": 191, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "However, even with VetCompass training data, the cue detection systems (particularly speculation cue detection) still have difficulty detecting all of the cues. Some of this was caused by cue words being misspelled (e.g. doestn instead of doesn't) or a variant not seen in the training data (such as susp for suspect). A useful feature could be to use word or string similarity to known cue terms to overcome this issue. Author or patient metadata could also be useful, since some of this is consistent across consultations for a given individual. Such data could be used as additional features for a classifier or by having separate models for different authors/patients. However, even cues where the form appears in the training data are still sometimes not detected by our system, particularly for speculation cues. This may be because the system was not able to generalise from the limited training data. There was also a greater variety of speculation cues than negation cues. This observation, combined with the smaller proportion of sentences that were speculative, means that there were less training instances for each possible speculation cue.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Both negation and speculation cues also have false-positives that resulted from identifying negation-like or speculation-like terms, such as not bad. The speculation cue detection system also often did not detect speculation cues that contained negation-like terms such as not sure, while the negation cue detection system incorrectly classifies the not in this example as a negation cue. The errors in cue detection create further errors in the associated scope detection system. However, even with correctly detected cues, the scope detection system still has problems with recall. In most of these cases, the system does not correctly determine one token at the start or end of the scope as being part of it. If the scope is very long, the system will often only detect the first few tokens as being part of the scope and miss the remaining tokens. Scopes where the cues are question marks are also often smaller than the reference annotation, as the system usually only includes the token directly to the left or right of the question mark as part of the speculation scope.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "http://www.rvc.ac.uk/vetcompass", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.rvc.ac.uk/VetCOMPASS", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.nltk.org/ 4 https://taku910.github.io/crfpp/ 5 https://code.google.com/archive/p/ negex/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This paper describes the annotation of a new dataset for negation and speculation detection over veterinary clinical notes. We reimplemented a simple CRF approach for detecting negation and speculation cues and scope, and trained the model over VetCompass training data, BioScope, or both. Our results demonstrated that while datasets such as the BioScope clinical corpus have utility, indomain training data is often necessary to attain reasonable performance levels, particularly for speculation detection.Further work will focus on improving the recall of negation and speculation detection systems for veterinary clinical notes. Improving the recall is important for the IR use case that the system will be deployed in. We will also focus on expanding the features used for classification, and experiment with different classifiers. Another focus could be on learning features that are particular to the different authors of notes, and using these to improve negation and speculation detection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Further Work", |
| "sec_num": "6" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Biomedical negation scope detection with conditional random fields", |
| "authors": [ |
| { |
| "first": "Shashank", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of the American Medical Informatics Association", |
| "volume": "17", |
| "issue": "6", |
| "pages": "696--701", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shashank Agarwal and Hong Yu. 2010a. Biomedical negation scope detection with conditional random fields. Journal of the American Medical Informat- ics Association 17(6):696-701.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Detecting hedge cues and their scope in biomedical text with conditional random fields", |
| "authors": [ |
| { |
| "first": "Shashank", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Biomedical Informatics", |
| "volume": "43", |
| "issue": "6", |
| "pages": "953--961", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shashank Agarwal and Hong Yu. 2010b. Detecting hedge cues and their scope in biomedical text with conditional random fields. Journal of Biomedical Informatics 43(6):953-961.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Mining free-text medical records for companion animal enteric syndrome surveillance", |
| "authors": [ |
| {
| "first": "R",
| "middle": [
| "Michele"
| ],
| "last": "Anholt",
| "suffix": ""
| },
| {
| "first": "John",
| "middle": [],
| "last": "Berezowski",
| "suffix": ""
| },
| {
| "first": "Iqbal",
| "middle": [],
| "last": "Jamal",
| "suffix": ""
| },
| {
| "first": "Carl",
| "middle": [],
| "last": "Ribble",
| "suffix": ""
| },
| {
| "first": "Craig",
| "middle": [],
| "last": "Stephen",
| "suffix": ""
| }
| ], |
| "year": 2014, |
| "venue": "Preventive Veterinary Medicine", |
| "volume": "113", |
| "issue": "4", |
| "pages": "417--422", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R Michele Anholt, John Berezowski, Iqbal Jamal, Carl Ribble, and Craig Stephen. 2014. Mining free-text medical records for companion animal enteric syn- drome surveillance. Preventive Veterinary Medicine 113(4):417-422.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A simple algorithm for identifying negated findings and diseases in discharge summaries", |
| "authors": [ |
| {
| "first": "Wendy",
| "middle": [
| "W"
| ],
| "last": "Chapman",
| "suffix": ""
| },
| {
| "first": "Will",
| "middle": [],
| "last": "Bridewell",
| "suffix": ""
| },
| {
| "first": "Paul",
| "middle": [],
| "last": "Hanbury",
| "suffix": ""
| },
| {
| "first": "Gregory",
| "middle": [
| "F"
| ],
| "last": "Cooper",
| "suffix": ""
| },
| {
| "first": "Bruce",
| "middle": [
| "G"
| ],
| "last": "Buchanan",
| "suffix": ""
| }
| ], |
| "year": 2001, |
| "venue": "Journal of biomedical informatics", |
| "volume": "34", |
| "issue": "5", |
| "pages": "301--310", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wendy W Chapman, Will Bridewell, Paul Hanbury, Gregory F Cooper, and Bruce G Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of biomedical informatics 34(5):301-310.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The genia project: corpus-based knowledge acquisition and information extraction from genome research papers", |
| "authors": [ |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Collier", |
| "suffix": "" |
| }, |
| { |
| "first": "Hyun", |
| "middle": [ |
| "Seok" |
| ], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "Norihiro", |
| "middle": [], |
| "last": "Ogata", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuka", |
| "middle": [], |
| "last": "Tateishi", |
| "suffix": "" |
| }, |
| { |
| "first": "Chikashi", |
| "middle": [], |
| "last": "Nobata", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "Tateshi", |
| "middle": [], |
| "last": "Sekimizu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hisao", |
| "middle": [], |
| "last": "Imai", |
| "suffix": "" |
| }, |
| { |
| "first": "Katsutoshi", |
| "middle": [], |
| "last": "Ibushi", |
| "suffix": "" |
| }, |
| { |
| "first": "Junichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "271--272", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nigel Collier, Hyun Seok Park, Norihiro Ogata, Yuka Tateishi, Chikashi Nobata, Tomoko Ohta, Tateshi Sekimizu, Hisao Imai, Katsutoshi Ibushi, and Jun- ichi Tsujii. 1999. The genia project: corpus-based knowledge acquisition and information extraction from genome research papers. In Proceedings of the ninth conference on European chapter of the Associ- ation for Computational Linguistics. Association for Computational Linguistics, pages 271-272.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A machine-learning approach to negation and speculation detection in clinical texts", |
| "authors": [ |
| {
| "first": "Noa",
| "middle": [
| "P"
| ],
| "last": "Cruz D\u00edaz",
| "suffix": ""
| },
| {
| "first": "Manuel",
| "middle": [
| "J"
| ],
| "last": "Ma\u00f1a L\u00f3pez",
| "suffix": ""
| },
| {
| "first": "Jacinto",
| "middle": [],
| "last": "Mata V\u00e1zquez",
| "suffix": ""
| },
| {
| "first": "Victoria",
| "middle": [],
| "last": "Pach\u00f3n \u00c1lvarez",
| "suffix": ""
| }
| ], |
| "year": 2012, |
| "venue": "Journal of the Association for Information Science and Technology", |
| "volume": "63", |
| "issue": "7", |
| "pages": "1398--1410", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noa P Cruz D\u00edaz, Manuel J Ma\u00f1a L\u00f3pez, Jacinto Mata V\u00e1zquez, and Victoria Pach\u00f3n\u00c1lvarez. 2012. A machine-learning approach to negation and specula- tion detection in clinical texts. Journal of the As- sociation for Information Science and Technology 63(7):1398-1410.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Extracting information about medication use from veterinary discussions", |
| "authors": [ |
| { |
| "first": "Haibo", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Human Language Technologies: The", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haibo Ding and Ellen Riloff. 2015. Extracting infor- mation about medication use from veterinary discus- sions. In Human Language Technologies: The 2015", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Annual Conference of the North American Chapter of the ACL", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1452--1458", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chapter of the ACL. pages 1452-1458.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Validation of an improved computer-assisted technique for mining free-text electronic medical records", |
| "authors": [ |
| {
| "first": "Marco",
| "middle": [],
| "last": "Duz",
| "suffix": ""
| },
| {
| "first": "John",
| "middle": [
| "F"
| ],
| "last": "Marshall",
| "suffix": ""
| },
| {
| "first": "Tim",
| "middle": [],
| "last": "Parkin",
| "suffix": ""
| }
| ], |
| "year": 2017, |
| "venue": "JMIR Medical Informatics", |
| "volume": "5", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Duz, John F Marshall, and Tim Parkin. 2017. Validation of an improved computer-assisted technique for mining free-text electronic medical records. JMIR Medical Informatics 5(2):e17.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando Cn", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 18th International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning. pages 282- 289.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Use of free text clinical records in identifying syndromes and analysing health data", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lam", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Parkin", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Riggs", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Morgan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Veterinary Record", |
| "volume": "161", |
| "issue": "16", |
| "pages": "547--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K Lam, Tim Parkin, Christopher Riggs, and Kenton Morgan. 2007. Use of free text clinical records in identifying syndromes and analysing health data. Veterinary Record 161(16):547-51.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "VetCompass Australia: Big data and real-time surveillance for veterinary science", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Mcgreevy", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Thomson", |
| "suffix": "" |
| }, |
| { |
| "first": "Navneet", |
| "middle": [], |
| "last": "Dhand", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Raubenheimer", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophie", |
| "middle": [], |
| "last": "Masters", |
| "suffix": "" |
| }, |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Mansfield", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ricardo", |
| "middle": [], |
| "last": "Soares Magalhaes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacquie", |
| "middle": [], |
| "last": "Rand", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Anne", |
| "middle": [], |
| "last": "Peaston", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Animals", |
| "volume": "7", |
| "issue": "10", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul McGreevy, Peter Thomson, Navneet Dhand, David Raubenheimer, Sophie Masters, Caroline Mansfield, Tim Baldwin, Ricardo Soares Magal- haes, Jacquie Rand, Peter Hill, Anne Peaston, James Gilkerson, Martin Combs, Shane Raidal, Peter Ir- win, Peter Irons, Richard Squires, David Brodbelt, and Jeremy Hammond. 2017. VetCompass Aus- tralia: Big data and real-time surveillance for vet- erinary science. Animals 7(10).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Unsupervised domain adaptation for clinical negation detection", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "Hadi", |
| "middle": [], |
| "last": "Amiri", |
| "suffix": "" |
| }, |
| { |
| "first": "Guergana", |
| "middle": [], |
| "last": "Savova", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "BioNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "165--170", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Miller, Steven Bethard, Hadi Amiri, and Guergana Savova. 2017. Unsupervised domain adaptation for clinical negation detection. BioNLP 2017 pages 165-170.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning the scope of hedge cues in biomedical texts", |
| "authors": [ |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Morante", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "28--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roser Morante and Walter Daelemans. 2009a. Learn- ing the scope of hedge cues in biomedical texts. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing. Associa- tion for Computational Linguistics, pages 28-36.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A metalearning approach to processing the scope of negation", |
| "authors": [ |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Morante", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "21--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roser Morante and Walter Daelemans. 2009b. A met- alearning approach to processing the scope of nega- tion. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning. As- sociation for Computational Linguistics, pages 21- 29.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning the scope of negation in biomedical texts", |
| "authors": [ |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Morante", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Liekens", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Daelemans", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "715--724", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roser Morante, Anthony Liekens, and Walter Daele- mans. 2008. Learning the scope of negation in biomedical texts. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 715-724.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Use of general-purpose negation detection to augment concept indexing of medical documents: a quantitative study using the umls", |
| "authors": [ |
| {
| "first": "Pradeep",
| "middle": [
| "G"
| ],
| "last": "Mutalik",
| "suffix": ""
| },
| {
| "first": "Aniruddha",
| "middle": [],
| "last": "Deshpande",
| "suffix": ""
| },
| {
| "first": "Prakash",
| "middle": [
| "M"
| ],
| "last": "Nadkarni",
| "suffix": ""
| }
| ], |
| "year": 2001, |
| "venue": "Journal of the American Medical Informatics Association", |
| "volume": "8", |
| "issue": "6", |
| "pages": "598--609", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pradeep G Mutalik, Aniruddha Deshpande, and Prakash M Nadkarni. 2001. Use of general-purpose negation detection to augment concept indexing of medical documents: a quantitative study using the umls. Journal of the American Medical Informatics Association 8(6):598-609.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Sentence boundary detection: A long solved problem?", |
| "authors": [ |
| { |
| "first": "Jonathon", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Dridan", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Lars", |
| "middle": [ |
| "J\u00f8rgen" |
| ], |
| "last": "Solberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "COLING (Posters)", |
| "volume": "12", |
| "issue": "", |
| "pages": "985--994", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathon Read, Rebecca Dridan, Stephan Oepen, and Lars J\u00f8rgen Solberg. 2012. Sentence boundary de- tection: A long solved problem? COLING (Posters) 12:985-994.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Abner: an open source tool for automatically tagging genes, proteins and other entity names in text", |
| "authors": [ |
| { |
| "first": "Burr", |
| "middle": [], |
| "last": "Settles", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Bioinformatics", |
| "volume": "21", |
| "issue": "14", |
| "pages": "3191--3192", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Burr Settles. 2005. Abner: an open source tool for au- tomatically tagging genes, proteins and other entity names in text. Bioinformatics 21(14):3191-3192.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Brat: a web-based tool for nlp-assisted text annotation", |
| "authors": [ |
| { |
| "first": "Pontus", |
| "middle": [], |
| "last": "Stenetorp", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "Goran", |
| "middle": [], |
| "last": "Topi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "Sophia", |
| "middle": [], |
| "last": "Ananiadou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "102--107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. Brat: a web-based tool for nlp-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics. Associa- tion for Computational Linguistics, pages 102-107.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The bioscope corpus: annotation for negation, uncertainty and their scope in biomedical texts", |
| "authors": [ |
| { |
| "first": "Gy\u00f6rgy", |
| "middle": [], |
| "last": "Szarvas", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Vincze", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich\u00e1rd", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00e1nos", |
| "middle": [], |
| "last": "Csirik", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "38--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gy\u00f6rgy Szarvas, Veronika Vincze, Rich\u00e1rd Farkas, and J\u00e1nos Csirik. 2008. The bioscope corpus: anno- tation for negation, uncertainty and their scope in biomedical texts. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing. Association for Computational Linguis- tics, pages 38-45.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Negation's not solved: generalizability versus optimizability in clinical natural language processing", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Masanz", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Coarr", |
| "suffix": "" |
| }, |
| { |
| "first": "Scott", |
| "middle": [], |
| "last": "Halgrim", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Carrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Cheryl", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "PloS one", |
| "volume": "9", |
| "issue": "11", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Wu, Timothy Miller, James Masanz, Matt Coarr, Scott Halgrim, David Carrell, and Cheryl Clark. 2014. Negation's not solved: generalizability versus optimizability in clinical natural language processing. PloS one 9(11):e112774.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "(1) The lungs are well expanded, but [[NEG not hyperinflated NEG ]]. (2) Mild thoracic curvature, [[SPEC possibly positional SPEC ]].", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "(13) History-o concerned swollen lower lip, [[SPEC thinks poss stung SPEC ]], been there 2d", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "Adv if O not wanting to consider euthanasia then need to get a veterinary behaviourist involved ASAP (17) Stop treatment immediately if vomiting or diarrhoea occurs", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "(19) [[NEG No obvious mass NEG ]], [[SPEC suspect poss trichobezoars? SPEC ]]", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "are very common, and informal speculative expressions such as feels like and looks like are prevalent:" |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td>: VetCompass NegSpec Corpus Statistics</td></tr><tr><td>negation/speculation scopes had the potential to cross</td></tr><tr><td>sentence boundaries or in clear instances where</td></tr><tr><td>correct sentence boundaries were not added. Only</td></tr><tr><td>about 10% of the corpus underwent correction for</td></tr><tr><td>sentence tokenization.</td></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "" |
| }, |
| "TABREF9": { |
| "content": "<table><tr><td>System</td><td>Training Set</td><td>P</td><td>R</td><td>F</td></tr><tr><td>NegEx</td><td>-</td><td colspan=\"3\">56.3 75.4 64.4</td></tr><tr><td>LingScope (word)</td><td>-</td><td colspan=\"3\">79.3 52.4 63.1</td></tr><tr><td>LingScope (POS; keep cue)</td><td>-</td><td colspan=\"3\">66.8 64.4 65.6</td></tr><tr><td colspan=\"2\">LingScope (POS; replace cue) -</td><td colspan=\"3\">65.9 62.6 64.2</td></tr><tr><td/><td>VC</td><td colspan=\"3\">87.9 64.4 74.4</td></tr><tr><td/><td>BIO</td><td colspan=\"3\">70.3 57.8 63.4</td></tr><tr><td>CRF (word)</td><td>BIO + VC</td><td colspan=\"3\">86.8 68.1 76.3</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d72 87.4 68.0 76.5</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d75 88.1 68.3 77.0</td></tr><tr><td/><td>VC</td><td colspan=\"3\">86.6 68.0 76.1</td></tr><tr><td/><td>BIO</td><td colspan=\"3\">78.2 51.1 61.8</td></tr><tr><td>CRF (POS; keep cue)</td><td>BIO + VC</td><td colspan=\"3\">84.8 71.3 77.5</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d72 85.1 74.3 79.3</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d75 85.5 73.3 79.0</td></tr><tr><td/><td>VC</td><td colspan=\"3\">81.5 67.0 73.6</td></tr><tr><td/><td>BIO</td><td colspan=\"3\">63.6 55.7 59.4</td></tr><tr><td>CRF (POS; replace cue)</td><td>BIO + VC</td><td colspan=\"3\">82.2 70.7 76.0</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d72 82.4 74.4 78.2</td></tr><tr><td/><td colspan=\"4\">BIO + VC\u00d75 82.1 73.9 77.8</td></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "Results for Negation Cue Detection. Training data for the CRF models is either BioScope (BIO) or VetCompass (VC), or both" |
| }, |
| "TABREF10": { |
| "content": "<table><tr><td/><td>P</td><td>R</td><td>F</td></tr><tr><td>LingScope</td><td colspan=\"3\">43.3 27.6 33.7</td></tr><tr><td>CRF (VC)</td><td colspan=\"3\">88.7 44.8 59.5</td></tr><tr><td>CRF (BIO)</td><td colspan=\"3\">19.7 24.8 21.9</td></tr><tr><td>CRF (BIO + VC)</td><td colspan=\"3\">76.5 49.5 60.1</td></tr><tr><td colspan=\"4\">CRF (BIO + VC\u00d72) 79.7 52.4 63.2</td></tr><tr><td colspan=\"4\">CRF (BIO + VC\u00d75) 81.4 54.3 65.1</td></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "Results for Negation Scope Detection. Training data for the CRF models is either BioScope (BIO) or VetCompass (VC), or both" |
| } |
| } |
| } |
| } |