{
"paper_id": "J12-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:00:41.375389Z"
},
"title": "Cross-Genre and Cross-Domain Detection of Semantic Uncertainty",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": "",
"affiliation": {},
"email": "vinczev@inf.u-szeged.hu"
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": "",
"affiliation": {},
"email": "farkas@ims.uni-stuttgart.de"
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "M\u00f3ra",
"suffix": "",
"affiliation": {},
"email": "gymora@inf.u-szeged.hu"
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Uncertainty is an important linguistic phenomenon that is relevant in various Natural Language Processing applications, in diverse genres from medical to community-generated, newswire or scientific discourse, and domains from science to humanities. The semantic uncertainty of a proposition can be identified in most cases by using a finite dictionary (i.e., lexical cues), and the key steps of uncertainty detection in an application include locating the (genre- and domain-specific) lexical cues, disambiguating them, and linking them with the units of interest for the particular application (e.g., identified events in information extraction). In this study, we focus on the genre and domain differences of the context-dependent semantic uncertainty cue recognition task. We introduce a unified subcategorization of semantic uncertainty, as different domain applications can apply different uncertainty categories. Based on this categorization, we normalized the annotation of three corpora and present results with a state-of-the-art uncertainty cue recognition model for four fine-grained categories of semantic uncertainty.",
"pdf_parse": {
"paper_id": "J12-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "Uncertainty is an important linguistic phenomenon that is relevant in various Natural Language Processing applications, in diverse genres from medical to community-generated, newswire or scientific discourse, and domains from science to humanities. The semantic uncertainty of a proposition can be identified in most cases by using a finite dictionary (i.e., lexical cues), and the key steps of uncertainty detection in an application include locating the (genre- and domain-specific) lexical cues, disambiguating them, and linking them with the units of interest for the particular application (e.g., identified events in information extraction). In this study, we focus on the genre and domain differences of the context-dependent semantic uncertainty cue recognition task. We introduce a unified subcategorization of semantic uncertainty, as different domain applications can apply different uncertainty categories. Based on this categorization, we normalized the annotation of three corpora and present results with a state-of-the-art uncertainty cue recognition model for four fine-grained categories of semantic uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In computational linguistics, especially in information extraction and retrieval, it is of the utmost importance to distinguish between uncertain statements and factual information. In most cases, what the user needs is factual information, hence uncertain propositions should be treated in a special way: Depending on the exact task, the system should either ignore such texts or separate them from factual information. In machine translation, it is also necessary to identify linguistic cues of uncertainty because the source and the target language may differ in their toolkit to express uncertainty (one language uses an auxiliary, the other uses just a morpheme). To cite another example, in clinical document classification, medical reports can be grouped according to whether the patient definitely suffers, probably suffers, or does not suffer from an illness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "There are several linguistic phenomena that are referred to as uncertainty in the literature. We consider propositions to which no truth value can be attributed, given the speaker's mental state, as instances of semantic uncertainty. In contrast, uncertainty may also arise at the discourse level, when the speaker intentionally omits some information from the statement, making it vague, ambiguous, or misleading. Determining whether a given proposition is uncertain or not may involve using a finite dictionary of linguistic devices (i.e., cues). Lexical cues (such as modal verbs or adverbs) are responsible for semantic uncertainty whereas discourse-level uncertainty may be expressed by lexical cues and syntactic cues (such as passive constructions) as well. We focus on four types of semantic uncertainty in this study and henceforth the term cue will be taken to mean lexical cue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The key steps of recognizing semantically uncertain propositions in a natural language processing (NLP) application include locating lexical cues for uncertainty, disambiguating them (as not all occurrences of the cues indicate uncertainty), and finally linking them with the textual representation of the propositions in question. The linking of a cue to the textual representation of the proposition can be performed on the basis of syntactic rules that depend on the word class of the lexical cue, but these rules are independent of the actual application domain or text type where the cue is observed. The set of cues used and the frequency of their certain and uncertain usages are domain- and genre-dependent, however, and this has to be addressed if we seek to craft automatic uncertainty detectors. Here we interpret genre as the basic style and formal characteristics of the writing that are independent of its topic (e.g., scientific papers, newswire texts, or business letters), and domain as a particular field of knowledge related to the topic of the text (e.g., medicine, archeology, or politics).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Uncertainty cue candidates do not display uncertainty in all of their occurrences. For instance, the mathematical sense of probable is dominant in mathematical texts whereas its ratio can be relatively low in papers in the humanities. The frequency of the two distinct meanings of the verb evaluate (which can be a synonym of judge [an uncertain meaning] and calculate) is also different in the bioinformatics and cell biology domains. Compare:",
"cite_spans": [
{
"start": 332,
"end": 354,
"text": "[an uncertain meaning]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) To evaluate CUE the PML/RARalpha role in myelopoiesis, transgenic mice expressing PML/RARalpha were engineered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(2) Our method was evaluated on the Lindahl benchmark for fold recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this article we focus on the domain-dependent aspects of uncertainty detection and we examine the recognition of uncertainty cues in context. We do not address the problem of linking cues to propositions in detail (see, e.g., Chapman, Chu, and Dowling [2007] and Kilicoglu and Bergler [2009] for the information extraction case).",
"cite_spans": [
{
"start": 229,
"end": 261,
"text": "Chapman, Chu, and Dowling [2007]",
"ref_id": "BIBREF0"
},
{
"start": 266,
"end": 294,
"text": "Kilicoglu and Bergler [2009]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "For the empirical investigation of the domain-dependent aspects, data sets are required from various domains. To date, several corpora annotated for uncertainty have been constructed for different genres and domains (BioScope, FactBank, WikiWeasel, and MPQA, to name but a few). These corpora cover different aspects of uncertainty, however, being grounded in different linguistic models, which makes it hard to exploit cross-domain knowledge in applications. These differences in part stem from the varied application needs across application domains. Different types of uncertainty and classes of linguistic expressions are relevant for different domains. Although hypotheses and investigations form a crucial part of the relevant cases in scientific applications, they are less prominent in newswire texts, where beliefs and rumors play a major role. This motivates a more fine-grained treatment of uncertainty. In order to bridge the existing gaps between application goals, these typical cases need to be differentiated. A fine-grained categorization enables the individual treatment of each subclass, which is less dependent on domain differences than using one coarse-grained uncertainty class. Moreover, this approach enables each particular application to identify and select from a pool of models only those aspects of uncertainty that are relevant in the specific domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As one of the main contributions of this study, we propose a uniform subcategorization of semantic uncertainty in which all the previous corpus annotation works can be placed, and which reveals the fundamental differences between the currently existing resources. In addition, we manually harmonized the annotations of three corpora and performed the fine-grained labeling according to the suggested subcategorization so as to be able to perform cross-domain experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "An important factor in training robust cross-domain models is to focus on shallow features that can be reliably obtained for many different domains and text types, and to craft models that exploit the shared knowledge from different sources as much as possible, making the adaptation to new domains efficient. The study of learning efficient models across different domains is the subject of transfer learning and domain adaptation research (cf. Daum\u00e9 III and Marcu 2006; Pan and Yang 2010). The domain adaptation setting assumes a target domain (for which an accurate model should be learned with a limited amount of labeled training data), a source domain (with characteristics different from the target and for which a substantial amount of labeled data is available), and an arbitrary supervised learning model that exploits both the target and source domain data in order to learn an improved target domain model.",
"cite_spans": [
{
"start": 446,
"end": 471,
"text": "Daum\u00e9 III and Marcu 2006;",
"ref_id": "BIBREF4"
},
{
"start": 472,
"end": 490,
"text": "Pan and Yang 2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The success of domain adaptation mainly depends on two factors: (i) the similarity of the target and source domains (the two domains should be sufficiently similar to allow knowledge transfer); and (ii) the application of an efficient domain adaptation method (which permits the learning algorithm to exploit the commonalities of the domains while preserving the special characteristics of the target domain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "As our second main contribution, we study the impact of domain differences on uncertainty detection, how this impact depends on the distance between the target and source data in terms of their domains and genres, and how these differences can be reduced to produce accurate target domain models with limited annotation effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Because previously existing resources exhibited fundamental differences that made domain adaptation difficult, 1 to our knowledge this is the first study to analyze domain differences and adaptability in the context of uncertainty detection in depth, and also the first study to report consistently positive results in cross-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The main contributions of the current paper can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We provide a uniform subcategorization of semantic uncertainty (with definitions, examples, and test batteries for annotation) and classify all major previous studies on uncertainty corpus annotation into the proposed categorization system, in order to reveal and analyze the differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We provide a harmonized, fine-grained reannotation of three corpora, according to the suggested subcategorization, to allow an in-depth analysis of the domain-dependent aspects of uncertainty detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We compare the two state-of-the-art approaches to uncertainty cue detection (i.e., one based on token classification and one based on sequence labeling models), using a shared feature set, in the context of the CoNLL-2010 shared task, to understand their strengths and weaknesses. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We train an accurate semantic uncertainty detector that distinguishes four fine-grained categories of semantic uncertainty (epistemic, doxastic, investigation, and condition types) and thus is better for future applications in various domains than previous models. Our experiments reveal that, similar to the best model of the CoNLL-2010 shared task for biological texts but in a fine-grained context, shallow features provide good results in recognizing semantic uncertainty. We also show that this representation is less suited to detecting discourse-level uncertainty (which was part of the CoNLL task for Wikipedia texts).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We examine in detail the differences between domains and genres as regards the language used to express semantic uncertainty, and learn how the domain or genre distance affects uncertainty recognition in texts with unseen characteristics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We apply domain adaptation techniques to fully exploit out-of-domain data and minimize annotation costs to adapt to a new domain, and we report successful results for various text domains and genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of the paper is structured as follows. In Section 2, our classification of uncertainty phenomena is presented in detail and it is compared with the concept of uncertainty used in existing corpora. A framework for detecting semantic uncertainty is then presented in Section 3. Related work on cue detection is summarized in Section 4, which is followed by a description of our cue recognition system and a presentation of our experimental set-up using various source and target genre and domain pairs for cross-domain learning and domain adaptation in Section 5. Our results are elaborated on in Section 6 with a focus on the effect of domain similarities and on the annotation effort needed to cover a new domain. We then conclude with a summary of our results and make some suggestions for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In order to be able to introduce and discuss our data sets, experiments, and findings, we have to clarify our understanding of the term uncertainty. Uncertainty, in its most general sense, can be interpreted as lack of information: The receiver of the information (i.e., the hearer or the reader) cannot be certain about some pieces of information. In this respect, uncertainty differs from both factuality and negation; as regards the former, the hearer/reader is sure that the information is true, and as for the latter, he is sure that the information is not true. From the viewpoint of computer science, uncertainty emerges due to partial observability, nondeterminism, or both (Russell and Norvig 2010). Linguistic theories usually associate the notion of modality with uncertainty: Epistemic modality encodes how much certainty or evidence a speaker has for the proposition expressed by his utterance (Palmer 1986) or it refers to a possible state of the world in which the given proposition holds (Kiefer 2005). The common point in these approaches is that in the case of uncertainty, the truth value/reliability of the proposition cannot be decided because some other piece of information is missing. Thus, in our understanding, uncertain propositions are those whose truth value or reliability cannot be determined due to a lack of information.",
"cite_spans": [
{
"start": 679,
"end": 704,
"text": "(Russell and Norvig 2010)",
"ref_id": null
},
{
"start": 905,
"end": 918,
"text": "(Palmer 1986)",
"ref_id": "BIBREF28"
},
{
"start": 1002,
"end": 1015,
"text": "(Kiefer 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Phenomenon Uncertainty",
"sec_num": "2."
},
{
"text": "In the following, we focus on semantic uncertainty and we suggest a tentative classification of several types of semantic uncertainty. Our classification is grounded on the knowledge of existing corpora and uncertainty recognition tools and our chief goal here is to provide a computational linguistics-oriented classification. With this in mind, our subclasses are intended to be well-defined and easily identifiable by automatic tools. Moreover, this classification allows different applications to choose the subset of phenomena to be recognized in accordance with their main task (i.e., we tried to avoid an overly coarse or fine-grained categorization).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Phenomenon Uncertainty",
"sec_num": "2."
},
{
"text": "Several corpora annotated for uncertainty have been published in different domains such as biology (Medlock and Briscoe 2007; Kim, Ohta, and Tsujii 2008; Settles, Craven, and Friedland 2008; Shatkay et al. 2008; Vincze et al. 2008; Nawaz, Thompson, and Ananiadou 2010) , medicine (Uzuner, Zhang, and Sibanda 2009) , news media (Rubin, Liddy, and Kando 2005; Wilson 2008; Saur\u00ed and Pustejovsky 2009; Rubin 2010) , and encyclopedia (Farkas et al. 2010) . As can be seen from publicly available annotation guidelines, there are many overlaps but differences as well in the understanding of uncertainty, which is sometimes connected to domain-and genre-specific features of the texts. Here we introduce a domain-and genre-independent classification of several types of semantic uncertainty, which was inspired by both theoretical and computational linguistic considerations.",
"cite_spans": [
{
"start": 99,
"end": 125,
"text": "(Medlock and Briscoe 2007;",
"ref_id": "BIBREF23"
},
{
"start": 126,
"end": 153,
"text": "Kim, Ohta, and Tsujii 2008;",
"ref_id": "BIBREF19"
},
{
"start": 154,
"end": 190,
"text": "Settles, Craven, and Friedland 2008;",
"ref_id": "BIBREF38"
},
{
"start": 191,
"end": 211,
"text": "Shatkay et al. 2008;",
"ref_id": "BIBREF38"
},
{
"start": 212,
"end": 231,
"text": "Vincze et al. 2008;",
"ref_id": "BIBREF49"
},
{
"start": 232,
"end": 268,
"text": "Nawaz, Thompson, and Ananiadou 2010)",
"ref_id": "BIBREF26"
},
{
"start": 280,
"end": 313,
"text": "(Uzuner, Zhang, and Sibanda 2009)",
"ref_id": "BIBREF45"
},
{
"start": 327,
"end": 357,
"text": "(Rubin, Liddy, and Kando 2005;",
"ref_id": "BIBREF33"
},
{
"start": 358,
"end": 370,
"text": "Wilson 2008;",
"ref_id": "BIBREF50"
},
{
"start": 371,
"end": 398,
"text": "Saur\u00ed and Pustejovsky 2009;",
"ref_id": "BIBREF37"
},
{
"start": 399,
"end": 410,
"text": "Rubin 2010)",
"ref_id": "BIBREF32"
},
{
"start": 430,
"end": 450,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Uncertainty Types",
"sec_num": "2.1"
},
{
"text": "Types of uncertainty. FB = FactBank; Genia = Genia Event; Rubin = the data set described in Rubin, Liddy, and Kando (2005); META = the data set described in Nawaz, Thompson, and Ananiadou (2010); Medlock = the data set described in Medlock and Briscoe (2007); Shatkay = the data set described in Shatkay et al. (2008); Settles = the data set described in Settles et al. (2008).",
"cite_spans": [
{
"start": 92,
"end": 122,
"text": "Rubin, Liddy, and Kando (2005)",
"ref_id": "BIBREF33"
},
{
"start": 165,
"end": 194,
"text": "Thompson and Ananiadou (2010)",
"ref_id": "BIBREF26"
},
{
"start": 233,
"end": 259,
"text": "Medlock and Briscoe (2007)",
"ref_id": "BIBREF23"
},
{
"start": 298,
"end": 319,
"text": "Shatkay et al. (2008)",
"ref_id": "BIBREF38"
},
{
"start": 358,
"end": 379,
"text": "Settles et al. (2008)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "Based on corpus data and annotation principles, the expression uncertainty can be used as an umbrella term for covering phenomena at the semantic and discourse levels. 3 Our classification of semantic uncertainty is assumed to be language-independent, but our examples presented here come from the English language, to keep matters simple.",
"cite_spans": [
{
"start": 168,
"end": 169,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "Semantically uncertain propositions can be defined in terms of truth conditional semantics. They cannot be assigned a truth value (i.e., it cannot be stated for sure whether they are true or false) given the speaker's current mental state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "Semantic-level uncertainty can be subcategorized into epistemic and hypothetical (see Figure 1). The main difference between epistemic and hypothetical uncertainty is that whereas instances of hypothetical uncertainty can be true, false, or uncertain, epistemically uncertain propositions are definitely uncertain: In terms of possible worlds, hypothetical propositions allow that the proposition can be false in the actual world, but in the case of epistemic uncertainty the factuality of the proposition is not known.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "In the case of epistemic uncertainty, it is known that the proposition is neither true nor false: It describes a possible world where the proposition holds but this possible world does not coincide with the speaker's actual world. In other words, it is certain that the proposition is uncertain. Epistemic uncertainty is related to epistemic modality: A sentence is epistemically uncertain if on the basis of our world knowledge we cannot decide at the moment whether it is true or false (hence the name) (Kiefer 2005). The source of an epistemically uncertain proposition cannot claim the uncertain proposition and be sure about its opposite at the same time.",
"cite_spans": [
{
"start": 505,
"end": 518,
"text": "(Kiefer 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "(3) EPISTEMIC: It may be raining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "As for hypothetical uncertainty, the truth value of the propositions cannot be determined either, and nothing can be said about the probability of their happening. Propositions under investigation are an example of such statements: Until further analysis, the truth value of the proposition in question cannot be stated. Conditionals can also be classified as instances of hypotheses. It is also common in these two types of uncertain propositions that the speaker can utter them while it is certain (for others or even for him) that their opposite holds; hence they can be called instances of paradoxical uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "Hypothetical uncertainty is connected with non-epistemic types of modality as well. Doxastic modality expresses the speaker's beliefs, which may be known to be true or false by others in the current state of the world. Necessity (duties, obligation, orders) is the main objective of deontic modality; dispositional modality is determined by the dispositions (i.e., physical abilities) of the person involved; and circumstantial modality is defined by external circumstances. Buletic modality is related to wishes, intentions, plans, and desires. An umbrella term for deontic, dispositional, circumstantial, and buletic modality is dynamic modality (Kiefer 2005). Conditions and instances of dynamic modality are related to the future: In the future, they may happen, but at the moment it is not clear whether they will take place or not / whether they are true, false, or uncertain.",
"cite_spans": [
{
"start": 647,
"end": 660,
"text": "(Kiefer 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Tentative Classification.",
"sec_num": "2.1.1"
},
{
"text": "The feasibility of the classification proposed in this study can be justified by mapping the annotation schemes used in other existing corpora to our subcategorizations of uncertainty. This systematic comparison also highlights the major differences between existing works and partly explains why examples of successful cross-domain application of existing resources and models are hard to find in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with other Classifications",
"sec_num": "2.1.2"
},
{
"text": "Most of the annotations found in biomedical corpora (Medlock and Briscoe 2007; Settles, Craven, and Friedland 2008; Shatkay et al. 2008; Thompson et al. 2008; Nawaz, Thompson, and Ananiadou 2010) fall into the epistemic uncertainty class. BioScope (Vincze et al. 2008) annotations mostly belong to the epistemic uncertainty category, with the exception of clausal hypotheses (i.e., hypotheses that are expressed by a clause headed by if or whether), which are instances of the investigation class. The probable class of Genia Event (Kim, Ohta, and Tsujii 2008) is of the epistemically uncertain type and the doubtful class belongs to the investigation class. Rubin, Liddy, and Kando (2005) consider uncertainty as a phenomenon belonging to epistemic modality: The high, moderate, and low levels of certainty coincide with our epistemic uncertainty category. The speculation annotations of the MPQA corpus also belong to the epistemic uncertainty class, with four levels (Wilson 2008) . The probable and possible classes found in FactBank (Saur\u00ed and Pustejovsky 2009) are of the epistemically uncertain type, events with a generic source belong to discourse-level uncertainty, whereas underspecified events are classified as hypothetical uncertainty in our system as, by definition, their truth value cannot be determined. WikiWeasel (Farkas et al. 2010) contains annotation for epistemic uncertainty, but discourse-level uncertainty is also annotated in the corpus (see Figure 1 for an overview). The categories used for the machine reading task described in Morante and Daelemans (2011) also overlap with our fine-grained classes: Uncertain events in their system fall into our epistemic uncertainty class. Their modal events expressing purpose, need, obligation, or desire are instances of dynamic modality, whereas their conditions are understood in a similar way to our condition class. The modality types listed in Baker et al. (2010) can be classified as types of dynamic modality, except for their belief category. Instances of the latter category are either certain (It is certain that he met the president) or epistemic or doxastic modality in our system.",
"cite_spans": [
{
"start": 52,
"end": 78,
"text": "(Medlock and Briscoe 2007;",
"ref_id": "BIBREF23"
},
{
"start": 79,
"end": 115,
"text": "Settles, Craven, and Friedland 2008;",
"ref_id": "BIBREF38"
},
{
"start": 116,
"end": 136,
"text": "Shatkay et al. 2008;",
"ref_id": "BIBREF38"
},
{
"start": 137,
"end": 158,
"text": "Thompson et al. 2008;",
"ref_id": "BIBREF43"
},
{
"start": 159,
"end": 195,
"text": "Nawaz, Thompson, and Ananiadou 2010)",
"ref_id": "BIBREF26"
},
{
"start": 248,
"end": 268,
"text": "(Vincze et al. 2008)",
"ref_id": "BIBREF49"
},
{
"start": 532,
"end": 560,
"text": "(Kim, Ohta, and Tsujii 2008)",
"ref_id": "BIBREF19"
},
{
"start": 659,
"end": 689,
"text": "Rubin, Liddy, and Kando (2005)",
"ref_id": "BIBREF33"
},
{
"start": 970,
"end": 983,
"text": "(Wilson 2008)",
"ref_id": "BIBREF50"
},
{
"start": 1038,
"end": 1066,
"text": "(Saur\u00ed and Pustejovsky 2009)",
"ref_id": "BIBREF37"
},
{
"start": 1333,
"end": 1353,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
},
{
"start": 1559,
"end": 1587,
"text": "Morante and Daelemans (2011)",
"ref_id": "BIBREF25"
},
{
"start": 1920,
"end": 1939,
"text": "Baker et al. (2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 1470,
"end": 1478,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with other Classifications",
"sec_num": "2.1.2"
},
{
"text": "We assume that the nature of the lexical unit determines the type of uncertainty it represents, that is, semantic uncertainty is highly lexical in nature. The part of speech of the uncertainty cue candidates serves as the basis for categorization, similar to the ones found in Hyland (1994, 1996, 1998) and Rizomilioti (2006). In English, modality is often associated with modal auxiliaries (Palmer 1979), but, as Table 1 shows, there are many other parts of speech that can express uncertainty. It should be added that for some cues it is the context, rather than the given lexical item, that determines which subclass of uncertainty the cue refers to; for example, may can denote epistemic modality (It may rain...) or dynamic modality (Now you may open the door). These categories are listed in Table 1.",
"cite_spans": [
{
"start": 277,
"end": 289,
"text": "Hyland (1994",
"ref_id": "BIBREF12"
},
{
"start": 290,
"end": 305,
"text": "Hyland ( , 1996",
"ref_id": "BIBREF13"
},
{
"start": 306,
"end": 321,
"text": "Hyland ( , 1998",
"ref_id": "BIBREF14"
},
{
"start": 326,
"end": 344,
"text": "Rizomilioti (2006)",
"ref_id": "BIBREF31"
},
{
"start": 411,
"end": 423,
"text": "(Palmer 1979",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 435,
"end": 442,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 820,
"end": 827,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Types of Semantic Uncertainty Cues",
"sec_num": "2.2"
},
{
"text": "In our model, uncertainty detection is a standalone task that is largely independent of the underlying application. In this section, we briefly discuss how uncertainty detection can be incorporated into an information extraction task, which is probably the most relevant application area (see Kim et al. [2009] for more details). In the information extraction context, the key steps of recognizing uncertain propositions are locating the cues, disambiguating them (as not all occurrences of the cues indicate uncertainty; recall the example of evaluate), and finally linking them with the textual representation of the propositions in question. We note here that marking the textual representations of important propositions (often referred to as events in information extraction) is actually the main goal of an information extraction system, hence we will not focus on their identification and just assume that they are already marked in texts.",
"cite_spans": [
{
"start": 293,
"end": 310,
"text": "Kim et al. [2009]",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for Detecting Semantic Uncertainty",
"sec_num": "3."
},
{
"text": "The following is an example that demonstrates the process of uncertainty detection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for Detecting Semantic Uncertainty",
"sec_num": "3."
},
{
"text": "(8) In this study we hypothesized CUE that the phosphorylation of TRAF2 inhibits EVENT binding to the CD40 cytoplasmic domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for Detecting Semantic Uncertainty",
"sec_num": "3."
},
{
"text": "Here the EVENT mark-up is produced by the information extraction system, and uncertainty detection consists of i) the recognition of the cue word hypothesized, and determining whether it denotes uncertainty in this specific case (producing the CUE mark-up) and ii) determining whether the cue hypothesized modifies the event triggered by inhibits or not (positive example in this case).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Framework for Detecting Semantic Uncertainty",
"sec_num": "3."
},
{
"text": "The cue detection and disambiguation problem can be essentially regarded as a token labeling problem. Here the task is to assign a label to each of the tokens of a sentence in question according to whether it is the starting token of an uncertainty cue (B-CUE TYPE), an inside token of a cue (I-CUE TYPE), or it is not part of any cue (O). Most previous studies assume a binary classification task, namely, each token is either part of an uncertainty cue, or it is not a cue. For fine-grained uncertainty detection, a different label has to be used for each uncertainty type to be distinguished. This way, the label sequence of a sentence naturally identifies all uncertainty cues (with their types) in the sentence, and disambiguation is solved jointly with recognition. Because the uncertainty cue vocabulary and the distribution of certain and uncertain senses of cues vary in different domains and genres, uncertainty cue detection and disambiguation are the main focus of the current study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Cue Detection and Disambiguation",
"sec_num": "3.1"
},
{
"text": "The task of linking the detected uncertainty cues to propositions can be formulated as a binary classification task over uncertainty cue and event marker pairs. The relation holds and is considered true if the cue modifies the truth value (confidence) of the event; it does not hold and is considered false if the cue does not have any impact on the interpretation of the event. That is, the pair (hypothesized, inhibits) in Example (8) is an instance of positive relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Uncertainty Cues to Propositions",
"sec_num": "3.2"
},
{
"text": "The linking of uncertainty cues and event markers can be established by using dependency grammar rules (i.e., the problem is mainly syntax driven). As the grammatical properties of the language are similar in various domains and genres, this task is largely domain-independent, as opposed to the recognition and disambiguation task. Because of this, we sketch the most important matching patterns, but do not address the linking task in great detail here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Uncertainty Cues to Propositions",
"sec_num": "3.2"
},
{
"text": "The following are the characteristic rules that can be used to link uncertainty cues to event markers. For practical implementations of heuristic cue/event matching, see Chapman, Chu, and Dowling (2007) and Kilicoglu and Bergler (2009) .",
"cite_spans": [
{
"start": 170,
"end": 202,
"text": "Chapman, Chu, and Dowling (2007)",
"ref_id": "BIBREF0"
},
{
"start": 207,
"end": 235,
"text": "Kilicoglu and Bergler (2009)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Uncertainty Cues to Propositions",
"sec_num": "3.2"
},
{
"text": "r If the event clue has an uncertain verb, noun, preposition, or auxiliary as a (not necessarily direct) parent in the dependency graph of the sentence, the event is regarded as uncertain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Uncertainty Cues to Propositions",
"sec_num": "3.2"
},
{
"text": "r If the event clue has an uncertain adverb or adjective as its child, it is treated as uncertain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linking Uncertainty Cues to Propositions",
"sec_num": "3.2"
},
{
"text": "Here we review the published works related to uncertainty cue detection. Earlier studies focused either on in-domain cue recognition for a single domain or on cue lexicon extraction from large corpora. The latter approach is applicable to multiple domains, but does not address the disambiguation of uncertain and other meanings of the extracted cue words. We are also aware of several studies that discussed the differences of cue distributions in various domains, without developing a cue detector. To the best of our knowledge, our study is the first to address the genre-and domain-adaptability of uncertainty cue recognition systems and thus uncertainty detection in a general context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Uncertainty Cue Detection",
"sec_num": "4."
},
{
"text": "We should add that there are plenty of studies on end-application oriented uncertainty detection, that is, how to utilize the recognized cues (see, for instance, Kilicoglu and Bergler [2008] , Uzuner, Zhang, and Sibanda [2009] and Saur\u00ed [2008] for information extraction or Farkas and Szarvas [2008] for document labeling applications), and a recent pilot task sought to exploit negation and hedge cue detectors in machine reading (Morante and Daelemans 2011) . As the focus of our paper is cue recognition, however, we omit their detailed description here.",
"cite_spans": [
{
"start": 184,
"end": 190,
"text": "[2008]",
"ref_id": null
},
{
"start": 193,
"end": 226,
"text": "Uzuner, Zhang, and Sibanda [2009]",
"ref_id": "BIBREF45"
},
{
"start": 231,
"end": 243,
"text": "Saur\u00ed [2008]",
"ref_id": "BIBREF36"
},
{
"start": 274,
"end": 299,
"text": "Farkas and Szarvas [2008]",
"ref_id": "BIBREF6"
},
{
"start": 431,
"end": 459,
"text": "(Morante and Daelemans 2011)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work on Uncertainty Cue Detection",
"sec_num": "4."
},
{
"text": "In-domain uncertainty detectors have been developed since the mid 1990s. Most of these systems use hand-crafted lexicons for cue recognition and they treat each occurrence of the lexicon items as a cue-that is, they do not address the problem of disambiguating cues (Friedman et al. 1994; Light, Qiu, and Srinivasan 2004; Farkas and Szarvas 2008; Saur\u00ed 2008; Conway, Doan, and Collier 2009; Van Landeghem et al. 2009) . ConText (Chapman, Chu, and Dowling 2007) uses regular expressions to define cues and \"pseudo-triggers\". A pseudo-trigger is a superstring of a cue and it is basically used for recognizing contexts where a cue does not imply uncertainty (i.e., it can be regarded as a hand-crafted cue disambiguation module). MacKinlay, Martinez, and Baldwin (2009) introduced a system which also used non-consecutive tokens as cues (like not+as+yet).",
"cite_spans": [
{
"start": 266,
"end": 288,
"text": "(Friedman et al. 1994;",
"ref_id": "BIBREF9"
},
{
"start": 289,
"end": 321,
"text": "Light, Qiu, and Srinivasan 2004;",
"ref_id": "BIBREF21"
},
{
"start": 322,
"end": 346,
"text": "Farkas and Szarvas 2008;",
"ref_id": "BIBREF6"
},
{
"start": 347,
"end": 358,
"text": "Saur\u00ed 2008;",
"ref_id": "BIBREF36"
},
{
"start": 359,
"end": 390,
"text": "Conway, Doan, and Collier 2009;",
"ref_id": "BIBREF2"
},
{
"start": 391,
"end": 417,
"text": "Van Landeghem et al. 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Cue Detection",
"sec_num": "4.1"
},
{
"text": "Utilizing manually labeled corpora, machine learning-based uncertainty cue detectors have also been developed (to the best of our knowledge each of them uses an in-domain training data set). They use token classification Clausen 2010; Fernandes, Crestana, and Milidi\u00fa 2010; S\u00e1nchez, Li, and Vogel 2010) or sequence labeling approaches (Li et al. 2010; Rei and Briscoe 2010; Tang et al. 2010; Zhang et al. 2010) . In both cases the tokens are labeled according to whether they are part of a cue. The latter assigns a label sequence to a sentence (a sequence of tokens) thus it naturally deals with the context of a particular word. On the other hand, context information for a token is built into the feature space of the token classification approaches. \u00d6zg\u00fcr and Radev (2009) and Velldal (2010) match cues from a lexicon then apply a binary classifier based on features describing the context of the cue candidate.",
"cite_spans": [
{
"start": 221,
"end": 234,
"text": "Clausen 2010;",
"ref_id": "BIBREF1"
},
{
"start": 235,
"end": 273,
"text": "Fernandes, Crestana, and Milidi\u00fa 2010;",
"ref_id": "BIBREF8"
},
{
"start": 274,
"end": 302,
"text": "S\u00e1nchez, Li, and Vogel 2010)",
"ref_id": "BIBREF35"
},
{
"start": 335,
"end": 351,
"text": "(Li et al. 2010;",
"ref_id": "BIBREF20"
},
{
"start": 352,
"end": 373,
"text": "Rei and Briscoe 2010;",
"ref_id": "BIBREF30"
},
{
"start": 374,
"end": 391,
"text": "Tang et al. 2010;",
"ref_id": "BIBREF42"
},
{
"start": 392,
"end": 410,
"text": "Zhang et al. 2010)",
"ref_id": "BIBREF51"
},
{
"start": 754,
"end": 776,
"text": "\u00d6zg\u00fcr and Radev (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Cue Detection",
"sec_num": "4.1"
},
{
"text": "Each of these approaches uses a rich feature representation for tokens, which usually includes surface-level, part-of-speech, and chunk-level features. A few systems have also used dependency relation types originating at the cue (Rei and Briscoe 2010; S\u00e1nchez, Li, and Vogel 2010; Velldal, \u00d8vrelid, and Oepen 2010; Zhang et al. 2010) ; the CoNLL-2010 Shared Task final ranking suggests that it has only a limited impact on the performance of an entire system (Farkas et al. 2010) , however.\u00d6zg\u00fcr and Radev 2009further extended the feature set with the other cues that occur in the same sentence as the cue, and positional features such as the section header of the article in which the cue occurs (the latter is only defined for scientific publications). Velldal (2010) argues that the dimensionality of the uncertainty cue detection feature space is too high and reports improvements by using the sparse random indexing technique. Ganter and Strube (2009) proposed a rather different approach for (weasel) cue detection-exploiting weasel tags 4 in Wikipedia articles given by editors. They used syntax-based patterns to recognize the internal structure of the cues, which has proved useful as discourse-level uncertainty cues are usually long and have a complex internal structure (as opposed to semantic uncertainty cues).",
"cite_spans": [
{
"start": 230,
"end": 252,
"text": "(Rei and Briscoe 2010;",
"ref_id": "BIBREF30"
},
{
"start": 253,
"end": 281,
"text": "S\u00e1nchez, Li, and Vogel 2010;",
"ref_id": "BIBREF35"
},
{
"start": 282,
"end": 315,
"text": "Velldal, \u00d8vrelid, and Oepen 2010;",
"ref_id": "BIBREF48"
},
{
"start": 316,
"end": 334,
"text": "Zhang et al. 2010)",
"ref_id": "BIBREF51"
},
{
"start": 460,
"end": 480,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
},
{
"start": 756,
"end": 770,
"text": "Velldal (2010)",
"ref_id": "BIBREF47"
},
{
"start": 933,
"end": 957,
"text": "Ganter and Strube (2009)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Cue Detection",
"sec_num": "4.1"
},
{
"text": "As can be seen, uncertainty cue detectors have mostly been developed in the biological and medical domains. All of these studies, however, focus on only one domain, namely, in-domain cue detection is carried out, which assumes the availability of a training data set of sufficient size. The only exception we are aware of is the CoNLL-2010 Shared Task (Farkas et al. 2010) , where participants had the chance to use Wikipedia data on biomedical domain and vice versa. Probably due to the differences in the annotated uncertainty types and the stylistic and topical characteristics of the texts, very few participants performed cross-domain experiments and reported only limited success (see Section 5.3.2 for more on this).",
"cite_spans": [
{
"start": 352,
"end": 372,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Cue Detection",
"sec_num": "4.1"
},
{
"text": "Overall, the findings of these studies indicate that disambiguating cue candidates is an important aspect of uncertainty detection and that the domain specificity of disambiguation models and domain adaptation in general are largely unexplored problems in uncertainty detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "In-Domain Cue Detection",
"sec_num": "4.1"
},
{
"text": "Similar to our approach, several studies have addressed the problem of developing an uncertainty detector for a new domain using as little annotation effort as possible. The aim of these studies is to identify uncertain sentences; this is carried out by semiautomatic construction of cue lexicons. The weakly supervised approaches start with very small seed sets of annotated certain and uncertain sentences, and use bootstrapping to induce a suitable training corpus in an automatic way. Such approaches collect potentially certain and uncertain sentences from a large unlabeled pool based on their similarity to the instances in the seed sets (Medlock and Briscoe 2007) , or based on the known errors of an information extraction system that is itself sensitive to uncertain texts (Szarvas 2008) . Further instances are then collected (in an iterative fashion) on the basis of their similarity to the current training instances. Based on the observation that uncertain sentences tend to contain more than one uncertainty cue, these models successfully extend the seed sets with automatically labeled sentences, and can produce an uncertainty classifier with a sentence-level F-score of 60-80% for the uncertain class, given that the texts of the seed examples, the unlabeled pool, and the actual evaluation data share very similar properties. Szarvas (2008) showed that these models essentially learn the uncertainty lexicon (set of cues) of the given domain, but are otherwise unable to disambiguate the potential cue words-that is, to distinguish between the uncertain and certain uses of the previously seen cues. This deficiency of the derived models is inherent to the bootstrapping process, which considers all occurrences of the cue candidates as good candidates for positive examples (as opposed to unlabeled sentences without any previously seen cue words).",
"cite_spans": [
{
"start": 645,
"end": 671,
"text": "(Medlock and Briscoe 2007)",
"ref_id": "BIBREF23"
},
{
"start": 783,
"end": 797,
"text": "(Szarvas 2008)",
"ref_id": "BIBREF40"
},
{
"start": 1345,
"end": 1359,
"text": "Szarvas (2008)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weakly Supervised Extraction of Cue Lexicon",
"sec_num": "4.2"
},
{
"text": "Kilicoglu and Bergler 2008proposed a semi-automatic method to expand a seed cue lexicon. Their linguistically motivated approach is also based on the weakly supervised induction of a corpus of uncertain sentences. It exploits the syntactic patterns of uncertain sentences to identify new cue candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weakly Supervised Extraction of Cue Lexicon",
"sec_num": "4.2"
},
{
"text": "The previous studies on weakly supervised approaches to uncertainty detection did not tackle the problem of disambiguating the certain and uncertain uses of cue candidates, which is a major drawback from a practical point of view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weakly Supervised Extraction of Cue Lexicon",
"sec_num": "4.2"
},
{
"text": "Besides automatic uncertainty recognition, several studies investigated the distribution of hedge cues in scientific papers from different domains (Hyland 1998; Falahati 2006; Rizomilioti 2006) . The effect of different domains on the frequency of uncertain expressions was examined in Rizomilioti (2006) . Based on a previously defined dictionary of hedge cues, she analyzed the linguistic tools expressing epistemic modality in research papers from three domains, namely, archeology, literary criticism, and biology. Her results indicated that archaeological papers tend to contain the most uncertainty cues (which she calls downtoners) and the fewest uncertainty cues can be found in literary criticism papers. Different academic disciplines were contrasted in Hyland (1998) from the viewpoint of hedging: Papers belonging to the humanities contain significantly more hedging devices than papers in sciences. It is interesting to note, however, that in both studies, biological papers are situated in the middle as far as the percentage rate of uncertainty cues is concerned. Falahati (2006) examined hedges in research articles in medicine, chemistry, and psychology and concluded that it is psychology articles that contain the most hedges.",
"cite_spans": [
{
"start": 147,
"end": 160,
"text": "(Hyland 1998;",
"ref_id": "BIBREF14"
},
{
"start": 161,
"end": 175,
"text": "Falahati 2006;",
"ref_id": "BIBREF5"
},
{
"start": 176,
"end": 193,
"text": "Rizomilioti 2006)",
"ref_id": "BIBREF31"
},
{
"start": 286,
"end": 304,
"text": "Rizomilioti (2006)",
"ref_id": "BIBREF31"
},
{
"start": 764,
"end": 777,
"text": "Hyland (1998)",
"ref_id": "BIBREF14"
},
{
"start": 1079,
"end": 1094,
"text": "Falahati (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Distribution Analyses",
"sec_num": "4.3"
},
{
"text": "Overall, these studies demonstrate that there are substantial differences in the way different technical/scientific domains and different genres express uncertainty in general, and in the use of semantic uncertainty in particular. Differences are found not just in the use of different vocabulary for expressing uncertainty, but also in the frequency of certain and uncertain usage of particular uncertainty cues. These findings underpin the practical importance of domain portability and domain adaptation of uncertainty detectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cue Distribution Analyses",
"sec_num": "4.3"
},
{
"text": "In this section, we present our uncertainty cue detector and the results of the cross-genre and -domain experiments carried out by us. Before describing our model and discussing the results of the experiments, a short overview of the texts used as training and test data sets will be given along with an empirical analysis of the sense distributions of the most frequent cues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Cue Recognition",
"sec_num": "5."
},
{
"text": "In our investigations, we selected three corpora (i.e., BioScope, WikiWeasel, and Fact-Bank) from different domains (biomedical, encyclopedia, and newswire, respectively). Genres also vary in the corpora (in the scientific genre, there are papers and abstracts whereas the other corpora contain pieces of news and encyclopedia articles). We preferred corpora on which earlier experiments had been carried out because this allowed us to compare our results with those of previous studies. This selection makes it possible to investigate domain and genre differences because each domain has its characteristic language use (which might result in differences in cue distribution) and different genres also require different writing strategies (e.g., in abstracts, implications of experimental results are often emphasized, which usually involves the use of uncertain language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "5.1"
},
{
"text": "The BioScope corpus (Vincze et al. 2008) contains clinical texts as well as biological texts from full papers and scientific abstracts; the texts were manually annotated for hedge cues and their scopes. In our experiments, 15 other papers annotated for the CoNLL-2010 Shared Task (Farkas et al. 2010) were also added to the set of BioScope papers. The WikiWeasel corpus (Farkas et al. 2010) was also used in the CoNLL-2010 Shared Task and it was manually annotated for weasel cues and semantic uncertainty in randomly selected paragraphs taken from Wikipedia articles. The FactBank corpus contains texts from the newswire domain (Saur\u00ed and Pustejovsky 2009) . Events are annotated in the data set and they are evaluated on the basis of their factuality from the viewpoint of their sources. Table 2 provides statistical data on the three corpora. Because in our experimental set-up, texts belonging to different genres also play an important role, data on abstracts and papers are included separately.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Vincze et al. 2008)",
"ref_id": "BIBREF49"
},
{
"start": 280,
"end": 300,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
},
{
"start": 370,
"end": 390,
"text": "(Farkas et al. 2010)",
"ref_id": "BIBREF7"
},
{
"start": 629,
"end": 657,
"text": "(Saur\u00ed and Pustejovsky 2009)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 790,
"end": 797,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "5.1"
},
{
"text": "Texts found in the three corpora to be investigated can be categorized into three genres, which can be further divided to subgenres at a finer level of distinction. Figure 2 depicts this classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Genres and Domains.",
"sec_num": "5.1.1"
},
{
"text": "The majority of BioScope texts (papers and abstracts) belong to the scientific discourse genre. FactBank texts can be divided into broadcast and written news, and Wikipedia texts belong to the encyclopedia genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Genres and Domains.",
"sec_num": "5.1.1"
},
{
"text": "As for the domain of the texts, there are three broad domains, namely, biology, news, and encyclopedia. Once again, these domains can be further divided into narrower ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Genres and Domains.",
"sec_num": "5.1.1"
},
{
"text": "Genres of texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "Domains of texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "topics at a fine-grained level, which is shown in Figure 3 . All abstracts and five papers in BioScope are related to the MeSH terms human, blood cell, and transcription factor (hbc in Figure 3 ). Nine BMC Bioinformatics papers come from the bioinformatics domain (bmc in Figure 3) , and ten papers describe some experimental results on the Drosophila species (fly). FactBank news can be classified as stock news, political news, and criminal news. Encyclopedia articles cover a broad range of topics, hence no detailed classification is given here.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 3",
"ref_id": null
},
{
"start": 185,
"end": 193,
"text": "Figure 3",
"ref_id": null
},
{
"start": 272,
"end": 281,
"text": "Figure 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The Normalization of the Corpora. In order to uniformly evaluate our methods in each domain and genre (and each corpus), the evaluation data sets were normalized. This meant that cues had to be annotated in each data set and differentiated for types of semantic uncertainty. This resulted in the reannotation of BioScope, WikiWeasel, and FactBank. 5 In BioScope, the originally annotated cues were separated into epistemic cues and subtypes of hypothetical cues and instances of hypothetical uncertainty not yet marked were also annotated. In FactBank, epistemic and hypothetical cues were annotated: Uncertain events were matched with their uncertainty cues and instances of hypothetical uncertainty that were originally not annotated were also marked in the corpus. In the case of WikiWeasel, these two types of cues were separated from discourse-level cues. One class of hypothetical uncertainty (i.e., dynamic modality) was not annotated in any of the corpora. Although dynamic modality seems to play a role in the news domain, it is less important and less represented in the other two domains we investigated here. The other subclasses are more of general interest for the applications. For example, one of our training corpora comes from the scientific domain, where it is more important to distinguish facts from hypotheses and propositions under investigation (which can be later confirmed or rejected, compare the meta-knowledge annotation scheme developed for biological events [Nawaz, Thompson, and Ananiadou 2010] ), and from propositions that depend on each other (conditions).",
"cite_spans": [
{
"start": 348,
"end": 349,
"text": "5",
"ref_id": null
},
{
"start": 1489,
"end": 1526,
"text": "[Nawaz, Thompson, and Ananiadou 2010]",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5.1.2",
"sec_num": null
},
{
"text": "Corpora. An analysis of the cue distributions reveals some interesting trends that can be exploited in uncertainty detection across domains and genres. The most frequent cue stems in the (sub)corpora used in our study can be seen in Table 3 and they are responsible for about 74% of epistemic cue occurrences, 55% of doxastic cue occurrences, 70% of investigation cue occurrences, and 91% of condition cue occurrences.",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uncertainty Cues in the",
"sec_num": "5.1.3"
},
{
"text": "As can be seen, one of the most frequent epistemic cues in each corpus is may. If, possible, might, and suggest also occur frequently in our data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty Cues in the",
"sec_num": "5.1.3"
},
{
"text": "The distribution of the uncertainty cues was also analyzed from the perspective of uncertainty classes in each corpus, which is presented in Figure 4 . In most of the corpora, epistemic cues are the most frequent (except for FactBank) and they vary the most: Out of the 300 cue stems occurring in the corpora, 206 are epistemic cues. Comparing the domains, it can readily be seen that in biological texts, doxastic uncertainty is not frequent, which is especially true for abstracts, whereas in FactBank and WikiWeasel they cover about 27% of the data. The most frequent doxastic keywords exhibit some domain-specific differences, however: In BioScope, the most frequent ones include putative and hypothesis, which rarely occur in FactBank and WikiWeasel. Nevertheless, cues belonging to the investigation class can be found almost exclusively in scientific texts (89% of them are in BioScope), which can be expected because the aim of scientific publications is to examine whether a hypothesized phenomenon occurs. Among the most Table 3 The most frequent cues in the corpora. epist. = epistemic cue; dox. = doxastic cue; inv. = investigation cue; cond. = condition cue. ",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1031,
"end": 1038,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Uncertainty Cues in the",
"sec_num": "5.1.3"
},
{
"text": "Cue type distributions in the corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "frequent cues, investigate, examine, and study belong to this group. These data reveal that the frequency of doxastic and investigation cues is strongly domain-dependent, and this explains the fact that the investigation vocabulary is very limited in Factbank and WikiWeasel. Only about 10 cue stems belong to this uncertainty class in these corpora. The set of condition cue stems, however, is very small in each corpus; altogether 18 condition cue stems can be found in the data, although if and would are responsible for almost 75% of condition cue occurrences. It should also be mentioned that the percentage of condition cues is higher in FactBank than in the other corpora. Another interesting trend was observed when word forms were considered instead of stemmed forms: Certain verbs in third person singular (e.g., expects or believes) occur mostly in FactBank and WikiWeasel. The reason for this may be that when speaking about someone else's opinion in scientific discourse, the source of the opinion is usually provided in the form of references or citations-usually at the end of the sentence-and due to this, the verb is often used in the passive form, as in Example (9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "(9) It is currently believed that both RAG1 and RAG2 proteins were originally encoded by the same transposon recruited in a common ancestor of jawed vertebrates [3,12,13,16] .",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 173,
"text": "vertebrates [3,12,13,16]",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "In contrast, impersonal constructions are hardly used in news media, where the objective is to inform listeners about the source of the news presented as well in order to enable them to judge the reliability of a piece of news. Here, a clause including the source and a communication verb is usually attached to the proposition. A genre-related difference between scientific abstracts and full papers is that condition cues can rarely be found in abstracts, although they occur more frequently in papers (with the non-cue usage still being much more frequent). Another difference is the percentage of cues of the investigation type, which may be related to the structure of abstracts. Biological abstracts usually present the problem they examine and describe methods they use. This entails the application of predicates belonging to the investigation class of uncertainty. It can be argued, however, that scientific papers also have these characteristics but abstracts are much shorter than papers (generally, they contain about 10-12 sentences). Hence, investigation cues are responsible for a greater percentage of cues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "There are some lexical differences among the corpora that are related to domain or genre specificity. For instance, due to their semantics, the words charge, accuse, allege, fear, worry, and rumor are highly unlikely to occur in scientific publications, but they occur relatively often in news texts and in Wikipedia articles. As for lexical divergences between abstracts and papers, many of them are related to verbs of investigation and their different usage. In the corpora, verbs of investigations were marked only if it was not clear whether the event/phenomenon would take place or not. If it has already happened (The police are investigating the crime) or the existence of the thing under investigation can be stated with certainty, independently of the investigation (The top ten organisms were examined), then they are not instances of hypotheses, so they were not annotated. As the data sets make clear, there were some candidates of investigation verbs that occurred in the investigation sense mostly in abstracts but in another sense in papers, especially in the bmc data set (e.g. assess or examine). Evaluate also had a special mathematical sense in bmc papers, which did not occur in abstracts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "It can also be seen that some of the very frequent cues in papers do not occur (or only relatively rarely) in abstracts. This is especially true for the bmc data set, where can, if, would, could, and will are among the 15 most frequent cues and represent 23.21% of cue occurrences, but only 3.85% in abstracts. It is also apparent that the rate of epistemic cues is lower in bmc papers than in abstracts or other types of papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Genre-dependent characteristics can be analyzed if BioScope abstracts and hbc papers are compared because their fine-grained domain is the same. Thus, it may be assumed that differences between their cues are related to the genre. The sets of cues used are similar, but the sense distributions may differ for certain ambiguous cues. For instance, indicate mostly appears in the 'suggest' sense in abstracts, whereas in papers it is used in the 'signal' sense. Another difference is that the percentage rate of doxastic cues is almost twice as high in papers as in abstracts (10.6% and 5.7%, respectively). Besides these differences, the two data sets are quite similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "Domain-related differences can be analyzed when the three subdomains of biological papers are contrasted. As stressed earlier, bmc papers contain fewer instances of epistemic uncertainty, but condition cues occur more frequently in them. Nevertheless, fly and hbc papers are rather similar in these respects but hbc papers contain more investigation cues than the other two subcorpora. As regards lexical issues, the non-cue usage of possible in comparative constructions is more frequent in the bmc data set than in the other papers and many occurrences of if in bmc are related to definitions, which were not annotated as uncertain. On the basis of this information, the fly and the hbc domains seem to be more similar to each other than to the BMC data set from a linguistic point of view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "From the perspective of genre and domain adaptation, the following points should be highlighted concerning the distribution of uncertainty cues across corpora. Doxastic uncertainty is of primary importance in the news and encyclopedia domains whereas the investigation class is characteristic of the biological domain. Within the latter, there is a genre-related difference as well: It is the epistemic and investigation classes that are mainly present in abstracts whereas in papers cues belonging to other uncertainty classes can also be found. Thus, when applying techniques developed for biological texts or abstracts to news texts, for example, doxastic uncertainty cues deserve special attention as it might well be the case that there are insufficient training examples for this class of uncertainty cues. The adaptation of an uncertainty cue detector constructed for encyclopedia texts requires the special treatment of investigation cues, however, if, for instance, scientific discourse is the target genre since they are underrepresented in the source genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "As evaluation metrics, we used cue-level and sentence-level F \u03b2=1 scores for the uncertain class (the standard evaluation metrics of Task 1 of the CoNLL-2010 shared task) and denote them by F cue and F sent , respectively. We report cue-level F \u03b2=1 scores on the individual subcategories of uncertainty and the unlabeled (binary) F \u03b2=1 scores as well. A sentence is treated as uncertain (in the gold standard and prediction) iff it contains at least one cue. Note that the cue-level metric is quite strict as it is based on recognized phrases-that is, only cues with perfect boundary matches are true positives. For the sentence-level evaluation we simply labeled those sentences as uncertain that contained at least one recognized cue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
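The two evaluation metrics can be sketched in a few lines (a minimal illustration under our own assumptions, not the official shared-task scorer; the span representation and function names are ours):

```python
def f_beta1(tp, fp, fn):
    """Standard F(beta=1) from true positive, false positive, false negative counts."""
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def cue_level_f1(gold, pred):
    """gold/pred: one set of (start, end) cue spans per sentence.
    Only exact boundary matches count as true positives (strict matching)."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    return f_beta1(tp, fp, fn)

def sentence_level_f1(gold, pred):
    """A sentence counts as uncertain iff it contains at least one cue."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    return f_beta1(tp, fp, fn)
```

Note how a near-miss span such as a predicted (5, 8) against a gold (5, 9) costs both a false positive and a false negative at the cue level, while the sentence it occurs in can still count as correctly detected at the sentence level.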
{
"text": "In order to minimize the development cost of a labeled corpus and an uncertainty detector for a new genre/domain, we need to induce an accurate model from a minimal amount of labeled data, or take advantage of existing corpora for different genres and/or domains and use a domain adaptation approach. Experiments investigating the value and sufficiency of existing corpora (which are usually out-of-domain) and simple domain adaptation methods were carried out. For this purpose, we implemented a cue recognition model, which is described in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Domain Cue Recognition Model",
"sec_num": "5.3"
},
{
"text": "To train our models, we applied surface level (e.g., capitalization) and shallow syntactic features (part-of-speech tags and chunks) and avoided the use of lexicon-based features listing potential cue words, in order to reduce the domain dependence of the learned models. Now we will introduce our model, which is competitive with the stateof-the-art systems and focus on its domain adaptability. We will also describe the implementation details of the learning model and the features employed. We should add that the optimization of a cue detector was not the main focus of our study, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Domain Cue Recognition Model",
"sec_num": "5.3"
},
{
"text": "We extracted two types of features for each token to describe the token itself, together with its local context in a window of limited size (1, 2, or no window, depending on the feature).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "The first group consists of features describing the surface form of the tokens. Here we provide the list of the surface features with the corresponding window sizes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "r Stems of the tokens by the Porter stemmer in a window of size 2 (current token and two tokens to the left and right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "r Surface pattern of the tokens in a window of size one (current token and 1 token to the left and right). These patterns are similar to the word shape feature described in Sun et al. (2007) . This feature can describe the capitalization and other orthographic features as well. Patterns represent character sequences of the same type with one single character for a given word. There are six different pattern types denoting capitalized and lowercased character sequences with the characters \"A\" and \"a\", number sequences with \"0\", Greek letter sequences with \"G\" and \"g\", Roman numerals with \"R\" and \"r\", and non-alphanumerical characters with \"!\".",
"cite_spans": [
{
"start": 173,
"end": 190,
"text": "Sun et al. (2007)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
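The shape-pattern feature described above could be computed along these lines (a hedged sketch: the paper does not spell out its exact rules for Greek letters or Roman numerals, so the regular expression, the whole-token treatment of Roman numerals, and the character tests below are our assumptions):

```python
import re
import unicodedata

# Strict Roman numeral pattern; applied to the whole token, which also
# (as a known simplification) matches the English pronoun "I".
ROMAN = re.compile(r"^(?=[IVXLCDM])M*(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$")

def _char_class(c):
    """Map one character to its pattern class: 0, G/g, A/a, or !."""
    if c.isdigit():
        return "0"
    if "GREEK" in unicodedata.name(c, ""):
        return "G" if c.isupper() else "g"
    if c.isupper():
        return "A"
    if c.islower():
        return "a"
    return "!"

def word_shape(token):
    """Collapse each run of same-class characters into a single character."""
    if ROMAN.match(token):
        return "R"
    if ROMAN.match(token.upper()) and token.islower():
        return "r"
    classes = [_char_class(c) for c in token]
    return "".join(c for i, c in enumerate(classes) if i == 0 or c != classes[i - 1])
```

For example, a biomedical token such as `NF-kB` collapses to `A!aA`, and `XIV` to `R`.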
{
"text": "r Prefixes and suffixes of word forms from three to five characters long.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "The second group of features describes the syntactic properties of the token and its local context. The list of the syntactic features with the corresponding window sizes is the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "r Part-of-speech (POS) tags of the tokens by the C&C POS-tagger in a window of size 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "r Syntactic chunk of the tokens, as given by the C&C chunker, 6 and the chunk code of the tokens in a window of size 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
{
"text": "r Concatenated stem, POS, and chunk labels similar to the features used by Tang et al. (2010) . These feature strings were a combination of the stem and the chunk code of the current token, the stem of the current token combined with the POS-codes of the token left and right, and the chunk code of the current token with the stems of the neighboring tokens.",
"cite_spans": [
{
"start": 75,
"end": 93,
"text": "Tang et al. (2010)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Set.",
"sec_num": "5.3.1"
},
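Taken together, the windowed and concatenated features might be assembled per token roughly as follows (an illustrative sketch; the dictionary keys, the sentence representation, and the sentence-boundary placeholders are our own choices, not the authors'):

```python
def token_features(sent, i):
    """sent: list of dicts with 'stem', 'pos', 'chunk' per token.
    Emits stems/POS/chunk codes in a window of size 2, plus concatenated
    combinations in the spirit of Tang et al. (2010)."""
    feats = {}
    for off in range(-2, 3):
        j = i + off
        if 0 <= j < len(sent):
            feats[f"stem[{off}]"] = sent[j]["stem"]
            feats[f"pos[{off}]"] = sent[j]["pos"]
            feats[f"chunk[{off}]"] = sent[j]["chunk"]
    # Sentence-boundary placeholders for the concatenated features.
    pad_l = {"stem": "<S>", "pos": "<S>", "chunk": "<S>"}
    pad_r = {"stem": "</S>", "pos": "</S>", "chunk": "</S>"}
    cur = sent[i]
    left = sent[i - 1] if i > 0 else pad_l
    right = sent[i + 1] if i + 1 < len(sent) else pad_r
    feats["stem+chunk"] = cur["stem"] + "|" + cur["chunk"]
    feats["stem+pos[-1]+pos[+1]"] = cur["stem"] + "|" + left["pos"] + "|" + right["pos"]
    feats["chunk+stem[-1]+stem[+1]"] = cur["chunk"] + "|" + left["stem"] + "|" + right["stem"]
    return feats
```

The concatenated strings let a linear model capture conjunctions (e.g., a modal stem inside a verb phrase) that the individual windowed features cannot express on their own.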
{
"text": "The CoNLL-2010 shared task Learning to detect hedges and their scope in natural language text focused on uncertainty detection. Two subtasks were defined at the shared task: The first task sought to recognize sentences that contain some uncertain language in two different domains and the second task sought to recognize lexical cues together with their linguistic scope in biological texts (i.e., the text span in terms of constituency grammar that covers the part of the sentence that is modified by the cue). The lexical cue recognition subproblem of the second task 7 is identical to the problem setting used in this study, with the only major difference being the types of uncertainty addressed: In the CoNLL-2010 task biological texts contained only epistemic, doxastic, and investigation types of uncertainty. Apart from these differences, the CoNLL-2010 shared task offers an excellent testbed for comparing our uncertainty detection model with other state-of-the-art approaches for uncertainty detection and to compare different classification approaches. Here we present our detailed experiments using the CoNLL data sets, analyze the performance of our models, and select the most suitable models for further experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CoNLL-2010 Experiments.",
"sec_num": "5.3.2"
},
{
"text": "CoNLL systems. The uncertainty detection systems that were submitted to the CoNLL shared task can be classified into three major types. The first set of systems treats the problem as a sentence classification task, that is, one to decide whether a sentence contains any uncertain element or not. These models operate at the sentence level and are unsuitable for cue detection. The second group handles the problem as a token Table 4 Results on the original CoNLL-2010 data sets. The first three rows correspond to our baseline, token-based, and sequence labeling models. The BEST/SEQ row shows the results of the best sequence labeling approach of the CoNLL shared task (for both domains), the BEST/TOK rows show the best token-based models, and the BEST/SENT rows show the best sentence-level classifiers (these models did not produce cue-level results). classification task, and classifies each token independently as uncertain (or not).",
"cite_spans": [],
"ref_spans": [
{
"start": 425,
"end": 432,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "CoNLL-2010 Experiments.",
"sec_num": "5.3.2"
},
{
"text": "Contextual information is only included in the form of feature functions. The third group of systems handled the task as a sequential token labeling problem, that is, determined the most likely label sequence of a sentence in one step, taking the information about neighboring labels into account. Sequence labeling and token classification approaches performed best for biological texts and sentence-level models and token classification approaches gave the best results for Wikipedia texts (see Table 6 in Farkas et al. [2010] ). Here we compare a state-of-the-art token classification and sequence labeling approach using a shared feature representation to decide which model to use in further experiments.",
"cite_spans": [
{
"start": 508,
"end": 528,
"text": "Farkas et al. [2010]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "BIOLOGICAL",
"sec_num": null
},
{
"text": "Classifier models. We used a first-order linear chain conditional random fields (CRF) model as a sequence labeler and a Maximum Entropy (Maxent) classifier model as a token classifier, implemented in the Mallet (McCallum 2002) package for training the uncertainty cue detectors. This choice was motivated by the fact that these were the most popular classification approaches among the CoNLL-2010 participants, and that CRF models are known to provide high accuracy for the detection of phrases with accurate boundaries (e.g., in named entity recognition). We trained the CRF and Maxent models with their default settings in Mallet for 200 iterations or until convergence (CRF), and also until convergence (Maxent) in each experimental set-up. As a baseline model, we applied a simple dictionary-based approach which classifies every uni-and bigram as uncertain that is tagged as uncertain in over 50% of the cases in the training data. Hence, it is a similar system to that presented by Tjong Kim Sang (2010) , without tuning the decision threshold for predicting uncertainty.",
"cite_spans": [
{
"start": 211,
"end": 226,
"text": "(McCallum 2002)",
"ref_id": null
},
{
"start": 988,
"end": 1009,
"text": "Tjong Kim Sang (2010)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BIOLOGICAL",
"sec_num": null
},
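The dictionary-based baseline can be sketched as follows (our own minimal reading of the description above: every uni- and bigram tagged as uncertain in over 50% of its training occurrences enters the dictionary; the token-span representation and function names are hypothetical):

```python
from collections import Counter

def train_baseline(sentences):
    """sentences: list of (tokens, cue_spans), where cue_spans is a set of
    (start, end) token-index pairs marking annotated cues.
    Returns the set of uni-/bigrams tagged uncertain in >50% of occurrences."""
    total, uncertain = Counter(), Counter()
    for tokens, cues in sentences:
        for n in (1, 2):
            for i in range(len(tokens) - n + 1):
                gram = tuple(tokens[i:i + n])
                total[gram] += 1
                if (i, i + n) in cues:
                    uncertain[gram] += 1
    return {g for g in total if uncertain[g] / total[g] > 0.5}

def predict(dictionary, tokens):
    """Label every occurrence of a dictionary uni-/bigram as an uncertain cue."""
    spans = set()
    for n in (1, 2):
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) in dictionary:
                spans.add((i, i + n))
    return spans
```

Because the 50% threshold is fixed rather than tuned, every occurrence of a dictionary entry is labeled uncertain at test time, regardless of context; this is exactly the disambiguation step the learned models add.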
{
"text": "CoNLL results. An overview of the results achieved on the CoNLL-2010 data sets can be found in Table 4 . A comparison of our models with the CoNLL systems reveals that our uncertainty detection model is very competitive when applied on the biological data set. Our CRF model trained on the official training data set of the shared task achieved a cue-level F-score of 81.4 and sentence-level F-score of 87.0 on the biological evaluation data set. These results would have come first in the shared task, with a marginal difference compared to the top performing participant. In contrast, our model is less competitive on the Wikipedia data set: The Maxent model achieved a cue-level F-score of 22.3 and sentence-level F-score of 58.1 on the Wikipedia evaluation data set, whereas our CRF model was not competitive with the best participating systems. The observation that sequence-labeling models perform worse than token-based approaches on Wikipedia, especially for sentence-level evaluation measures, coincides with the findings of the shared task: The discourse-level uncertainty cues in the Wikipedia data set are rather long and heterogeneous and sequence labeling models often revert to not annotating any token in a sentence when the phrase boundaries are hard to detect. Still, sequence labeling models have an advantage in terms of cue-level accuracy. This is not surprising because CRF is a state-of-the-art model for chunking / sequence labeling tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "BIOLOGICAL",
"sec_num": null
},
{
"text": "We conclude from Table 4 that our model is competitive with the state-of-the-art systems for detecting semantic uncertainty (which is closer to the biological subtask), but it is less suited to recognizing discourse-level uncertainty. In the subsequent experiments we used our CRF model, which performed best in detecting uncertainty cues in natural language sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "BIOLOGICAL",
"sec_num": null
},
{
"text": "In supervised machine learning, the task is to learn how to make predictions on previously unseen, new examples based on a statistical model learned from a collection of labeled training examples (i.e., a set of examples coupled with the desired output for them). The classification setting assumes a set of labels L, a set of features X, and a probability distribution p(X) describing the examples in terms of their features. Then the training examples are assumed to be given in the form of {x i , l i } pairs and the goal of classification is to estimate the label distribution p(L|X), which can be used later on to predict the labels for unseen examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": "Domain adaptation focuses on the problem where the same (or a closely related) learning task has to be solved in multiple domains which have different characteristics in terms of their features: The set of features X may be different or the probability distributions p(X) describing the inputs may be different. When the target tasks are treated as different (but related), the label distribution p(L|X) is dependent on the domain. That is, given a domain d, the problem can be formalized as modeling p(L|X) d based on X d , p(X) d and a set of examples: 8 In the context of domain adaptation, there is a target domain t and a source domain s, with labeled data available for both, and the goal is to induce a more accurate target domain model p(L|X) t from {x i,t , l i } \u222a {x i,s , l i } than the one learned from {x i,t , l i } only. In practical scenarios, the goal is to exploit the source data to acquire an accurate model from just limited target data which are alone insufficient to train an accurate in-domain model, and thus to port the model to a new domain with moderate annotation costs. The problem is difficult because it is nontrivial for a learning method to account for the different data (and label) distributions between target and source, which causes a remarkable drop in model accuracy when it is applied to classifying examples taken from the target domain.",
"cite_spans": [
{
"start": 555,
"end": 556,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": "{x i,d , l i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": "In our experimental context, both topic-and genre-related differences of texts pose an adaptation problem as these factors have an impact on both the vocabulary (p(X)) and the sense distributions of the cues (p(L|X)) found in different texts. There is some confusion in the literature regarding the terminology describing the various domain mismatches in the learning problem. For example, Daum\u00e9 III (2007) describes a domain adaptation method where he assumes that the label distribution is unchanged (we note here that this assumption is not exploited in the method, and that the label distribution changes in our problem), whereas Pan and Yang (2010) uses the term inductive transfer learning to refer to our scenario (in their paper, domain adaptation refers to a different setting). 9 In this study we always use the term domain adaptation to refer to our problem setting, that is, where both p(X) and p(L|X) are assumed to change.",
"cite_spans": [
{
"start": 390,
"end": 406,
"text": "Daum\u00e9 III (2007)",
"ref_id": null
},
{
"start": 634,
"end": 653,
"text": "Pan and Yang (2010)",
"ref_id": "BIBREF29"
},
{
"start": 788,
"end": 789,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": "In our experiments, we used various data sets taken from multiple genres and domains (see Section 5.1.1 for an overview) and applied a simple but effective domain adaptation model (Daum\u00e9 III 2007) for training our classifiers. In this model, domain adaptation is carried out by defining each feature over the target and source data sets twice-just once for target domain instances, and once for both the target and source domain instances. Formally, having a target domain t and a source domain s and n features {f 1 , f 2 , . . . f n }, for each f i we have a target-only version f i,t and a shared version f i,t+s . Each target domain example is described by 2n features:",
"cite_spans": [
{
"start": 180,
"end": 196,
"text": "(Daum\u00e9 III 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": "{ f 1,t , f 2,t , . . . f n,t , f 1,t+s , f 2,t+s , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
{
"text": ". . f n,t+s } and source domain examples are described by only the n shared features: { f 1,t+s , f 2,t+s , . . . f n,t+s }. Using the union of the source and target training data sets {x i,t , l i } \u222a {x i,s , l i } and this feature representation, any standard supervised machine learning technique can be used and it becomes possible for the algorithm to learn target-dependent and shared patterns at the same time and handle the changes in the underlying distributions. This easy domain adaptation technique has been found to work well in many NLP-oriented tasks. We used the CRF models introduced herein and in this way, we were able to exploit feature-label correspondences across domains (for features that behave consistently across domains) and also to learn patterns specific to the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Model.",
"sec_num": "5.3.3"
},
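The feature duplication at the heart of this technique is very short to express (a sketch of the augmentation step only; the suffix naming convention for the feature copies is ours):

```python
def augment(features, is_target):
    """Feature augmentation in the style of Daumé III (2007): every feature
    gets a shared (target+source) copy; target domain examples additionally
    get a target-only copy, so target examples carry 2n features."""
    out = {name + ":t+s": val for name, val in features.items()}
    if is_target:
        out.update({name + ":t": val for name, val in features.items()})
    return out
```

A feature that behaves consistently across domains accumulates weight on its shared copy from both data sets, while a target-specific pattern can be learned on the target-only copy without being diluted by source data.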
{
"text": "We defined several settings (target and source pairs) with varied domain and genre distances and target data set sizes. These experiments allowed us to study the potential of transferring knowledge across existing corpora for the accurate detection of uncertain language in a wide variety of text types. In our experiments, we used all the combinations of genres and domains that we found plausible. News texts (and their subdomains) were not used as source data because FactBank is significantly smaller than the other corpora (WikiWeasel or scientific texts). As the source data set is typically larger than the target data set in practical scenarios, news texts can only be used as target data. Abstracts were only used as source data because information extraction typically addresses full texts whereas abstracts just provide annotated data for development purposes. Besides these restrictions, we experimented with all possible target and source pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Domain and Genre Experiments",
"sec_num": "5.4"
},
{
"text": "We used four different machine-learning settings for each target-source pair in our investigations. In the purely cross-domain (CROSS) setting, the model was trained on the source domain and evaluated on the target (i.e., no labeled target domain data sets were used for training). In the purely in-domain setting (TARGET), we performed Table 5 Experimental results on different target and source domain pairs. The third column contains the ratio of the target train and source data sets' sizes in terms of sentences. DIST shows the distance of the source and target domain/genre ('-' same; '+' fine-grade difference; '++' coarse-grade difference; bio = biological; enc = encyclopedia; sci paper = scientific paper; sci abs = scientific abstract; sci paper hbc = scientific papers on human blood cell experiments; sci paper fly = scientific papers on Drosophila; sci paper bmc = scientific papers on bioinformatics). 10-fold cross-validation on the target data (i.e., no source domain data were used). In the two domain adaptation settings, we again performed 10-fold cross-validation on the target data but exploited the source data set (as described in Section 5.3). Here, we either used each sentence of the source data set (DA/ALL) or only those sentences that contained a cue observed in the target train data set (DA/CUE). Table 5 lists the results obtained on various target and source domains in various machine learning settings and Table 6 contains the absolute differences between a particular result and the in-domain (TARGET) results.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 1329,
"end": 1336,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 1442,
"end": 1449,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-Domain and Genre Experiments",
"sec_num": "5.4"
},
{
"text": "Fine-grained semantic uncertainty classification results are summarized in Tables 7 and 8. Table 7 contrasts the coarse-grained F cue with the unlabeled/binary F cue of finegrained experiments, therefore it quantifies the difference in accuracy due to the more difficult classification setting and the increased sparseness of the task. Table 8 shows the per class F cue scores, namely, how accurately our model recognizes the individual uncertainty types.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 99,
"text": "Tables 7 and 8. Table 7",
"ref_id": null
},
{
"start": 337,
"end": 344,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-Domain and Genre Experiments",
"sec_num": "5.4"
},
{
"text": "The absolute difference between the F-scores of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 6",
"sec_num": null
},
{
"text": "Comparison of cue-level binary (F bin ) and unlabeled F-scores (F unl ). Binary F-score corresponds to coarse-grained classification (uncertain vs. certain), and unlabeled F-score is the fine-grained classification converted to binary (disregarding the fine-grained category labels). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 7",
"sec_num": null
},
{
"text": "The per class cue-level F-scores in fine-grained classification. F crs , F tgt , and F da correspond to the CROSS, TARGET, and DA/CUE settings, respectively (same as previous). The DA/ALL setting is not shown for space reasons and due to its similarity to the DA/CUE results. The size of the target training data sets proved to be an important factor in these investigations. Hence, we performed experiments with different target data set sizes. We utilized the DA/ALL model (which is more robust for extremely small target data sizes [e.g., 100-400 sentences]) and performed the same 10-fold cross validation on the target data set as in Tables 5-8. For each fold of the cross-validation here, however, we just used n sentences (x axis of the figures) from the target training data set and a fixed set of 4,000 source sentences to alleviate the effect of varying data set sizes. Figure 5 depicts the learning curves for two target/source data set pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 880,
"end": 888,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Table 8",
"sec_num": null
},
{
"text": "As Table 5 shows, incorporating labeled data from different genres and/or domains consistently improves the performance. The successful applicability of domain adaptation tells us that the problem of detecting uncertainty has similar characteristics across genres and domains. The uncertainty cue lexicons of different domains and genres indeed share a core vocabulary and despite the differences in sense distributions, labeled data from a different source improves uncertainty classification in a new genre and domain if the different data sets are annotated consistently. This justifies our aim to create a consistent representation of uncertainty that can be applied to multiple domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "The size of the target and source data sets largely influences to what extent external data can improve results. The only case where domain adaptation had only a negligible effect (an F-score gain less than 1%) is where the target data set is itself very large. This is expected as the more target data one has, the less crucial it is to incorporate additional data with some undesirable characteristics (difference in style, domain, certain/uncertain sense distribution, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "The performance scores for the CROSS setting clearly indicate the domain/genre distance of the data sets: The more distant the domain and genre of the source and target data sets are, the more the CROSS performance (where no labeled target data is used) degrades, compared with the TARGET model. In general, when the distance between both the domain and the genre of texts is substantial (++/++ and +/+ rows in Tables 5 and 6 ), this accounts for a 6-10% decrease in both the sentence and cue-level F-scores. An exception is the case of encyclopedic source and news target domains. Here the performance is very close to the target domain performance. This indicates that these settings are not so different from each other as it might seem at the first glance. The encyclopedic and news genres share quite a lot of commonalities (compare cue distributions in Figure 4 , for instance). We verified this observation by using a knowledge-poor quantitative estimator of similarity between domains (Van Asch and Daelemans 2010): Using cosine as the similarity measure, the newswire and encyclopedia texts are found to be the second most similar domain pair in our experiments, with a score comparable to those obtained for the pairs of scientific article types bmc, hbc, and fly.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 426,
"text": "Tables 5 and 6",
"ref_id": "TABREF5"
},
{
"start": 860,
"end": 868,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
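Using cosine over term-frequency vectors as the similarity measure can be sketched as follows (a minimal illustration; the tokenization and any weighting details of the cited estimator are not reproduced here):

```python
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine between the term-frequency vectors of two corpora: a simple,
    knowledge-poor estimate of domain similarity."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

The measure is purely lexical: two corpora score high whenever their word-frequency profiles overlap, with no annotation or parsing required, which is what makes it usable for ranking candidate source domains before any labeling effort.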
{
"text": "When there is a domain or genre match between source and target (\u2212/+ and +/\u2212 rows in Tables 5 and 6 ), however, and the distance regarding the other is just moderate, the cross-training performance is close to or even better than the target-only results. That is, the larger amount of source training data balances the differences between the domains. These results indicate that the learned uncertainty classifiers can be directly applied to slightly different data sets. This suitability is due to the learned disambiguation models, which generalize well in similar settings. This is contrary to the findings of earlier studies, which built the uncertainty detectors using seed examples and bootstrapping. These models were not designed to learn any disambiguation models for the cue words found, and their performance degraded even for slightly different data (Szarvas 2008) .",
"cite_spans": [
{
"start": 863,
"end": 877,
"text": "(Szarvas 2008)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 85,
"end": 99,
"text": "Tables 5 and 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "Comparing the two domain adaptation procedures DA/CUE and DA/ALL, adaptation via transferring only the source sentences that contain a target domain cue is, on average, comparable to transferring all the data from the source domain. In other words, when a small but sufficient amount of target data is available, it suffices to use the source data corresponding to the uncertainty cues observed in the limited target data set. This observation has several consequences, namely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
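A minimal sketch of the DA/CUE selection step described above, assuming sentences are stored as (tokens, labels) pairs in which any label other than "O" marks a cue token; the data layout and function names are illustrative, not the paper's implementation:

```python
def target_cue_vocabulary(target_data):
    """Collect the cue words seen in the limited target annotation."""
    return {tok.lower()
            for tokens, labels in target_data
            for tok, lab in zip(tokens, labels) if lab != "O"}

def filter_source(source_data, target_cues):
    """Transfer a source sentence only if it mentions a known target cue."""
    return [(tokens, labels) for tokens, labels in source_data
            if any(tok.lower() in target_cues for tok in tokens)]

# Tiny target annotation: "may" is the only observed cue.
target = [("It may rain".split(), ["O", "EPISTEMIC", "O"])]
# Source sentences: only those containing an observed target cue are kept.
source = [("Results suggest binding".split(), ["O", "EPISTEMIC", "O"]),
          ("The protein may fold".split(), ["O", "O", "EPISTEMIC", "O"])]

cues = target_cue_vocabulary(target)
kept = filter_source(source, cues)
assert kept == [("The protein may fold".split(), ["O", "O", "EPISTEMIC", "O"])]
```

The filtered source data then extends the target training set, improving the disambiguation of the cues the target annotation already contains.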
{
"text": "r The source-only cues, or more precisely their disambiguation models, are not helpful for the target domains, as they cannot be adapted. This is due to the differences between the source and target disambiguation models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "r Similarly, domain adaptation improves the disambiguation models for the observed target cues, rather than introducing new vocabulary into the target domain. This mechanism coincides with our initial goal of using domain adaptation to learn better semantic models. This effect is the opposite of how bootstrapping-based weakly supervised approaches improve the performance in an underresourced domain. This observation suggests a promising future direction of combining the two approaches to maximize the gains while minimizing the annotation costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "r In a general context, we can effectively extend the data for a given domain if we have robust knowledge of the potential uncertainty vocabulary for that domain. Given the wide variety of the domains and genres of our data sets, it is reasonable to suppose that they represent uncertain language in general quite well, and the joint vocabularies provide a good starting point for a targeted data development for further domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "As regards the fine-grained classification results, Table 7 demonstrates that the fine-grained distinction results in only a small loss in performance, or none at all. The coarse-grained model is slightly more accurate than the fine-grained model (counting correctly recognized but misclassified cues as true positives) in most settings. The most significant difference is observed for the target-only settings, where no out-of-domain data are used for training and the data sets are accordingly smaller. A noticeable exception is when scientific abstracts are used for cross training: In those settings the coarse-grained model performs poorly, due to its lower recall, which we attribute to overfitting to the special characteristics of abstracts. The fact that in fine-grained classification the CROSS results consistently outperform the TARGET models (see Table 8 ), even for distant domain pairs, also underlines that the increased sparseness caused by differentiating the various subtypes of uncertainty is an important factor only for smaller data sets. The improvement from domain adaptation is clearly more prominent in fine-grained than in coarse-grained classification, however: The individual cue types benefit by 5-10 percentage points in terms of the F-score from out-of-domain data and domain adaptation. Moreover, as Table 8 shows, for the domain pairs and fine-grained classes where a sufficient number of positive examples is at hand, the per-class F cue scores are also around 80% and above. This means that it is possible to accurately identify the individual subtypes of semantic uncertainty, which also demonstrates the feasibility of the subcategorization and annotation scheme proposed in this study (Section 2). Another important observation is that the condition class represents a challenge for our model: Its performance is lower than that of the other classes, which can only in part be attributed to the fact that this is the least represented subtype in our data sets. As opposed to other cue types, condition cues are typically used in many different contexts and they may belong to other uncertainty classes as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 7",
"ref_id": null
},
{
"start": 857,
"end": 864,
"text": "Table 8",
"ref_id": null
},
{
"start": 1324,
"end": 1331,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "6.1"
},
{
"text": "Based on our experiments, we may conclude that a manually annotated training data set of 3,000-5,000 sentences is sufficient for training an accurate cue detector for a new genre/domain. The results of our learning curve experiments ( Figure 5 ) illustrate the situations where only a limited amount of annotated data (fewer than 3,000 sentences) is available for the target domain. The feasibility of decreasing the annotation effort and the real added value of domain adaptation are more prominent in this range. It is easy to see that the TARGET results approach the DA results as more target data becomes available. Figure 5 shows that the size of the target training data set at which the supervised TARGET setting outperforms the CROSS model (trained on 4,000 source sentences) is around 1,000 sentences. As we mentioned earlier, even distant domain data can improve the cue recognition model in the absence of a sufficient target data set. Figure 5 confirms this observation, as the CROSS and DA settings outperform the TARGET setting on each source-target data set pair. It can also be observed that the doxastic type is more domain-dependent than the others, and its results consistently improve as the size of the target domain annotation increases (which coincides with the cue frequency investigations of Section 5.1.3). In the news target domain, however, the investigation and epistemic classes benefit considerably from a small amount of annotated target data, but their performance scores increase only slightly after that. This indicates that most of the important domain-dependent (probably lexical) knowledge can be gathered from 100-400 sentences. From the biological experiments, we may conclude that the investigation class is already covered by the source domain (intuitively, investigation cues are well represented in the abstracts) and its results are not improved significantly by using more target data. The condition class is underrepresented in both the source and target data sets, and hence no reliable observations can be made regarding this subclass (see Table 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 611,
"end": 619,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 935,
"end": 943,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 2073,
"end": 2080,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Required Amount of Annotation",
"sec_num": "6.2"
},
{
"text": "Overall, if we would like to have an uncertainty cue detector for a new genre/domain: (i) we can achieve a performance of around 60-70% by using cross training, that is, without any annotation effort, depending on the difference between the domains; (ii) by annotating around 3,000 sentences, we can reach a performance of 70-80%, depending on the difficulty of the texts; (iii) we can obtain the same 70-80% results by annotating just 1,000 sentences and applying domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Required Amount of Annotation",
"sec_num": "6.2"
},
{
"text": "As might be expected, most of the erroneous cue predictions were due to vocabulary differences: for example, fear and accuse occurred only in news texts, which is why they were not recognized by models trained on biological or encyclopedia texts. Another example is the case of or, which is a frequent cue in biological texts but is rarely used as a cue in other domains; without domain adaptation, the model trained on biological texts marks quite a few occurrences of or as cues in the news and encyclopedia domains. Many of these errors were, however, eliminated by applying domain adaptation techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "Many errors were related to multi-class cues. These cues are especially hard to disambiguate because not only can they refer to several classes of uncertainty, but they typically have non-cue usage as well. For instance, the case of would is rather complicated because it can fulfill several functions: (10) EPISTEMIC USAGE ('IT IS HIGHLY PROBABLE'): Further biochemical studies on the mechanism of action of purified kinesin-5 from multiple systems would obviously be fruitful. (Corpus: fly) (11) CONDITIONAL: \"If religion was a thing that money could buy,/The rich would live and the poor would die.\" (Corpus: WikiWeasel) (12) FUTURE IN THE PAST: This Aarup can trace its history back to 1500, but it would be 1860's before it would become a town. (Corpus: WikiWeasel) (13) REPEATED ACTION IN THE PAST ('USED TO'): 'Becker' was the next T.V. Series for Paramount that Farrell would co-star in. (Corpus: WikiWeasel) (14) DYNAMIC MODALITY: Individuals would first have a small lesion at the site of the insect bite, which would eventually leave a small scar. (Corpus: WikiWeasel) (15) PRAGMATIC USAGE: Although some would dispute the fact, the joke related to a peculiar smell that follows his person. (Corpus: WikiWeasel)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "The epistemic uses of would are annotated as epistemic cues, whereas its occurrences in conditionals are marked as hypothetical cues. The habitual past meaning is not related to uncertainty, hence it is not annotated. The future-in-the-past meaning (i.e., the past tense of will), however, denotes an event that is known to have happened later, so it is certain. The dynamically modal would is similar to the future will (which is an instance of dynamic modality as well), but it is not annotated in the corpora. The pragmatic use of would does not refer to semantic uncertainty: the semantic value of the sentence would be exactly the same without it or if it were replaced with may, might, will, and so on, that is, some will/may/might/\u2205 dispute the fact means the same. It is rather a stylistic device to express uncertainty at the discourse level (i.e., there are some unidentified people who dispute the fact, hence the opinion cannot be associated with any definite source). The last two uses of would are not typically described in grammars of English and seem to be characteristic primarily of the news and encyclopedia domains. Thus it is advisable to explore such cases and treat them with special consideration when adapting an algorithm trained and tested in a specific domain to another domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "Another interesting example is may in its non-cue usage. Being (one of) the most frequent cues in each subcorpus, its non-cue usage is rather limited but can be found occasionally in FactBank and WikiWeasel. The following instance of may in FactBank was correctly marked as a non-cue by the cue detector when trained on Wikipedia texts. On the other hand, it was marked as a cue when trained on biological texts, since there were insufficient training examples of may not being a cue:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "(16) \"Well may we say 'God save the Queen,' for nothing will save the republic,\" outraged monarchist delegate David Mitchell said. (Corpus: FactBank)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "A final example to be discussed is concern. This word also has several uses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "(17) NOUN MEANING 'COMPANY': The insurance concern said all conversion rights on the stock will terminate on Nov. 30. (Corpus: FactBank) (18) NOUN MEANING 'WORRY': Concern about declines in other markets, especially New York, caused selling pressure. (Corpus: FactBank) (19) PREPOSITION: The company also said it continues to explore all options concerning the possible sale of National Aluminum's 54.5% stake in an aluminum smelter in Hawesville, Ky. (Corpus: FactBank) (20) VERB: Many of the predictions in these two data sets concern protein pairs and proteins that are not present in other data sets. (Corpus: bmc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "Among these examples, only the second one should be annotated as uncertain. POS tagging seems to provide enough information for excluding the verbal and prepositional uses of the word, but in the case of nominal usage, additional information is required to enable the system to decide whether it is an uncertainty cue or not (in this case, the noun in the 'company' sense cannot take an argument, while in the 'worry' sense it can, cf. [about declines]). Again, the frequency of the two senses depends heavily on the domain of the texts, which should also be considered when adapting the cue detector to a different domain. We should mention that the role of POS tagging is essential in cue detection because many ambiguities can be resolved on the basis of POS tags; hence, POS tagging errors can lead to a serious decline in performance. We think that an analysis of similar examples can further support domain adaptation and cue detection across genres and domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interesting Examples and Error Analysis",
"sec_num": "6.3"
},
{
"text": "In this article, we introduced an uncertainty cue detection model that performs well across different domains and genres. Even though several types of uncertainty exist, the available corpora and resources focus only on some of the possible types and thereby cover only particular aspects of the phenomenon. This means that the uncertainty models found in the literature are heterogeneous, and the results of experiments on different corpora are hardly comparable. These facts motivated us to offer a unified model of semantic uncertainty informed by both linguistic and computational considerations. In accordance with this classification, we reannotated three corpora from several domains and genres using our uniform annotation guidelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "Our results suggest that simple cross training can be employed and achieves a reasonable performance (60-70% cue-level F-score) when no annotated data is at hand for a new domain. When some annotated data is available (here, some means fewer than 3,000 annotated sentences for the target domain), domain adaptation techniques are the best choice: (i) they lead to a significant improvement compared to simple cross training, and (ii) they can provide a reasonable performance with significantly less annotation. In our experiments, annotating 3,000 sentences and training only on these is roughly equivalent to annotating 1,000 sentences and using external data with domain adaptation. If the size of the training data set is sufficiently large (more than 5,000 sentences), the effect of incorporating additional data, which may have some undesirable characteristics, is not crucial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "Comparing different domain adaptation techniques, we found that similar results could be attained when the source domain was filtered for sentences that contained cues in the target domain. This tells us that models learn to better disambiguate the cues seen in the target domain instead of finding new, unseen cues. In this sense, this approach can be regarded as a complementary method to weakly supervised techniques for lexicon extraction. A promising way to further minimize annotation costs while maximizing performance would be the integration of the two approaches, which we plan to investigate in the near future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
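The adaptation scheme itself is not spelled out in this section; the paper's bibliography lists Daum\u00e9 III (2007), whose feature augmentation is one widely used, easily implemented option, so the sketch below shows that scheme rather than the authors' exact setup. A minimal sketch, assuming binary indicator features and illustrative names:

```python
def augment(features, domain):
    """Copy each feature into a shared ("general") and a domain-specific
    version, so the learner can assign separate weights to behavior that
    is common across domains and behavior specific to one of them."""
    out = {}
    for name, value in features.items():
        out["general:" + name] = value
        out[domain + ":" + name] = value
    return out

# The cue candidate "may" in a source (e.g., biomedical) sentence:
src = augment({"token=may": 1, "pos=MD": 1}, "source")
# The same features in a target (e.g., news) sentence:
tgt = augment({"token=may": 1, "pos=MD": 1}, "target")

# Only the "general:" copies overlap, letting shared disambiguation
# knowledge transfer while domain-specific copies stay separate.
assert src.keys() & tgt.keys() == {"general:token=may", "general:pos=MD"}
```

Under this representation, source data that disagrees with the target usage of a cue can be absorbed by the source-specific weights without corrupting the shared model.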
{
"text": "In our study, we did not address dynamic modality (due to the lack of annotated resources), but the detection of such phenomena is also desirable. For instance, dynamically modal events cannot be treated as certain: the event of buying cannot be assigned the same truth value in They agreed to buy the company and They bought the company. Whereas the second sentence expresses a fact, the first one informs us about the intention of buying the company, which will certainly be carried out in a world where moral or business laws are observed, but at the moment it cannot be stated whether the transaction will take place (i.e., that it is certain). Hence, in the future, we also intend to integrate the identification of dynamically modal cues into our uncertainty cue detector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7."
},
{
"text": "Only 3 of the more than 20 participants of the related CoNLL-2010 shared task (Farkas et al. 2010) managed to exploit out-of-domain data to improve their results, and only by a negligible margin. The most successful CoNLL systems were based on these approaches, but different feature representations make direct comparisons difficult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The entire typology of semantic uncertainty phenomena and a test battery for their classification are described in a supplementary file. Together with the corpora and the experimental software, they are available at http://www.inf.u-szeged.hu/rgai/uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See http://en.wikipedia.org/wiki/Wikipedia:Embrace_weasel_words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The corpora are available at http://www.inf.u-szeged.hu/rgai/uncertainty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "POS tagging and chunking were performed on all corpora using the C&C Tools (Curran, Clark, and Bos 2007). As an intermediate level, participants of the first task could submit for evaluation the lexical cues found in sentences, without their scope, which gave some insight into the nature of cue detection on the Wikipedia corpus (where no scope annotation exists) as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The literature also describes the case when the set of labels depends on the domain, but we omit this case to simplify our notation and discussion. For details, see Pan and Yang (2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More on this can be found in Pan and Yang (2010) and at http://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "ConText: An algorithm for identifying contextual features from clinical text",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bloodgood",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Ed",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Mcshane",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Nirenburg",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Piatko",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Gramm",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": ".",
"middle": [
";"
],
"last": "Chapman",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"W"
],
"last": "",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "John",
"middle": [
"N"
],
"last": "Dowling",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL Workshop on BioNLP",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, Kathy, Michael Bloodgood, Mona Diab, Bonnie Dorr, Ed Hovy, Lori Levin, Marjorie McShane, Teruko Mitamura, Sergei Nirenburg, Christine Piatko, Owen Rambow, and Gramm Richardson. 2010. Modality Annotation Guidelines. Technical Report 4, Human Language Technology Center of Excellence, Baltimore, MD. Chapman, Wendy W., David Chu, and John N. Dowling. 2007. ConText: An algorithm for identifying contextual features from clinical text. In Proceedings of the ACL Workshop on BioNLP 2007, pages 81-88, Prague, Czech Republic.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "HedgeHunter: A system for hedge detection and uncertainty classification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Clausen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "120--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clausen, David. 2010. HedgeHunter: A system for hedge detection and uncertainty classification. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 120-125, Uppsala.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using hedges to enhance a disease outbreak report text mining system",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Conway",
"suffix": ""
},
{
"first": "Son",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the BioNLP 2009 Workshop",
"volume": "",
"issue": "",
"pages": "142--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conway, Mike, Son Doan, and Nigel Collier. 2009. Using hedges to enhance a disease outbreak report text mining system. In Proceedings of the BioNLP 2009 Workshop, pages 142-143, Boulder, CO.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Linguistically motivated large-scale NLP with C&C and Boxer",
"authors": [
{
"first": "James",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "256--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Curran, James, Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 33-36, Prague. Daum\u00e9 III, Hal. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263, Prague.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Domain adaptation for statistical classifiers",
"authors": [
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "26",
"issue": "",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daum\u00e9 III, Hal and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101-126.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The use of hedging across different disciplines and rhetorical sections of research articles",
"authors": [
{
"first": "Reza",
"middle": [],
"last": "Falahati",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 22nd NorthWest Linguistics Conference (NWLC22)",
"volume": "",
"issue": "",
"pages": "99--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Falahati, Reza. 2006. The use of hedging across different disciplines and rhetorical sections of research articles. In Proceedings of the 22nd NorthWest Linguistics Conference (NWLC22), pages 99-112, Burnaby.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic construction of rule-based ICD-9-CM coding systems",
"authors": [
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC Bioinformatics",
"volume": "9",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farkas, Rich\u00e1rd and Gy\u00f6rgy Szarvas. 2008. Automatic construction of rule-based ICD-9-CM coding systems. BMC Bioinformatics, 9:1-9.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The CoNLL-2010 Shared Task: Learning to detect hedges and their scope in natural language text",
"authors": [
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "M\u00f3ra",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farkas, Rich\u00e1rd, Veronika Vincze, Gy\u00f6rgy M\u00f3ra, J\u00e1nos Csirik, and Gy\u00f6rgy Szarvas. 2010. The CoNLL-2010 Shared Task: Learning to detect hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 1-12, Uppsala.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hedge detection using the RelHunter approach",
"authors": [
{
"first": "Eraldo",
"middle": [
"R"
],
"last": "Fernandes",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Carlos",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Crestana",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ruy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Milidi\u00fa",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "64--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernandes, Eraldo R., Carlos E. M. Crestana, and Ruy L. Milidi\u00fa. 2010. Hedge detection using the RelHunter approach. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 64-69, Uppsala.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A General natural-language text processor for clinical radiology",
"authors": [
{
"first": "Carol",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Philip",
"middle": [
"O"
],
"last": "Alderson",
"suffix": ""
},
{
"first": "H",
"middle": [
"M"
],
"last": "John",
"suffix": ""
},
{
"first": "James",
"middle": [
"J"
],
"last": "Austin",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"B"
],
"last": "Cimino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of the American Medical Informatics Association",
"volume": "1",
"issue": "2",
"pages": "161--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedman, Carol, Philip O. Alderson, John H. M. Austin, James J. Cimino, and Stephen B. Johnson. 1994. A General natural-language text processor for clinical radiology. Journal of the American Medical Informatics Association, 1(2):161-174.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding hedges by chasing weasels: Hedge detection using Wikipedia tags and shallow linguistic features",
"authors": [
{
"first": "Viola",
"middle": [],
"last": "Ganter",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganter, Viola and Michael Strube. 2009. Finding hedges by chasing weasels: Hedge detection using Wikipedia tags and shallow linguistic features. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 173-176, Suntec.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A hedgehop over a max-margin framework using hedge cues",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Georgescul",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "26--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgescul, Maria. 2010. A hedgehop over a max-margin framework using hedge cues. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 26-31, Uppsala.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hedging in academic writing and EAP textbooks",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Hyland",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "13",
"issue": "",
"pages": "239--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyland, Ken. 1994. Hedging in academic writing and EAP textbooks. English for Specific Purposes, 13(3):239-256.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Writing without conviction? Hedging in scientific research articles",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Hyland",
"suffix": ""
}
],
"year": 1996,
"venue": "Applied Linguistics",
"volume": "17",
"issue": "4",
"pages": "433--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyland, Ken. 1996. Writing without conviction? Hedging in scientific research articles. Applied Linguistics, 17(4):433-454.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Boosters, hedging and the negotiation of academic knowledge",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Hyland",
"suffix": ""
}
],
"year": 1998,
"venue": "Text",
"volume": "18",
"issue": "3",
"pages": "349--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyland, Ken. 1998. Boosters, hedging and the negotiation of academic knowledge. Text, 18(3):349-382.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Lehet\u0151s\u00e9g\u00e9s sz\u00fcks\u00e9gszer\u0171s\u00e9g",
"authors": [
{
"first": "Ferenc",
"middle": [],
"last": "Kiefer",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiefer, Ferenc. 2005. Lehet\u0151s\u00e9g\u00e9s sz\u00fcks\u00e9gszer\u0171s\u00e9g [Possibility and necessity].",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recognizing speculative language in biomedical research articles: A linguistically motivated perspective",
"authors": [
{
"first": "",
"middle": [],
"last": "Tinta Kiad\u00f3",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Budapest",
"suffix": ""
},
{
"first": "Halil",
"middle": [],
"last": "Kilicoglu",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Bergler",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "119--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tinta Kiad\u00f3, Budapest. Kilicoglu, Halil and Sabine Bergler. 2008. Recognizing speculative language in biomedical research articles: A linguistically motivated perspective. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, pages 46-53, Columbus, OH. Kilicoglu, Halil and Sabine Bergler. 2009. Syntactic dependency based heuristics for biological event extraction. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 119-127, Boulder, CO.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overview of BioNLP'09",
"authors": [
{
"first": "Jin",
"middle": [
"-"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Yoshinobu",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kano",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Jin-Dong, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun'ichi Tsujii. 2009. Overview of BioNLP'09",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Shared Task on Event Extraction",
"authors": [],
"year": null,
"venue": "Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task on Event Extraction. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 1-9, Boulder, OH.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Corpus annotation for mining biomedical events from literature",
"authors": [
{
"first": "Jin",
"middle": [
"-"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC Bioinformatics",
"volume": "9",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Jin-Dong, Tomoko Ohta, and Jun'ichi Tsujii. 2008. Corpus annotation for mining biomedical events from literature. BMC Bioinformatics, 9(Suppl 10).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Exploiting rich features for detecting hedges and their scope",
"authors": [
{
"first": "Xinxin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianping",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Xinxin, Jianping Shen, Xiang Gao, and Xuan Wang. 2010. Exploiting rich features for detecting hedges and their scope. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 78-83, Uppsala.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The language of bioscience: Facts, speculations, and statements in between",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Ying Qiu",
"suffix": ""
},
{
"first": "Padmini",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the HLT-NAACL 2004 Workshop: Biolink 2004, Linking Biological Literature, Ontologies and Databases",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Light, Marc, Xin Ying Qiu, and Padmini Srinivasan. 2004. The language of bioscience: Facts, speculations, and statements in between. In Proceedings of the HLT-NAACL 2004 Workshop: Biolink 2004, Linking Biological Literature, Ontologies and Databases, pages 17-24, Boston, Massachusetts, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Biomedical event annotation with CRFs and precision grammars",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mackinlay",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, BioNLP '09",
"volume": "",
"issue": "",
"pages": "77--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MacKinlay, Andrew, David Martinez, and Timothy Baldwin. 2009. Biomedical event annotation with CRFs and precision grammars. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, BioNLP '09, pages 77-85, Uppsala. McCallum, Andrew Kachites. 2002. MALLET: A Machine Learning for Language Toolkit. Available at http://mallet.cs.umass.edu.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Weakly supervised learning for hedge classification in scientific literature",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Medlock",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "992--999",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Medlock, Ben and Ted Briscoe. 2007. Weakly supervised learning for hedge classification in scientific literature. In Proceedings of the ACL, pages 992-999, Prague.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning the scope of hedge cues in biomedical texts",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the BioNLP 2009 Workshop",
"volume": "",
"issue": "",
"pages": "28--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morante, Roser and Walter Daelemans. 2009. Learning the scope of hedge cues in biomedical texts. In Proceedings of the BioNLP 2009 Workshop, pages 28-36, Boulder, CO.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Annotating modality and negation for a machine reading evaluation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CLEF 2011",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morante, Roser and Walter Daelemans. 2011. Annotating modality and negation for a machine reading evaluation. In Proceedings of CLEF 2011, Amsterdam, Netherlands.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Evaluating a meta-knowledge annotation scheme for bio-events",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Van Asch",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Raheel",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Negation and Speculation in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1398--1407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morante, Roser, Vincent Van Asch, and Walter Daelemans. 2010. Memory-based resolution of in-sentence scopes of hedge cues. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 40-47, Uppsala, Sweden. Nawaz, Raheel, Paul Thompson, and Sophia Ananiadou. 2010. Evaluating a meta-knowledge annotation scheme for bio-events. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, pages 69-77, Uppsala. Ozg\u00fcr, Arzucan and Dragomir R. Radev. 2009. Detecting speculations and their scopes in scientific text. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1398-1407, Singapore.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modality and the English Modals",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Robert",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, Frank Robert. 1979. Modality and the English Modals. Longman, London.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mood and Modality",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Robert",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, Frank Robert. 1986. Mood and Modality. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey on transfer learning",
"authors": [
{
"first": "Sinno",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Jialin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "22",
"issue": "10",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pan, Sinno Jialin and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Combining manual rules and supervised learning for hedge cue and scope detection",
"authors": [
{
"first": "Marek",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rei, Marek and Ted Briscoe. 2010. Combining manual rules and supervised learning for hedge cue and scope detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 56-63, Uppsala.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Exploring epistemic modality in academic discourse using corpora",
"authors": [
{
"first": "Vassiliki",
"middle": [],
"last": "Rizomilioti",
"suffix": ""
}
],
"year": 2006,
"venue": "Information Technology in Languages for Specific Purposes",
"volume": "7",
"issue": "",
"pages": "53--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rizomilioti, Vassiliki. 2006. Exploring epistemic modality in academic discourse using corpora. In Elisabet Arn\u00f3 Macia, Antonia Soler Cervera, and Carmen Rueda Ramos, editors, Information Technology in Languages for Specific Purposes, volume 7 of Educational Linguistics. Springer US, New York, pages 53-71.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Epistemic modality: From uncertainty to certainty in the context of information seeking as interactions with texts",
"authors": [
{
"first": "Victoria",
"middle": [
"L"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 2010,
"venue": "Information Processing & Management",
"volume": "46",
"issue": "5",
"pages": "533--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rubin, Victoria L. 2010. Epistemic modality: From uncertainty to certainty in the context of information seeking as interactions with texts. Information Processing & Management, 46(5):533-540.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Certainty identification in texts: Categorization model and manual tagging results",
"authors": [
{
"first": "Victoria",
"middle": [
"L"
],
"last": "Rubin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Elizabeth",
"suffix": ""
},
{
"first": "Noriko",
"middle": [],
"last": "Liddy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kando",
"suffix": ""
}
],
"year": 2005,
"venue": "Computing Attitude and Affect in Text: Theory and Applications (the Information Retrieval Series)",
"volume": "",
"issue": "",
"pages": "61--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rubin, Victoria L., Elizabeth D. Liddy, and Noriko Kando. 2005. Certainty identification in texts: Categorization model and manual tagging results. In James G. Shanahan, Yan Qu, and Janyce Wiebe, editors, Computing Attitude and Affect in Text: Theory and Applications (the Information Retrieval Series), Springer Verlag, New York, pages 61-76.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Exploiting CCG structures with tree kernels for speculation detection",
"authors": [
{
"first": "Liliana",
"middle": [],
"last": "S\u00e1nchez",
"suffix": ""
},
{
"first": "Baoli",
"middle": [],
"last": "Mamani",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "126--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e1nchez, Liliana Mamani, Baoli Li, and Carl Vogel. 2010. Exploiting CCG structures with tree kernels for speculation detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 126-131, Uppsala.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A Factuality Profiler for Eventualities in Text",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saur\u00ed, Roser. 2008. A Factuality Profiler for Eventualities in Text. Ph.D. thesis, Brandeis University, Waltham, MA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "FactBank: A corpus annotated with event factuality. Language Resources and Evaluation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Saur\u00ed",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "43",
"issue": "",
"pages": "227--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saur\u00ed, Roser and James Pustejovsky. 2009. FactBank: A corpus annotated with event factuality. Language Resources and Evaluation, 43:227-268.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users",
"authors": [
{
"first": "",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Burr",
"suffix": ""
},
{
"first": "Lewis",
"middle": [],
"last": "Craven",
"suffix": ""
},
{
"first": "Canada",
"middle": [],
"last": "Vancouver",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shatkay",
"suffix": ""
},
{
"first": "Fengxia",
"middle": [],
"last": "Hagit",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "W. John",
"middle": [],
"last": "Rzhetsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wilbur",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the NIPS Workshop on Cost-Sensitive Learning",
"volume": "24",
"issue": "",
"pages": "2086--2093",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Settles, Burr, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NIPS Workshop on Cost-Sensitive Learning, pages 1-10, Vancouver, Canada. Shatkay, Hagit, Fengxia Pan, Andrey Rzhetsky, and W. John Wilbur. 2008. Multi-dimensional classification of biomedical text: Toward automated, practical provision of high-utility text to diverse users. Bioinformatics, 24(18):2086-2093.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Using maximum entropy model to extract protein-protein interaction information from biomedical literature",
"authors": [
{
"first": "Chengjie",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Guan",
"suffix": ""
}
],
"year": 2007,
"venue": "Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues",
"volume": "",
"issue": "",
"pages": "730--737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sun, Chengjie, Lei Lin, Xiaolong Wang, and Yi Guan. 2007. Using maximum entropy model to extract protein-protein interaction information from biomedical literature. In De-Shuang Huang, Donald C. Wunsch, Daniel S. Levine, and Kang-Hyun Jo, editors, Advanced Intelligent Computing Theories and Applications. With Aspects of Theoretical and Methodological Issues. Springer Verlag, Heidelberg, pages 730-737.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Hedge classification in biomedical texts with a weakly supervised selection of keywords",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "281--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Szarvas, Gy\u00f6rgy. 2008. Hedge classification in biomedical texts with a weakly supervised selection of keywords. In Proceedings of ACL-08: HLT, pages 281-289, Columbus, OH.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Uncertainty detection as approximate max-margin sequence labelling",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hassel",
"suffix": ""
},
{
"first": "Gunnar",
"middle": [],
"last": "Eriksson",
"suffix": ""
},
{
"first": "Hercules",
"middle": [],
"last": "Dalianis",
"suffix": ""
},
{
"first": "Jussi",
"middle": [],
"last": "Karlgren",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "84--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T\u00e4ckstr\u00f6m, Oscar, Sumithra Velupillai, Martin Hassel, Gunnar Eriksson, Hercules Dalianis, and Jussi Karlgren. 2010. Uncertainty detection as approximate max-margin sequence labelling. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 84-91, Uppsala.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A cascade method for detecting hedges and their scope in natural language text",
"authors": [
{
"first": "",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Buzhou",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shixi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "13--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tang, Buzhou, Xiaolong Wang, Xuan Wang, Bo Yuan, and Shixi Fan. 2010. A cascade method for detecting hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 13-17, Uppsala.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Categorising modality in biomedical texts",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Venturi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Mcnaught",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the LREC 2008 Workshop on Building and Evaluating Resources for Biomedical Text Mining",
"volume": "",
"issue": "",
"pages": "27--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thompson, Paul, Giulia Venturi, John McNaught, Simonetta Montemagni, and Sophia Ananiadou. 2008. Categorising modality in biomedical texts. In Proceedings of the LREC 2008 Workshop on Building and Evaluating Resources for Biomedical Text Mining, pages 27-34, Marrakech, Morocco.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A baseline approach for detecting sentences containing uncertainty",
"authors": [
{
"first": "Tjong",
"middle": [],
"last": "Kim Sang",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "148--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tjong Kim Sang, Erik. 2010. A baseline approach for detecting sentences containing uncertainty. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 148-150, Uppsala.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Machine learning and rule-based approaches to assertion classification",
"authors": [
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Xiaoran",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tawanda",
"middle": [],
"last": "Sibanda",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of the American Medical Informatics Association",
"volume": "16",
"issue": "1",
"pages": "109--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uzuner,\u00d6zlem, Xiaoran Zhang, and Tawanda Sibanda. 2009. Machine learning and rule-based approaches to assertion classification. Journal of the American Medical Informatics Association, 16(1):109-115.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Analyzing text in search of bio-molecular events: A high-precision machine learning framework",
"authors": [
{
"first": "",
"middle": [],
"last": "Van Asch",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "128--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van Asch, Vincent and Walter Daelemans. 2010. Using domain similarity for performance estimation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 31-36, Uppsala. Van Landeghem, Sofie, Yvan Saeys, Bernard De Baets, and Yves Van de Peer. 2009. Analyzing text in search of bio-molecular events: A high-precision machine learning framework. In Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task, pages 128-136, Boulder, CO.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Detecting uncertainty in biomedical literature: A simple disambiguation approach using sparse random indexing",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of SMBM 2010",
"volume": "",
"issue": "",
"pages": "75--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velldal, Erik. 2010. Detecting uncertainty in biomedical literature: A simple disambiguation approach using sparse random indexing. In Proceedings of SMBM 2010, pages 75-83, Cambridge.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Resolving speculation: MaxEnt cue classification and dependency-based scope rules",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "48--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velldal, Erik, Lilja \u00d8vrelid, and Stephan Oepen. 2010. Resolving speculation: MaxEnt cue classification and dependency-based scope rules. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 48-55, Uppsala.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The BioScope Corpus: Biomedical texts annotated for uncertainty, negation and their scopes",
"authors": [
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "M\u00f3ra",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC Bioinformatics",
"volume": "9",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincze, Veronika, Gy\u00f6rgy Szarvas, Rich\u00e1rd Farkas, Gy\u00f6rgy M\u00f3ra, and J\u00e1nos Csirik. 2008. The BioScope Corpus: Biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics, 9(Suppl 11):S9.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Fine-grained Subjectivity and Sentiment Analysis: Recognizing the Intensity, Polarity, and Attitudes of Private States",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ann",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, Theresa Ann. 2008. Fine-grained Subjectivity and Sentiment Analysis: Recognizing the Intensity, Polarity, and Attitudes of Private States. Ph.D. thesis, University of Pittsburgh, PA.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Hedge detection and scope finding by sequence labeling with normalized feature selection",
"authors": [
{
"first": "Shaodian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task",
"volume": "",
"issue": "",
"pages": "92--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Shaodian, Hai Zhao, Guodong Zhou, and Bao-Liang Lu. 2010. Hedge detection and scope finding by sequence labeling with normalized feature selection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL-2010): Shared Task, pages 92-99, Uppsala.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "DYNAMIC: I have to go. (5) DOXASTIC: He believes that the Earth is flat. (6) INVESTIGATION: We examined the role of NF-kappa B in protein activation. (7) CONDITION: If it rains, we'll stay in."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Learning curves: Results achieved with different target train sizes. The left and right figures show two selected source/target pairs. The upper figures depict coarse-grained classification results (F cue ); DA, CROSS, and TARGET with the same settings as in Table 5. The lower figures show the per class F cue of the DA/ALL model in the fine-grained classification."
},
"TABREF0": {
"html": null,
"text": "Uncertainty cues.",
"num": null,
"content": "<table><tr><td>Adjectives / adverbs</td><td/><td/></tr><tr><td/><td colspan=\"2\">probable, likely, possible, unsure, possibly, perhaps, etc. epistemic</td></tr><tr><td>Auxiliaries</td><td/><td/></tr><tr><td/><td>may, might, can, would, should, could, etc.</td><td>semantic</td></tr><tr><td>Verbs</td><td/><td/></tr><tr><td>speculative:</td><td>suggest, question, seem, appear, favor, etc.</td><td>epistemic</td></tr><tr><td>psych:</td><td>think, believe, etc.</td><td>doxastic</td></tr><tr><td>analytic:</td><td>investigate, analyze, examine, etc.</td><td>investigation</td></tr><tr><td>prospective:</td><td>plan, want, order, allow, etc.</td><td>dynamic</td></tr><tr><td>Conjunctions</td><td/><td/></tr><tr><td/><td>if, whether, etc.</td><td>investigation</td></tr><tr><td>Nouns</td><td/><td/></tr><tr><td>nouns derived</td><td>speculation, proposal, consideration, etc.</td><td>same as the verb</td></tr><tr><td>from uncertain verb:</td><td/><td/></tr><tr><td>other</td><td>rumor, idea, etc.</td><td>doxastic</td></tr><tr><td>uncertain nouns:</td><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "Data on the corpora. sent. = sentence; epist. = epistemic cue; dox. = doxastic cue; inv. = investigation cue; cond. = condition cue.",
"num": null,
"content": "<table><tr><td>Data Set</td><td colspan=\"6\">#sent. #epist. #dox. #inv. #cond. Total</td></tr><tr><td>BioScope papers</td><td>7676</td><td>1373</td><td>220</td><td>295</td><td>187</td><td>2075</td></tr><tr><td colspan=\"2\">BioScope abstracts 11797</td><td>2478</td><td>200</td><td>784</td><td>24</td><td>3486</td></tr><tr><td>BioScope total</td><td>19473</td><td>3851</td><td>420</td><td>1079</td><td>211</td><td>5561</td></tr><tr><td>WikiWeasel</td><td>20756</td><td>1171</td><td>909</td><td>94</td><td>491</td><td>3265</td></tr><tr><td>FactBank</td><td>3123</td><td>305</td><td>201</td><td>36</td><td>178</td><td>720</td></tr><tr><td>Total</td><td>43352</td><td>5927</td><td>1530</td><td>1209</td><td>880</td><td>9546</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "relative to the baseline TARGET setting.",
"num": null,
"content": "<table><tr><td>CROSS</td><td>TARGET</td><td>DA/ALL</td><td>DA/CUE</td></tr></table>",
"type_str": "table"
}
}
}
}