{
"paper_id": "N07-1040",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:59.308161Z"
},
"title": "Whose idea was this, and why does it matter? Attributing scientific work to citations",
"authors": [
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language and Information Processing Group University of Cambridge Computer Laboratory",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language and Information Processing Group University of Cambridge Computer Laboratory",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Scientific papers revolve around citations, and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse. In this paper, we introduce the scientific attribution task, which links different linguistic expressions to citations. We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning, a rhetorical classification task.",
"pdf_parse": {
"paper_id": "N07-1040",
"_pdf_hash": "",
"abstract": [
{
"text": "Scientific papers revolve around citations, and for many discourse level tasks one needs to know whose work is being talked about at any point in the discourse. In this paper, we introduce the scientific attribution task, which links different linguistic expressions to citations. We discuss the suitability of different evaluation metrics and evaluate our classification approach to deciding attribution both intrinsically and in an extrinsic evaluation where information about scientific attribution is shown to improve performance on Argumentative Zoning, a rhetorical classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the recent past, there has been a focus on information management from scientific literature. In the genetics domain, for instance, information extraction of genes and gene-protein interactions helps geneticists scan large amounts of information (e.g., as explored in the TREC Genomics track (Hersh et al., 2004) ). Elsewhere, citation indexes (Garfield, 1979) provide bibliometric data about the frequency with which particular papers are cited. The success of citation indexers such as CiteSeer (Giles et al., 1998) and Google Scholar relies on the robust detection of formal citations in arbitrary text. In bibliographic information retrieval, anchor text, i.e., the context of a citation can be used to characterise (index) the cited paper using terms outside of that paper (Bradshaw, 2003) ; O'Connor (1982) presents an approach for identifying the area around citations where the text focuses on that citation. And automatic citation classification (Nanba and Okumura, 1999; Teufel et al., 2006) determines the function that a citation plays in the discourse.",
"cite_spans": [
{
"start": 295,
"end": 315,
"text": "(Hersh et al., 2004)",
"ref_id": "BIBREF5"
},
{
"start": 347,
"end": 363,
"text": "(Garfield, 1979)",
"ref_id": "BIBREF2"
},
{
"start": 500,
"end": 520,
"text": "(Giles et al., 1998)",
"ref_id": "BIBREF3"
},
{
"start": 781,
"end": 797,
"text": "(Bradshaw, 2003)",
"ref_id": "BIBREF0"
},
{
"start": 800,
"end": 815,
"text": "O'Connor (1982)",
"ref_id": "BIBREF10"
},
{
"start": 958,
"end": 983,
"text": "(Nanba and Okumura, 1999;",
"ref_id": "BIBREF9"
},
{
"start": 984,
"end": 1004,
"text": "Teufel et al., 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For such information access and retrieval purposes, the relevance of a citation within a paper is often crucial. One can estimate how important a citation is by simply counting how often it occurs in the paper. But as Kim and Webber (2006) argue, this ignores many expressions in text which refer to the cited author's work but which are not as easy to recognise as citations. They address the resolution of instances of the third person personal pronoun \"they\" in astronomy papers: it can either refer to a citation or to some entities that are part of research within the paper (e.g., planets or galaxies). Several applications should profit in principle from detecting connections between referring expressions and citations. For instance, in citation function classification, the task is to find out if a citation is described as flawed or as useful. Consider:",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "Kim and Webber (2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most computational models of discourse are based primarily on an analysis of the intentions of the speakers [Cohen and Perrault, 1979] [Allen and Perrault, 1980] [Grosz and Sidner, 1986] WEAK . The speaker will form intentions based on his goals and then act on these intentions, producing utterances. The hearer will then reconstruct a model of the speaker's intentions upon hearing the utterance. This approach has many strong points, but does not provide a very satisfactory account of the adherence to discourse conventions in dialogue.",
"cite_spans": [
{
"start": 108,
"end": 134,
"text": "[Cohen and Perrault, 1979]",
"ref_id": null
},
{
"start": 135,
"end": 161,
"text": "[Allen and Perrault, 1980]",
"ref_id": null
},
{
"start": 162,
"end": 186,
"text": "[Grosz and Sidner, 1986]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The three citations above are described as flawed (detectable by \"does not provide a very satisfactory account\"), and thus receive the label Weak. However, in order to detect this, one must first realise that \"this approach\" refers to the three cited papers. A contrasting hypothesis could be that the citations are used (thus deserving the label Use); the cue phrase \"based on\" might make us think so (as in the context \"our work is based on\"). This, however, can be ruled out if we know that \"the speaker\" is not referring to some aspect of the current paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We define an attribution task where possible referents are members of the reference list (i.e., each cited paper), the Current-Paper, and a back-off category No-Specific-Paper for markables that are not attributable to any specific paper(s). Our markables are as follows: all definite descriptions (e.g., \"the hearer\", and including demonstrative noun phrases such as \"these intentions\"), all \"work\" nouns, and all pronouns (possessive, personal and demonstrative); cf. the underlined strings in the above example. Our notion of attribution link encompasses two relations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "1. Anaphoric: The referents are entire research papers, or the papers' authors 2. Subpart: The referents are some component of an approach/argument/claim in the research paper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "There are two tasks: attributing a linguistic expression to the right paper (including the current paper) -a task we call scientific attribution -and deciding whether or not the expression is anaphoric to the entirety of the paper, or just to some subpart of it. Kim and Webber (2006) solve the problem of distinguishing between these relations for one case. They decide whether the pronoun \"they\" anaphorically refers to the authors of a cited paper, or whether it refers to some entity that is discussed in (a subpart of) a paper (e.g., \"galaxies\"). In this paper, we tackle the other problem of scientific attribution.",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "Kim and Webber (2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "We do not distinguish between the two types of links stated above, but only identify which citation(s) a linguistic expression is attributable to. For tasks of interest to us, it is not enough to only consider anaphoric references to entire papers; authors often make statements comparing/using/criticising aspects or subparts of cited work. We therefore consider a far wider range of markables than Kim and Webber's single pronoun \"they\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "Our attribution task differs from the traditional anaphora resolution task in that we have a fixed list of possible referents (the reference list items, Current-Paper or No-Specific-Paper) that are known upfront. Also, we do not form co-reference chains; we attribute a referring expression directly to one or more referents. Ours is therefore a multi-label classification task, where the citations, Current-Paper and No-Specific-Paper are the labels, and where one or more labels are assigned to each markable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "We evaluate intrinsically by comparing to human-annotated attribution, and extrinsically by showing that automatically acquired knowledge about scientific attribution improves performance on a discourse classification task-Argumentative Zoning (Teufel and Moens, 2002) , where sentences are labelled as one of {Own, Other, Background, Textual, Aim, Basis, Contrast} according to their role in the author's argument.",
"cite_spans": [
{
"start": 244,
"end": 268,
"text": "(Teufel and Moens, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "We describe our data in \u00a73 and methodology in \u00a74, discuss evaluation metrics in \u00a75, and evaluate intrinsically in \u00a76 and extrinsically in \u00a77.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The scientific attribution task",
"sec_num": "2"
},
{
"text": "We used data from the CmpLg (Computation and Language archive; 320 conference articles in computational linguistics). The articles are in XML format.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We produced an annotated corpus (10 articles, 4290 data points, i.e., markables) based on written guidelines. The task was found to be quite intuitive by our annotators, and this was reflected in high agreement: Krippendorff's alpha of more than 0.8 (2 annotators, 3 papers, 1429 data points) on the attribution task. The distribution of classes was, as expected, quite skewed: 69% of markables are attributable to Current-Paper, 7% to no specific paper and 24% to specific references (on average, 1.7 per reference). Details about the annotation process and human agreement figures can be found in Siddharthan and Teufel (2007).",
"cite_spans": [
{
"start": 617,
"end": 630,
"text": "Teufel (2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We frame the attribution problem as a classification task: Given a markable (the definite description/pronoun/work noun under consideration), a binary yes/no decision is made for each cited paper, and a binary yes/no decision is made for whether the markable is attributable to the current paper. The list of labels for the markable is compiled by including all the citations for which the machine learner returns yes, and Current-Paper if the learner returns yes. If the list is empty (learner returns no for everything), the label is No-Specific-Paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning Approach",
"sec_num": "4"
},
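The label-compilation step described above (collect every "yes" from the binary classifiers, back off to No-Specific-Paper when all decisions are "no") can be sketched as follows. This is a minimal illustration; the function name and the dict encoding of the per-citation decisions are our own, not from the paper.

```python
def compile_labels(citation_votes, current_paper_vote):
    """Combine binary attribution decisions into a label set.

    citation_votes: dict mapping a reference-list item to the learner's
        yes/no decision for that citation.
    current_paper_vote: the separate learner's yes/no decision for
        attribution to the current paper.
    """
    labels = {ref for ref, vote in citation_votes.items() if vote}
    if current_paper_vote:
        labels.add("Current-Paper")
    # Back-off: if every binary decision was "no", the markable is
    # attributed to no specific paper.
    return labels or {"No-Specific-Paper"}
```

For example, `compile_labels({"Cohen79": True, "Allen80": False}, False)` yields `{"Cohen79"}`, while all-"no" input yields `{"No-Specific-Paper"}`.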
{
"text": "Since the model for whether a markable is attributable to the current work is likely to be different from the model for whether it is attributable to a citation, we trained separate models for the two problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning Approach",
"sec_num": "4"
},
{
"text": "For each data point to be classified (called NP below), we create a machine learning instance for each reference list item by automatically computing the following features from POStagged text:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to a citation",
"sec_num": "4.1"
},
{
"text": "1. Properties of data point (NP) and the closest Citation instance (CIT) of the reference list item:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to a citation",
"sec_num": "4.1"
},
{
"text": "(a) Type of NP (Definite Description/Work Noun/Pronoun) (b) CIT is a self Citation or not (c) CIT is syntactic (in running text) or parenthetical (d) Is CIT Hobbs' prediction (searching left-right starting from current sentence and then considering previous sentences, is CIT the first citation or reference to current work found)?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to a citation",
"sec_num": "4.1"
},
{
"text": "2. Distance measures:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to a citation",
"sec_num": "4.1"
},
{
"text": "(a) Dist. between NP and CIT measured in words (b) Dist. between NP and CIT measured in sentences (c) Dist. between NP and CIT measured in paragraphs (d) Is CIT after NP in the discourse (cataphor)? (e) Distance between CIT and the closest first person pronoun or \"this paper\" in words We have a chicken and egg problem with calculating the distance of a reference to current work in 2(e). Unlike citations, these are not unambiguously marked in the text. We calculate distance from the closest first person pronoun (even though these could possibly refer to a self citation, rather than current work) or the phrase \"this paper\", which can again refer to other citations but predominantly refers to current work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to a citation",
"sec_num": "4.1"
},
{
"text": "We use the same features for the second classifier that makes the decision on whether the data point refers to Current-Paper, with the following changes: Features 1(b,c) are removed as they are meaningless; 1(d) checks Hobbs' prediction for a first person pronoun/\"this paper\", rather than CIT; in 2(a-d), the distance is measured between the closest first person pronoun/\"this paper\" and the markable, rather than a citation and the markable; similarly, in 3(b,c) we count instances of first person pronoun/\"this paper\"; for 2(e), we now calculate the distance of the closest citation instance. In short, the same features are used, but current work and citations are swapped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deciding attribution to current work",
"sec_num": "4.2"
},
{
"text": "We consider two evaluation metrics. The first is the scoring system used for the co-reference task in the Message Understanding Conferences MUC-6 and MUC-7. The second is Krippendorff's \u03b1. We briefly discuss both below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5"
},
{
"text": "The MUC-6/MUC-7 Co-reference evaluation metric (Vilain et al., 1995) works by comparing co-reference classes across two annotated files. Calling one annotation the \"model\" and the other the \"system\", for each co-reference class S in the model, c(S) is the minimal number of co-reference links needed to generate the class (this is one less than the cardinality of the class; c(S) = |S| \u2212 1). m(S) is the number of \"missing\" links in the system annotation relative to the co-reference class as marked up in the model. In other words, this is the minimum number of co-reference links that need to be added to the system annotation to fully generate the co-reference class S in the model. Recall error is then RE(S) = m(S)/c(S) and Recall is",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The MUC-6/MUC-7 Metric",
"sec_num": "5.1"
},
{
"text": "R(S) = 1 \u2212 RE(S) = (c(S) \u2212 m(S)) / c(S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MUC-6/MUC-7 Metric",
"sec_num": "5.1"
},
{
"text": ". Recall for the entire file (or set of files) is calculated by summing over all co-reference classes in the model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MUC-6/MUC-7 Metric",
"sec_num": "5.1"
},
{
"text": "R = \u03a3_i [c(S_i) \u2212 m(S_i)] / \u03a3_i c(S_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MUC-6/MUC-7 Metric",
"sec_num": "5.1"
},
{
"text": "Precision (P ) is calculated by swapping the model and system and the f-measure (F = 2R \u00d7 P/(R + P )) is symmetric with respect to both annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MUC-6/MUC-7 Metric",
"sec_num": "5.1"
},
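The MUC scheme described above can be computed directly from two partitions of the mentions. The sketch below follows the missing-links formulation (recall over model classes, precision by swapping model and system); the function names and the list-of-sets input encoding are our own.

```python
def muc_recall(model, system):
    """MUC recall of `system` against `model` (Vilain et al., 1995).

    model, system: partitions given as lists of sets of mention ids.
    For each model class S, c(S) = |S| - 1 links are needed to build it;
    m(S) is the number of missing links, i.e. one less than the number
    of system partitions that S is scattered across (mentions absent
    from every system class count as their own singleton class).
    """
    num = den = 0
    for S in model:
        parts = set()
        singletons = 0
        for mention in S:
            for i, T in enumerate(system):
                if mention in T:
                    parts.add(i)
                    break
            else:
                singletons += 1  # mention missing from system: singleton
        p = len(parts) + singletons
        num += len(S) - p   # c(S) - m(S)
        den += len(S) - 1   # c(S)
    return num / den

def muc_f(model, system):
    """Symmetric f-measure: precision is recall with the roles swapped."""
    r = muc_recall(model, system)
    p = muc_recall(system, model)
    return 2 * r * p / (r + p) if r + p else 0.0
```

For instance, if the model has one class {A, B, C, D} and the system splits it into {A, B} and {C, D}, recall is (4 - 2)/(4 - 1) = 2/3 while precision is 1, illustrating how coarse the link-based score is.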
{
"text": "We follow Passonneau (2004) and Poesio and Artstein (2005) in using Krippendorff (1980) 's \u03b1 metric to compute agreement between annotations. The advantage of \u03b1 over the more commonly used \u03ba metric is that \u03b1 allows for partial agreement when annotators assign multiple labels to the same markable; in this case calculating agreement on a markable requires a more graded agreement calculation than the \"1 if sets are identical and 0 otherwise\" provided for by \u03ba. Krippendorff's \u03b1 measures disagreement, and allows for the use of distance metrics to calculate partial disagreement. Following Passonneau, we present results using four distance metrics:",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "Passonneau (2004)",
"ref_id": "BIBREF11"
},
{
"start": 32,
"end": 58,
"text": "Poesio and Artstein (2005)",
"ref_id": "BIBREF12"
},
{
"start": 68,
"end": 87,
"text": "Krippendorff (1980)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "1. (N)ominal: Two sets have distance N = 0 if they are identical and N = 1 if they are not. \u03b1 calculated using the nominal distance metric is equivalent to \u03ba. 2. (J)accard: Two sets A and B have distance J = 1 \u2212 |A \u2229 B|/|A \u222a B|. In other words, the distance between two sets grows as their intersection shrinks and their union grows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "3. (D)ice: Two sets A and B have distance D = 1 \u2212 2 \u00d7 |A \u2229 B|/(|A| + |B|). In practice, the Dice distance metric behaves similarly to the Jaccard metric, but tends to be smaller, resulting in slightly higher \u03b1. 4. (M)ASI: This is the Jaccard distance J weighted by a monotonicity distance m where, m = 0 if two sets are identical; m = 0.33 if one is a subset of the other; m = 0.67 if the intersection and the two set differences are all non-null; m = 1 if the two sets are disjoint. Formally, the MASI metric is M = m \u00d7 J.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "As an example, consider two sets {a, b, c} and {b, c, d}. The distances between these sets are N = 1, J = 1 \u2212 2/4 = 0.5, D = 1 \u2212 2\u00d72/(3+3) = 0.33 and M = 0.67 \u00d7 0.5 = 0.33.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
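The four distance metrics listed above are straightforward to implement on Python sets; the sketch below follows the definitions as given (including the 0.33/0.67 monotonicity constants for MASI), with function names of our own choosing. The worked example with {a, b, c} and {b, c, d} serves as a check.

```python
def nominal(a, b):
    """N: 0 if the sets are identical, 1 otherwise."""
    return 0.0 if a == b else 1.0

def jaccard(a, b):
    """J = 1 - |A intersect B| / |A union B|."""
    return 1.0 - len(a & b) / len(a | b)

def dice(a, b):
    """D = 1 - 2|A intersect B| / (|A| + |B|)."""
    return 1.0 - 2.0 * len(a & b) / (len(a) + len(b))

def masi(a, b):
    """M = m * J, with monotonicity weight m as defined in the text."""
    if a == b:
        m = 0.0
    elif a <= b or b <= a:
        m = 0.33  # one set is a subset of the other
    elif a & b:
        m = 0.67  # intersection and both set differences non-null
    else:
        m = 1.0   # disjoint sets
    return m * jaccard(a, b)
```

On the example sets, `jaccard` gives 0.5, `dice` gives 1/3 and `masi` gives 0.67 × 0.5 = 0.335, matching the (rounded) values in the text.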
{
"text": "Krippendorff's \u03b1 is defined as \u03b1 = 1 \u2212 D o /D e , where D o is the observed disagreement and D e is the disagreement that is expected by chance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "D_o = [1/(c(c\u22121))] \u03a3_j \u03a3_k \u03a3_k' n_jk n_jk' d_kk' and D_e = [1/(c(c\u22121))] \u03a3_k \u03a3_k' n_k n_k' d_kk'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "In the above formulae, c is the number of coders, n_jk is the number of times item j is classed as category k, n_k is the number of times any item is classed as category k, and d_kk' is the distance between categories k and k'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
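The \u03b1 computation above can be sketched as follows. This is an illustrative implementation, not the authors' code; normalisation conventions for D_o and D_e vary slightly across presentations of Krippendorff's \u03b1, and here observed disagreement is averaged within items while expected disagreement uses the pooled label counts.

```python
from collections import Counter

def krippendorff_alpha(items, distance):
    """Krippendorff's alpha = 1 - D_o / D_e.

    items: list of items, each a sequence of the category labels the
        coders assigned to that item (same number of coders per item).
    distance: function d(k, k') giving the distance between categories.
    """
    n_items = len(items)
    c = len(items[0])  # coders per item
    pooled = Counter(label for item in items for label in item)
    total = n_items * c
    # Observed disagreement: pairwise distances within each item,
    # averaged over all items and coder pairs.
    d_o = sum(distance(k, k2)
              for item in items
              for k in item for k2 in item) / (n_items * c * (c - 1))
    # Expected disagreement: pairwise distances over the pooled labels.
    d_e = sum(pooled[k] * pooled[k2] * distance(k, k2)
              for k in pooled for k2 in pooled) / (total * (total - 1))
    return 1.0 - d_o / d_e
```

With a nominal distance, perfect agreement gives \u03b1 = 1, and agreement at chance level gives \u03b1 = 0, as described in the text.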
{
"text": "Like \u03ba, Krippendorff's \u03b1 is 1 when there is perfect agreement, 0 when the observed agreement is only what was expected by chance, negative when observed agreement is less than expected by chance and positive when observed agreement is greater than expected by chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Krippendorff 's Alpha",
"sec_num": "5.2"
},
{
"text": "We ran a machine learning experiment using 10-fold cross-validation and the memory-based learner IBk (with k=6), using the Weka toolkit (Witten and Frank, 2000). The performance is shown in Tables 1 and 2. To position these results we compare them with three baseline lower bounds and the human performance upper bound in Table 3. As Table 3 shows, our machine learning approach performs much better than the baselines on all the agreement metrics, and is indeed closer to human performance than to any of the baselines. The MUC evaluation appears to produce highly inflated results on our task: when there is a small set of co-reference classes and one of these classes contains 70% of data points, it takes only a small number of missing links to correct annotations. This results in unreasonably high values, particularly for the majority-class baseline of labelling every data point as Current-Paper. We believe that the \u03b1 metrics provide a much more realistic estimate of the difficulty of the task and the relative performances of different approaches. Table 4 shows the performance of the machine learner for each of the three types of linguistic expressions considered. Pronouns are the easiest to resolve, with on average 90% resolved correctly (an agreement with the human gold standard of \u03b1 = .71). This drops to 85% (\u03b1 = .68) for definite descriptions and demonstratives, and further to 78% (\u03b1 = .63) for re-. While all the features contributed to the reported results, the most important features (in terms of information gain) for deciding attribution to a citation were the paragraph-level citation count 3(b), the distance features 2(a,b,c,d), the rank 3(a) and the Hobbs' prediction 1(d). The most important features for deciding attribution to the current paper were the distance features 2(a,c,e), the rank 3(a) and the Hobbs' prediction 1(d).",
"cite_spans": [
{
"start": 137,
"end": 161,
"text": "(Witten and Frank, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 361,
"end": 368,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1086,
"end": 1093,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 1665,
"end": 1683,
"text": "features 2(a,b,c,d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation Results",
"sec_num": "6"
},
{
"text": "To demonstrate the use of automatic scientific attribution classification, we studied its utility for one well known discourse annotation task: Argumentative Zoning (Teufel and Moens, 2002) . Argumentative Zoning (AZ) is the task of applying one of seven discourse level tags (Figure 1) to each sentence in a scientific paper.",
"cite_spans": [
{
"start": 165,
"end": 189,
"text": "(Teufel and Moens, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 276,
"end": 286,
"text": "(Figure 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "These categories model several aspects of scientific papers: from the distinction of segments by who an idea is attributed to (Own -Other -Background), to the judgement of how the author's work relates to other work (Basis -Contrast), to the rhetorical status of high-level discourse goals (statement of Aim; overview of section structure (Textual)). Some of these categories (Background, Other and Own) occur in zones that span many sentences. Other categories typically occur in short zones, often just a single sentence (Textual, Aim, Contrast, Basis).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "In all work to date, classification of sentences into one of the AZ categories has been performed on the basis of features extracted from within the sentence, and a few contextual features such as section heading and location in document. Scientific attribution links previously unresolved noun phrases or pronouns in the sentence to citations. As this provides the machine learner with more information, AZ results should improve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "The evaluation corpus used is the one from Teufel and Moens (2002) . It consists of 80 conference papers in computational linguistics, containing around 12000 sentences. Each of these is manually tagged as one of {OWN, OTH, BKG, BAS, AIM, CTR, TXT}. The reliability observed is reasonable (Kappa=0.71).",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "Teufel and Moens (2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AZ Data",
"sec_num": "7.1"
},
{
"text": "Following Teufel and Moens (2002) , we used supervised ML using features extracted by shallow processing (POS tagging and pattern matching):",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Teufel and Moens (2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Lexical (cue phrase) features consist of three features: the first models occurrence of about 1700 manually identified scientific cue phrases (such as \"in this paper\"). The cue phrases are classified into semantic groups. The second models the main verb of the sentence, by lookup in a verb lexicon organised by 13 main clusters of verb types (e.g. \"change verbs\"), and the third models the likely subject of the sentence, by classifying them either as the authors, or other researchers, or none of the above, using an extensive lexicon of regular expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Content word features model occurrence and density of content words in the sentences, where content words are either defined as non-stoplist words in the subsection heading preceding the sentence, or as words with a high TF*IDF score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Linguistic features include (complex) tense, voice, and presence of an auxiliary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Citation features detect properties of formal citations in text, such as the occurrence of authors' names in text, the position of a citation in text, and whether the citation is a self citation (i.e., includes any of the authors of the paper itself).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Location features: Rhetorical roles are expected at certain places in the document, for instance, background sentences are more likely to occur at the beginning of the text, and goal statements often occur after about a fifth of the paper. We model this by splitting the text into ten segments and assigning each sentence to the segment it is located in. We also use the section heading as a contextual feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "Some categories tend to occur in blocks (e.g., Own, Other, Background), and the context in terms of the label of the previous sentence has good predictive value. We model this (the [Table 5 caption: Improvement on AZ from using automatic scientific attribution classification.]",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "so-called History feature) by running the classifier twice, and including the prediction for the previous sentence as a feature the second time. Due to practical considerations, we obtained our linguistic features using the RASP part of speech tagger (Briscoe and Carroll, 1995) , when in previous work we used the LT TTT (Grover et al., 2000) . We would not expect this to influence results much, however. Another difference is that we use around 1700 additional cue phrases acquired from previous work on another discourse task 4 (Teufel et al., 2006) .",
"cite_spans": [
{
"start": 251,
"end": 278,
"text": "(Briscoe and Carroll, 1995)",
"ref_id": "BIBREF1"
},
{
"start": 322,
"end": 343,
"text": "(Grover et al., 2000)",
"ref_id": "BIBREF4"
},
{
"start": 532,
"end": 553,
"text": "(Teufel et al., 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "In addition to these features, we use four features obtained from the scientific attribution task described in this paper:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7.2"
},
{
"text": "\u2022 Whether there is any reference to current work in the sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scientific Attribution Features:",
"sec_num": null
},
{
"text": "\u2022 Whether there is any reference to any specific citation in the sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scientific Attribution Features:",
"sec_num": null
},
{
"text": "\u2022 Whether there is any reference in the sentence to work that is in neither the current paper nor any specific citation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scientific Attribution Features:",
"sec_num": null
},
{
"text": "\u2022 Which of these, if any, is in subject position",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scientific Attribution Features:",
"sec_num": null
},
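The four sentence-level attribution features listed above can be derived from the per-markable label sets produced by the attribution classifier. The sketch below is illustrative (the function name, the list-of-label-sets input and the feature encoding are our own assumptions, not the paper's implementation).

```python
def attribution_features(markables, subject_index=None):
    """Derive the four sentence-level features from attribution labels.

    markables: list of label sets, one per resolved markable in the
        sentence; labels are reference ids, "Current-Paper", or the
        back-off "No-Specific-Paper".
    subject_index: index of the markable in subject position, if any.
    """
    specific = {"Current-Paper", "No-Specific-Paper"}
    feats = {
        # Any reference to current work in the sentence?
        "ref_to_current": any("Current-Paper" in m for m in markables),
        # Any reference to a specific citation?
        "ref_to_citation": any(m - specific for m in markables),
        # Any reference to neither the current paper nor a citation?
        "ref_to_no_specific": any(m == {"No-Specific-Paper"} for m in markables),
    }
    # Which of these, if any, is in subject position?
    if subject_index is None:
        feats["subject"] = "none"
    else:
        subj = markables[subject_index]
        if "Current-Paper" in subj:
            feats["subject"] = "current"
        elif subj == {"No-Specific-Paper"}:
            feats["subject"] = "no-specific"
        else:
            feats["subject"] = "citation"
    return feats
```

A sentence whose subject is attributed to the current paper but which also mentions a cited work would thus fire both `ref_to_current` and `ref_to_citation`, with `subject` set to `"current"`.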
{
"text": "Our aim is to explore whether these features obtained from the scientific attribution task influence machine learning performance on AZ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scientific Attribution Features:",
"sec_num": null
},
{
"text": "We ran five different machine learners with and without the four scientific attribution features (c.f., \u00a77.2). Note that our labelled data for the attribution task does not overlap with the 80 papers in the AZ corpus, and all attribution predictions used in features for this AZ experiment are obtained entirely from unseen (and indeed unlabelled) data based on the model learnt on 10 papers (c.f., \u00a76). The learners we used (with default Weka settings) are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "\u2022 NB: Naive Bayes learner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "\u2022 HNB: Hidden Naive Bayes learner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "\u2022 IBk: Memory based learner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "\u2022 J48: Decision tree based learner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "\u2022 STACKING: combining NB and J48 classifiers with the stacking method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "As mentioned under the History feature above, we run each learner twice, the second time including the machine learning prediction for the previous sentence. As in Teufel and Moens (2002) for NB, we noticed a slight improvement in performance when using the history feature (between .005 and .01 on both \u03ba and Macro-F for all learners). Including the four scientific attribution features improved performance for all the learners, as shown in Table 5.",
"cite_spans": [
{
"start": 164,
"end": 187,
"text": "Teufel and Moens (2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 443,
"end": 450,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "For a more detailed view of where the improvement comes from, refer to Table 6, which shows precision, recall and f-measure per category for our best learner. The biggest improvements from using attribution features are for the categories Other, Aim and Bas. The improvement in Other was to be expected, as this zone is directly related to the attribution classification. The large improvements in Aim and Bas are good news, as these are amongst our most informative rhetorical categories for downstream tasks. Our best results of Kappa=0.48 and Macro-F=0.53 are better than the best previously published results on this task (Kappa=0.45 and Macro-F=0.50 in Teufel and Moens (2002)).",
"cite_spans": [
{
"start": 658,
"end": 681,
"text": "Teufel and Moens (2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "Our results improve on the results of Teufel and Moens (2002) (reproduced in Table 7), both overall and for each individual category.",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "Teufel and Moens (2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "AZ results",
"sec_num": "7.3"
},
{
"text": "We have described a new reference task, deciding scientific attribution, and demonstrated high human agreement (\u03b1 > 0.8) on this task. Our machine learning solution using shallow features achieves an agreement of \u03b1 M = 0.68 with the human gold standard, increasing to \u03b1 M = 0.71 if only pronouns need to be resolved. We have also demonstrated that information about scientific attribution improves results for a discourse classification task (Argumentative Zoning). We believe that similar improvements can be achieved on other discourse annotation tasks in the scientific literature domain. In particular, we plan to investigate the use of scientific attribution information for the citation function classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "We use a list of around 40 research methodology related nouns from Teufel and Moens (2002), e.g., \"study, account, investigation, result\". These are nouns we are particularly interested in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See the description in \u00a75.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Memory-based learning (IBk) gave better results on this task than the other learners (NB, HNB, J48); cf. \u00a77.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These cues are acquired manually from files that are not part of the AZ evaluation corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the EPSRC project SciBorg (EP/C010035/1, Extracting the Science from Scientific Publications).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reference directed indexing: Redeeming relevance for subject search in citation indexes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bradshaw",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ECDL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bradshaw. 2003. Reference directed indexing: Redeeming relevance for subject search in citation indexes. In Proc. of ECDL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Developing and evaluating a probabilistic LR parser of part-ofspeech and punctuation labels",
"authors": [
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of IWPT-95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Briscoe and J. Carroll. 1995. Developing and evaluating a probabilistic LR parser of part-of-speech and punctuation labels. In Proc. of IWPT-95, Prague / Karlovy Vary, Czech Republic.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Citation Indexing: Its Theory and Application in Science, Technology and Humanities",
"authors": [
{
"first": "E",
"middle": [],
"last": "Garfield",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Garfield. 1979. Citation Indexing: Its Theory and Application in Science, Technology and Humanities. J. Wiley, New York, NY.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Citeseer: An automatic citation indexing system",
"authors": [
{
"first": "C",
"middle": [
"L"
],
"last": "Giles",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lawrence",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the Third ACM Conference on Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. L. Giles, K. Bollacker, and S. Lawrence. 1998. Citeseer: An automatic citation indexing system. In Proc. of the Third ACM Conference on Digital Libraries.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "LT TTT -A flexible tokenisation tool",
"authors": [
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Matheson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of LREC-00",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Grover, C. Matheson, A. Mikheev, and M. Moens. 2000. LT TTT -A flexible tokenisation tool. In Proc. of LREC-00, Athens, Greece.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TREC 2004 genomics track overview",
"authors": [
{
"first": "W",
"middle": [],
"last": "Hersh",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bhuptiraju",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kraemer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Hersh, R. Bhuptiraju, L. Ross, P. Johnson, A. Cohen, and D. Kraemer. 2004. TREC 2004 genomics track overview. In Proc. of TREC.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Resolving Pronoun References",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1986,
"venue": "Readings in Natural Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hobbs. 1986. Resolving Pronoun References. In Readings in Natural Language, Grosz, B., Sparck-Jones, K. and Webber, B. (eds.) Morgan Kaufmann.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic reference resolution in astronomy articles",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of 20th International CODATA Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Kim and B. Webber. 2006. Automatic reference resolution in astronomy articles. In Proc. of 20th International CODATA Conference, Beijing, China.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Content Analysis: An introduction to its methodology",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 1980. Content Analysis: An introduction to its methodology. Sage Publications, Beverly Hills.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards multipaper summarization using reference information",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nanba",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of IJCAI-99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Nanba and M. Okumura. 1999. Towards multipaper summarization using reference information. In Proc. of IJCAI-99.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Citing statements: Computer recognition and use to improve retrieval. Information Processing and Management",
"authors": [
{
"first": "J",
"middle": [],
"last": "O'connor",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "18",
"issue": "",
"pages": "125--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. O'Connor. 1982. Citing statements: Computer recognition and use to improve retrieval. Information Processing and Management, 18(3):125-131.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Computing reliability for coreference annotation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of LREC-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Passonneau. 2004. Computing reliability for coreference annotation. In Proc. of LREC-04, Lisbon, Portugal.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Whose idea was this? Deciding attribution in scientific literature",
"authors": [
{
"first": "A",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Teufel",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of the 6th Discourse Anaphora and Anaphor Resolution Colloquium (DAARC'07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio and R. Artstein. 2005. Annotating (anaphoric) ambiguity. In Proc. of the Corpus Linguistics Conference, Birmingham, UK. A. Siddharthan and S. Teufel. 2007. Whose idea was this? Deciding attribution in scientific literature. In Proc. of the 6th Discourse Anaphora and Anaphor Resolution Colloquium (DAARC'07), Lagos, Portugal.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Summarising scientific articles -experiments with relevance and rhetorical status",
"authors": [
{
"first": "S",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "4",
"pages": "409--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Teufel and M. Moens. 2002. Summarising scientific articles -experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-446.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic classification of citation function",
"authors": [
{
"first": "S",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tidhar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EMNLP-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Teufel, A. Siddharthan, and D. Tidhar. 2006. Automatic classification of citation function. In Proc. of EMNLP-06, Sydney, Australia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A model-theoretic coreference scoring scheme",
"authors": [
{
"first": "M",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of the 6th Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proc. of the 6th Message Understanding Conference, San Francisco.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations",
"authors": [
{
"first": "I",
"middle": [],
"last": "Witten",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Witten and E. Frank. 2000. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Rank of CIT (how many other reference list items are closer) (b) Number of times CIT is cited in the paragraph (c) Number of times CIT is cited in the whole paper (d) Current Section heading (this feature has 5 values: Introduction, Methods, Results, Conclusions, Unrecognised) 4. Agreement: (a) Agreement Number (He/She & single author non-self citation) (b) Agreement Person (First & Current/Self Citation, Third and Not-Current)",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "AZ Annotation scheme: how the authors relate to other work (Contrast - Basis)",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"num": null,
"text": "Agreement with Human Gold Standard",
"content": "<table><tr><td>\u2022 BASE M (Major Class): All data points are</td></tr><tr><td>labelled CURRENT-WORK</td></tr><tr><td>\u2022 BASE P (Previous): Data points are tagged</td></tr><tr><td>with the most recent label</td></tr><tr><td>\u2022 BASE H (Hobbs' Prediction): Data points</td></tr><tr><td>are tagged with the label found by Hobbs'</td></tr><tr><td>(1986) search (Search left to right in each</td></tr><tr><td>sentence, starting from current sentence,</td></tr><tr><td>then considering previous sentences)</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"6\">: Evaluation using MUC-6/7 software</td></tr><tr><td>Algo</td><td>\u03b1-N</td><td>\u03b1-J</td><td>\u03b1-D</td><td>\u03b1-M</td><td colspan=\"2\">%Agr * muc-f</td></tr><tr><td colspan=\"2\">BaseM .002</td><td>.001</td><td>.001</td><td>.001</td><td>69%</td><td>.934</td></tr><tr><td colspan=\"6\">BaseP -.101 -.083 -.081 -.077 19%</td><td>.894</td></tr><tr><td colspan=\"2\">BaseH .387</td><td>.397</td><td>.399</td><td>.407</td><td>72%</td><td>.910</td></tr><tr><td>IBk</td><td colspan=\"6\">.654 .669 .673 .677 85% .913</td></tr><tr><td colspan=\"2\">Hum * * .806</td><td>.808</td><td>.808</td><td>.809</td><td>91%</td><td>.965</td></tr><tr><td colspan=\"7\">* % Agreement, the conservative estimate measured using the Nominal metric</td></tr><tr><td colspan=\"7\">* * Agreement between two human annotators over a subset of the corpus (3 files, 1429 data points)</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "",
"content": "<table><tr><td>: Comparison with Baselines and Human</td></tr><tr><td>Performance (Averaged results)</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"text": "Results for different markable types",
"content": "<table><tr><td>Category</td><td>Description</td></tr><tr><td colspan=\"2\">Background Generally accepted background knowl-</td></tr><tr><td/><td>edge</td></tr><tr><td>Other</td><td>Specific other work</td></tr><tr><td>Own</td><td>Own work: method, results, future</td></tr><tr><td/><td>work</td></tr><tr><td>Aim</td><td>Specific research goal</td></tr><tr><td>Textual</td><td>Textual section structure</td></tr><tr><td>Contrast</td><td>Contrast, comparison, weakness of</td></tr><tr><td/><td>other solution</td></tr><tr><td>Basis</td><td>Other work provides basis for own work</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF9": {
"num": null,
"text": "Best AZ results using Stacked classifier: with and without Attribution Features.",
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF11": {
"num": null,
"text": "Teufel and Moens (2002)'s best AZ results (Naive Bayes Classifier).",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}