{
"paper_id": "W99-0311",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:09:10.260830Z"
},
"title": "Discourse-level argumentation in scientific articles: human and automatic annotation",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": "",
"affiliation": {
"laboratory": "HCRC Language Technology Group",
"institution": "University of Edinburgh",
"location": {}
},
"email": "s.teufel@ed.ac.uk"
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": "",
"affiliation": {
"laboratory": "HCRC Language Technology Group",
"institution": "University of Edinburgh",
"location": {}
},
"email": "m.moens@ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we present a rhetorically defined annotation scheme which is part of our corpus-based method for the summarisation of scientific articles. The annotation scheme consists of seven non-hierarchical labels which model prototypical academic argumentation and expected intentional 'moves'. In a large-scale experiment with three expert coders, we found the scheme stable and reproducible. We have built a resource consisting of 80 papers annotated with the scheme, and we show that this kind of resource can be used to train a system to automate the annotation work.",
"pdf_parse": {
"paper_id": "W99-0311",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we present a rhetorically defined annotation scheme which is part of our corpus-based method for the summarisation of scientific articles. The annotation scheme consists of seven non-hierarchical labels which model prototypical academic argumentation and expected intentional 'moves'. In a large-scale experiment with three expert coders, we found the scheme stable and reproducible. We have built a resource consisting of 80 papers annotated with the scheme, and we show that this kind of resource can be used to train a system to automate the annotation work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Work on summarisation has suffered from a lack of appropriately annotated corpora that can be used for building, training and evaluating summarisation systems. Typically, corpus work in this area has taken as its starting point texts and their target summaries: abstracts written by the researchers, supplied by the original authors or provided by professional abstractors. Training a summarisation system then involves learning the properties of sentences in those abstracts and using this knowledge to extract similar abstract-worthy sentences from unseen texts. In this scenario, system performance or development progress can be evaluated by taking texts in a test sample and comparing the sentences extracted from these texts with the sentences in the target abstract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But this approach has a number of shortcomings. First, sentence extraction on its own is a very general methodology, which can produce extracts that are incoherent or under-informative, especially when used for high-compression summarisation (i.e. reducing a document to a small percentage of its original size). It is difficult to overcome this problem, because once sentences have been extracted from the source text, the context that is needed for their interpretation is no longer available and cannot be used to produce more coherent abstracts (Spärck Jones, 1998).",
"cite_spans": [
{
"start": 551,
"end": 571,
"text": "(Spärck Jones, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed solution to this problem is to extract sentences but also to classify them into one of a small number of possible argumentative roles, reflecting whether the sentence expresses a main goal of the source text, a shortcoming in someone else's work, etc. The summarisation system can then use this information to generate template-like abstracts: Main goal of the text:... ; Builds on work by:... ; Contrasts with:... ; etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, the question of what constitutes a useful gold standard has not yet been solved satisfactorily. Researchers developing corpus resources for summarisation work have often defined their own gold standard, relying on their own intuitions (see, e.g. Luhn, 1958; Edmundson, 1969) or have used abstracts supplied by authors or by professional abstractors as their gold standard (e.g. Kupiec et al., 1995; Mani and Bloedorn, 1998) . Neither approach is very satisfactory. Relying only on your own intuitions inevitably creates a biased resource; indeed, Rath et al. (1961) report low agreement between human judges carrying out this kind of task. On the other hand, using abstracts as targets is not necessarily a good gold standard for comparison of the systems' results, although abstracts are the only kind of gold standard that comes for free with the papers. Even if the abstracts are written by professional abstractors, there are considerable differences in length, structure, and information content. This is due to differences in the common abstract presentation style in different disciplines and to the projected use of the abstracts (cf. Liddy, 1991) . In the case of our corpus, an additional problem was the fact that the abstracts are written by the authors themselves and thus susceptible to differences in individual writing style.",
"cite_spans": [
{
"start": 254,
"end": 265,
"text": "Luhn, 1958;",
"ref_id": "BIBREF8"
},
{
"start": 266,
"end": 282,
"text": "Edmundson, 1969)",
"ref_id": "BIBREF2"
},
{
"start": 386,
"end": 406,
"text": "Kupiec et al., 1995;",
"ref_id": "BIBREF6"
},
{
"start": 407,
"end": 431,
"text": "Mani and Bloedorn, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 555,
"end": 573,
"text": "Rath et al. (1961)",
"ref_id": null
},
{
"start": 1146,
"end": 1163,
"text": "(cf. Liddy, 1991)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the task of summarisation and relevance decision between similar papers, however, it is essential that the information contained in the gold standard is comparable between papers. In our approach, the vehicle for comparability of information is similarity in argumentative roles of the associated sentences. We argue that it is more difficult to find the kind of information that preserves similarity of argumentative roles, and that it is not guaranteed that it will occur in the abstract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A related problem concerns fair evaluation of the extraction methodology. The evaluation of extracted material necessarily consists of a comparison of sentences, whereas one would really want to compare the informational content of the extracted sentences and the target abstract. Thus it will often be the case that a system extracts a sentence which in that form does not appear in the supplied abstract (resulting in a low performance score) but which is nevertheless an abstract-worthy sentence. The mismatch often arises simply because a similar idea is expressed in the supplied abstract in a very different form. But comparison of content is difficult to perform: it would require mapping sentences into some underlying meaning representation and then comparing these to the representations of the sentences in the gold standard. As this is technically not feasible, system performance is typically measured against a fixed gold standard (e.g. the aforementioned abstracts), which is ultimately undesirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed solution to this problem is to build a corpus which details not only what the abstract-worthy sentences are but also what their argumentative role is. This corpus can then be used as a resource to build a system to similarly classify sentences in unseen texts, and to evaluate that system. This paper reports on the development of a set of such argumentative roles that we have been using in our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, we employ human intuition to annotate argumentatively defined information. We ask our annotators to classify every sentence in the source text in terms of its argumentative role (e.g. that it expresses the main goal of the source text, or identifies open problems in earlier work, etc). Under this scenario, system evaluation is no longer a comparison of extracted sentences against a supplied abstract, or against a single sentence that was chosen as expressing (e.g.) the main goal of the source text. Instead, every sentence in the source text which expresses the main goal will have been identified, and the system's performance is evaluated against that classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Of course, having someone annotate text in this way may still lead to a biased or careless annotation. We therefore needed an annotation scheme which is simple enough to be usable in a stable and intuitive way for several annotators. This paper also reports on how we tested the stability of the annotation scheme we developed. A second design criterion for our annotation scheme was that we wanted the roles to be annotated automatically. This paper reports on preliminary results which show that the annotation process can indeed be automated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To summarise, we have argued that discourse structure information will improve summarisation. Other researchers (Ono et al., 1994; Marcu, 1997) have argued similarly, although most previous work on discourse-based summarisation follows a different discourse model, namely Rhetorical Structure Theory (Mann and Thompson, 1987) . In contrast to RST, we stress the importance of rhetorical moves which are global to the argumentation of the paper, as opposed to more local RST-type relations. Our categories are not hierarchical, and they are much less fine-grained than RST-relations. As mentioned above, we wanted them to a) provide context information for flexible summarisation, b) provide a higher degree of comparability between papers, and c) provide a fairer evaluation of superficially different sentences.",
"cite_spans": [
{
"start": 112,
"end": 130,
"text": "(Ono et al., 1994;",
"ref_id": "BIBREF13"
},
{
"start": 131,
"end": 143,
"text": "Marcu, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 300,
"end": 325,
"text": "(Mann and Thompson, 1987)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of this paper, we will first describe how we chose the categories (section 2). Second, we had to construct training and evaluation material such that we could be sure that the proposed categorisation yielded a reliable resource of annotated text to train a system against, a gold standard. The human annotation experiments are reported in section 3. Finally, in section 4, we describe some of the automated annotation work which we have started recently and which uses a corpus annotated according to our scheme as its training material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The annotation scheme",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The domain in which we work is that of scientific research articles, in particular computational linguistics articles. We settled on this domain for a number of reasons, while aiming at a scheme which we hope to be applicable in a range of disciplines. Despite its heterogeneity, our collection of papers does exhibit predictable rhetorical patterns of scientific argumentation. To analyse these patterns we used Swales' (1990) CARS (Creating a Research Space) model as our starting point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The annotation scheme we designed is summarised in Figure 1. The seven categories describe argumentative roles with respect to the overall communicative act of the paper. They are to be read as mutually exclusive labels, one of which is attributed to each sentence in a text. There are two kinds of categories in this scheme: basic categories and non-basic categories. Basic categories are defined by attribution of intellectual ownership; they distinguish between:",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "\u2022 statements which are presented as generally accepted (BACKGROUND);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "\u2022 statements which are attributed to other, specific pieces of research outside the given paper, including the authors' own previous work (OTHER);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "\u2022 statements which describe the authors' own new contributions (OWN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The four additional (non-basic) categories are more directly based on Swales' theory. The most important of these is AIM, as this move on its own is already a good characterisation of the entire paper, and thus very useful for the generation of abstracts. Another category is TEXTUAL, which provides information about section structure that might prove helpful for subsequent search steps. Two further moves have to do with the author's attitude towards previous research, namely BASIS and CONTRAST. We expect this kind of information to be useful for the creation of typed links for bibliometric search tools and for the automatic determination of rival approaches in the field and the intellectual ancestry of methodologies (cf. Garfield's (1979) classification of the function of citation within researchers' papers).",
"cite_spans": [
{
"start": 736,
"end": 753,
"text": "Garfield's (1979)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The structure in Figure 2, for example, displays a common rhetorical pattern of scientific argumentation which we found in many introductions. A BACKGROUND segment, in which the history and the importance of the task are discussed, is followed by a longer sequence of OTHER sentences, in which specific prior work is described in a neutral way. This discussion usually terminates in a criticism of the prior work, thus motivating the authors' own work presented in the paper. The next sentence typically states the specific goal or contribution of the paper, often in a formulaic way (Myers, 1992). Such regularities, where the segments are contiguous, non-overlapping and non-hierarchical, can be expressed well with our category labels. Whereas non-basic categories typically form short segments of one or two sentences, the basic categories form much larger segments of sentences with the same rhetorical role.",
"cite_spans": [
{
"start": 588,
"end": 601,
"text": "(Myers, 1992)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "3 Human Annotation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "To ensure that our coding scheme leads to less biased annotation than some of the other resources available for building summarisation systems, and to ensure that other researchers besides ourselves can use it to replicate our results on different types of texts, we wanted to examine two properties of our scheme: stability and reproducibility (Krippendorff, 1980). Stability is the extent to which an annotator will produce the same classifications at different times. Reproducibility is the extent to which different annotators will produce the same classification. We use the Kappa coefficient (Siegel and Castellan, 1988) to measure stability and reproducibility. The rationale for using Kappa is explained in (Carletta, 1996). We describe the studies used to evaluate stability and reproducibility in more detail in (Teufel et al., To Appear). In brief, 48 papers were annotated by three extensively trained annotators. The training period was four weeks, consisting of 5 hours of annotation per week. There were written instructions (guidelines) of 17 pages. Skim-reading and annotation of an average length (3800 word) paper typically took 20-30 minutes. The studies show that the training material is reliable. In particular, the basic annotation scheme is stable (K=.82, .81, .76; N=1220; k=2 for all three annotators) and reproducible (K=.71, N=4261, k=3), where k denotes the number of annotators, N the number of sentences annotated, and K gives the Kappa value. The full annotation scheme is stable (K=.83, .79, .81; N=1248; k=2 for all three annotators) and reproducible (K=.78, N=4031, k=3). Overall, reproducibility and stability for trained annotators do not quite reach the levels found for, for instance, the best dialogue act coding schemes, which typically reach Kappa values of around K=.80 (Carletta et al., 1997; Jurafsky et al., 1997). Our annotation requires more subjective judgements and is possibly more cognitively complex. Our reproducibility and stability results are in the range which Krippendorff (1980) describes as giving marginally significant results for reasonable size data sets when correlating two coded variables which would show a clear correlation if there were perfect agreement. As our requirements are less stringent than Krippendorff's, we find the level of agreement which we achieved acceptable. Figure 3, which gives the overall distribution of categories, shows that OWN is by far the most frequent category. Figure 4 reports how well the four non-basic categories could be distinguished from all other categories, measured by Krippendorff's diagnostics for category distinctions (i.e. collapsing all other distinctions). When compared to the overall reproducibility of .71, we notice that the annotators were good at distinguishing AIM and TEXTUAL, and less good at determining BASIS and CONTRAST. This might have to do with the location of those types of sentences in the paper: AIM and TEXTUAL are usually found at the beginning or end of the introduction section, whereas CONTRAST, and even more so BASIS, are usually interspersed within longer stretches of OWN. As a result, these categories are more exposed to lapses of attention during annotation.",
"cite_spans": [
{
"start": 345,
"end": 365,
"text": "(Krippendorff, 1980)",
"ref_id": "BIBREF5"
},
{
"start": 599,
"end": 627,
"text": "(Siegel and Castellan, 1988)",
"ref_id": "BIBREF16"
},
{
"start": 716,
"end": 732,
"text": "(Carletta, 1996)",
"ref_id": "BIBREF1"
},
{
"start": 824,
"end": 850,
"text": "(Teufel et al., To Appear)",
"ref_id": null
},
{
"start": 1818,
"end": 1841,
"text": "(Carletta et al., 1997;",
"ref_id": "BIBREF0"
},
{
"start": 1842,
"end": 1864,
"text": "Jurafsky et al., 1997)",
"ref_id": "BIBREF4"
},
{
"start": 2025,
"end": 2044,
"text": "Krippendorff (1980)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 2354,
"end": 2362,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2470,
"end": 2478,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotating full texts",
"sec_num": "3.1"
},
{
"text": "The fact that the annotators are good at determining AIM sentences is an important result: as AIM sentences constitute the best characterisation of the research paper for the summarisation task at a very high compression (to 1.8% of the original text length), we are particularly interested in having them annotated consistently in our training material. This result is clearly in contrast to studies which conclude that humans are not very reliable at this kind of task (Rath et al., 1961). We attribute this difference to a difference in our instructions. Whereas the subjects in Rath et al.'s experiment were asked to look for the most relevant sentences, our annotators had to look for specific argumentative roles, which seems to have eased the task. In addition, our guidelines give very specific instructions for ambiguous cases.",
"cite_spans": [
{
"start": 469,
"end": 488,
"text": "(Rath et al., 1961)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OTHER",
"sec_num": null
},
{
"text": "These reproducibility values are important because Kappa acts as a good evaluation measure: unlike percentage agreement, it factors random agreement out. It also provides a realistic upper bound on performance: if the machine is treated as another coder, and if reproducibility does not decrease, then the machine has reached the theoretically best result, considering the cognitive difficulty of the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OTHER",
"sec_num": null
},
{
"text": "Annotating texts with our scheme is time-consuming, so we wanted to determine if there was a more efficient way of obtaining hand-coded training material, namely by annotating only parts of the source texts. For example, the abstracts, introductions and conclusions of source texts are often like \"condensed\" versions of the contents of the entire paper and might be good areas to restrict annotation to. Alternatively, it might be a good idea to restrict annotation to the first 20% or the last 10% of any given text. Yet another possibility for restricting the range of sentences to be annotated is based on the 'alignment' idea introduced in (Kupiec et al., 1995): a simple surface measure determines sentences in the document that are maximally similar to sentences in the abstract.",
"cite_spans": [
{
"start": 643,
"end": 664,
"text": "(Kupiec et al., 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating parts of texts",
"sec_num": "3.2"
},
{
"text": "Obviously, any of these strategies of area restriction would give us fewer gold standard sentences per paper, so we would have to make sure that we still had enough candidate sentences for all seven categories. On the other hand, because these areas could well be the most clearly written and informationally rich sections, it might be the case that the quality of the resulting gold standard is higher. In this case we would expect the reliability of the coding in these areas to be higher in comparison to the reliability achieved overall, which in turn would result in higher accuracy when this task is done automatically. We did extensive experiments on this. Figure 5 shows reliability values for each of the annotated portions of text, and Figure 6 shows the composition in terms of our labels for each of the annotated portions of text. The implications for corpus preparation for abstract generation experiments can be summarised as follows. If one wants to avoid manually annotating entire papers but still make all argumentative distinctions, one can restrict the annotation to sentences appearing in the introduction section, even though annotators will find them slightly harder to classify (K=.69), or to all alignable abstract sentences, even if there are not many alignable abstract sentences detectable overall (around 50% of the sentences in the abstract), or to conclusion sentences, even if the coverage of argumentative categories is very restricted in the conclusions (mostly AIM and OWN sentences).",
"cite_spans": [],
"ref_spans": [
{
"start": 664,
"end": 672,
"text": "Figure 5",
"ref_id": null
},
{
"start": 746,
"end": 754,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Annotating parts of texts",
"sec_num": "3.2"
},
{
"text": "We also examined a fall-back option of annotating just the first 10% or last 5% of a paper (as not all papers in our collection have an explicitly marked introduction and conclusion section), but the reliability results for this were considerably worse (K=.66 and K=.63, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotating parts of texts",
"sec_num": "3.2"
},
{
"text": "Automatic annotation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "All the annotation work is obviously in aid of development work, in particular for the training of a system. We will provide a brief description of training results so as to show the practical viability of the proposed corpus preparation method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "Our training material is a collection of 80 conference papers and their summaries, taken from the Computation and Language E-Print Archive (http://xxx.lanl.gov/cmp-lg/). The training material contains 330,000 word tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The data is automatically preprocessed into XML format, and the following structural information is marked up: title, summary, headings, paragraph structure and sentences, citations in running text, and the reference list at the end of the paper. If one of the paper's authors also appears on the author list of a cited paper, then that citation is marked as a self-citation. Tables, equations, figures, captions and cross-references are removed and replaced by place holders. Sentence boundaries are automatically detected, and the text is POS-tagged according to the UPenn tagset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Annotation of rhetorical roles for all 80 papers (around 12,000 sentences) was provided by one of our human judges during the annotation study mentioned above. (Kupiec et al., 1995) use supervised learning to automatically adjust feature weights. Each document sentence receives scores for each of the features, resulting in an estimate for the sentence's probability to also occur in the summary. This probability is calculated for each feature value as a combination of the probability of the feature-value pair occurring in a sentence which is in the summary (successful case) and the probability that the feature-value pair occurs unconditionally.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Kupiec et al., 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We extend Kupiec et al.'s estimation of the probability that a sentence is contained in the abstract to the probability that it has rhetorical role R (cf. Figure 7): P(R | F1, ..., Fk) = P(R) * P(F1 | R) * ... * P(Fk | R) / (P(F1) * ... * P(Fk)), where P(R | F1, ..., Fk) is the probability that sentence s in the source text has rhetorical role R, given its feature values; P(R) the relative frequency of role R (constant); P(Fj | R) the probability of the feature-value pair occurring in a sentence which is in rhetorical class R; P(Fj) the probability that the feature-value pair occurs unconditionally; k the number of feature-value pairs; and Fj the j-th feature-value pair. Evaluation of the method relies on cross-validation: the model is trained on a training set of documents, leaving one document out at a time (the test document). The model is then used to assign each sentence a probability for each category R, and the category with the highest probability is chosen as the answer for the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The method",
"sec_num": "4.2"
},
{
"text": "The features we use in training (see Figure 8) are different from Kupiec et al.'s because we do not estimate overall importance in one step, but instead guess argumentative status first and determine importance later.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 46,
"text": "Figure 8)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "Many of our features can be read off directly from the way the corpus is encoded: our preprocessors determine sentence-boundaries and parse the reference list at the end. This gives us a good handle on structural and locational features, as well as on features related to citations. The syntactic features rely on determining the first finite verb in the sentence, which is done symbolically using POS-information. Heuristics are used to determine the tense and possible negation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "[Feature names from Figure 8: Cit-2, Syn-1, Syn-2, Syn-3, Syn-4, Sem-1, Sem-2, Sem-3, Cont-1, Cont-2]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "The semantic features rely on template matching. In the feature Sem-1, a hand-crafted lexicon is used to classify the verb into one of 20 Action Classes (cf. Figure 9, left half), if it is one of the 388 verbs contained in the lexicon. The feature Sem-2 encodes whether the agent of the action is most likely to refer to the authors, or to other agents, e.g. other researchers (177 templates). Heuristic rules determine that the agent is the subject in an active sentence, or the head of the by-phrase (if present) in a passive sentence. Sem-3 encodes various other formulaic expressions (indicator phrases (Paice, 1981), meta-comments (Zukerman, 1991)) in order to exploit explicit rhetorical phrases the authors might have used, cf. Figure 9, right half (414 templates).",
"cite_spans": [
{
"start": 609,
"end": 622,
"text": "(Paice, 1981)",
"ref_id": "BIBREF14"
},
{
"start": 639,
"end": 655,
"text": "(Zukerman, 1991)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 9",
"ref_id": null
},
{
"start": 737,
"end": 745,
"text": "Figure 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "The content features use the tf/idf method and title and header information for finding contentful words or phrases. In contrast to all other features they do not attempt to model the form or metadiscourse contained in the sentences but instead model their domain (object-level) contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.3"
},
{
"text": "When the Naive Bayesian Model is added to the pool of coders, the reproducibility drops from K=.71 to K=.55. This reproducibility value is equivalent to the value achieved by 6 human annotators with no prior training, as found in an earlier experiment (Teufel et al., To Appear) . Compared to one of the annotators, Kappa is K=.37, which corresponds to percentage accuracy of 71.2%. This number cannot be directly compared to experiments like Kupiec et al.'s because in their experiment a compression of around 3% was achieved whereas we classify each sentence into one of the categories.",
"cite_spans": [
{
"start": 252,
"end": 278,
"text": "(Teufel et al., To Appear)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Further analysis of our results shows that the system performs well on the frequent category OWN, cf. the confusion matrix in Figure 10. Indeed, as Figure 3 shows, OWN is so frequent that choosing OWN all the time gives us a seemingly hard-to-beat baseline with a high percentage agreement of 69% (Baseline 1). However, the Kappa statistic, which controls for expected random agreement, reveals just how bad that baseline really is: Kappa is K=-.12 (machine vs. one annotator). Random choice of categories according to the distribution of categories (Baseline 2) is a better baseline; Kappa for this baseline is K=0. AIM categories can be determined with a precision of 48% and a recall of 56% (cf. Figure 11). These values are more directly comparable to Kupiec et al.'s results of 44% co-selection of extracted sentences with alignable summary sentences. We assume that most of the sentences extracted by their method would have fallen into the AIM category. The other easily determinable category for the automatic method is TEXTUAL (p=55%; r=52%), whereas the results for the other non-basic categories are relatively lower, mirroring the results for humans.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 132,
"text": "Fig. reftab",
"ref_id": null
},
{
"start": 156,
"end": 164,
"text": "Figure 3",
"ref_id": null
},
{
"start": 766,
"end": 776,
"text": "Figure 11)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "As far as the individual features are concerned, we found the strongest heuristics to be location, type of header, citations, and the semantic classes (indicator phrases, agents and actions); syntactic and contentbased heuristics are the weakest. The first column in Figure 12 gives the predictiveness of the feature Figure 11 : Precision and recall per category on its own, in terms of kappa between machine and one annotator. Some of the weaker features are not predictive enough on their own to break the dominance of the prior; in that case, they behave just like Baseline 1 (K=-.12).",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 276,
"text": "Figure 12",
"ref_id": "FIGREF0"
},
{
"start": 317,
"end": 326,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "The second column gives kappa for experiments using all features except the given feature, i.e. the results if this feature is left out of the pool of fea- While not entirely satisfactory, these results might be taken as an indication that we have indeed managed to identify the right kinds of features for argumentative sentence classification. Taking the context into account should further increase results, as preliminary experiments with n-gram modelling have shown. In these experiments, we replaced the prior P(s E R) in Figure 7 with a n-gram based probability of that role occurring in the given context.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 536,
"text": "Figure 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "In this paper we have presented an annotation scheme for corpus based summarisation. In tests, we have found this annotation scheme to be stable and reproducible. On the basis of this scheme, we have created a new kind of resource for training summarisation systems: a corpus annotated with labels which indicate the argumentative role of each sentence in the text. Results of our training work show that the annotation work can be automated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The reliability of a dialogue structure coding scheme",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [
"C"
],
"last": "Kowtko",
"suffix": ""
},
{
"first": "Gwyneth",
"middle": [],
"last": "Doherty-Sneddon",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"H"
],
"last": "Anderson",
"suffix": ""
}
],
"year": 1997,
"venue": "Computatiorml Linguistics",
"volume": "23",
"issue": "1",
"pages": "13--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta, Amy Isard, Stephen Isard, Jacqueline C. Kowtko, Gwyneth Doherty-Sneddon, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computatiorml Linguistics, 23(1):13-31.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Assessing agreement on classification tasks: the kappa statistic",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta. 1996. Assessing agreement on classifica- tion tasks: the kappa statistic. Computational Lin- guistics, 22(2):249-254.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "New methods in automatic extracting",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Edmundson",
"suffix": ""
}
],
"year": 1969,
"venue": "Journal of the Association for Computing Machinery",
"volume": "16",
"issue": "2",
"pages": "264--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. P. Edmundson. 1969. New methods in automatic extracting. Journal of the Association for Computing Machinery, 16(2):264-285.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Citation indezing: its theory and application in science, thechnology and humanities",
"authors": [
{
"first": "E",
"middle": [],
"last": "Garfield",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Garfield. 1979. Citation indezing: its theory and ap- plication in science, thechnology and humanities. Wi- ley, New York.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Switchboard SWBD-DAMSL Shallow-Discourse-Function Annotation Coders Manual. University of Colorado, Institute of Cognitive Science",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Debra",
"middle": [],
"last": "Biasca",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Jurafsky, Elizabeth Shriberg, and Debra Bi- asca, 1997. Switchboard SWBD-DAMSL Shallow- Discourse-Function Annotation Coders Manual. Uni- versity of Colorado, Institute of Cognitive Science. TR-97-02.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Content analysis: an introduction to its methodology",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 1980. Content analysis: an intro- duction to its methodology. Sage Commtext series; 5. Sage, Beverly Hills London.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A trainable document summarizer",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"O"
],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 18th ACM-SIGIR Conference",
"volume": "",
"issue": "",
"pages": "68--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian Kupiec, Jan O. Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Pro- ceedings of the 18th ACM-SIGIR Conference, pages 68-73.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The discourse-level structure of empirical abstracts: an exploratory study. Information Processing and Management",
"authors": [
{
"first": "Elizabeth",
"middle": [
"DuRoss"
],
"last": "Liddy",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "27",
"issue": "",
"pages": "55--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth DuRoss Liddy. 1991. The discourse-level structure of empirical abstracts: an exploratory study. Information Processing and Management, 27(1):55- 81.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The automatic creation of literature abstracts",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Luhn",
"suffix": ""
}
],
"year": 1958,
"venue": "IBM Journal of Research and Development",
"volume": "2",
"issue": "2",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Machine learning of generic and user-focused summarization",
"authors": [
{
"first": "Inderjeet",
"middle": [],
"last": "Mani",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bloedorn",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Fifteenth National Conference on AI (AAAI-98)",
"volume": "",
"issue": "",
"pages": "821--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inderjeet Mani and Eric Bloedorn. 1998. Machine learn- ing of generic and user-focused summarization. In Proceedings of the Fifteenth National Conference on AI (AAAI-98), pages 821-826.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Rhetorical structure theory: description and construction of text structures",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "Natural Langua9 e Generation: New Results in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "85--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1987. Rhetorical structure theory: description and construc- tion of text structures. In G. Kempen, editor, Natural Langua9 e Generation: New Results in Artificial In- telligence, Psychology and Linguistics, pages 85-95, Dordrecht. Nijhoff.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "From discourse structures to text summaries",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 1997. From discourse structures to text summaries. In Proceedings of the ACL/EACL Work- shop on Intelligent Scalable Text Summarization.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "In this paper we report... -speech acts and scientific facts",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Myers",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Pragmatics",
"volume": "17",
"issue": "4",
"pages": "295--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Myers. 1992. In this paper we report... -speech acts and scientific facts. Journal of Pragmatics, 17(4):295-313.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Abstract generation based on rhetorical structure extraction",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Ka~uo",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Seijii",
"middle": [],
"last": "Miike",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th International conference on Computational Linguistics (COLING-94)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Ono, Ka~uo Sumita, and Seijii Miike. 1994. Ab- stract generation based on rhetorical structure extrac- tion. In Proceedings of the 15th International confer- ence on Computational Linguistics (COLING-94).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The automatic generation of literary abstracts: an approach based on the identification of self-indicating phrases",
"authors": [
{
"first": "Chris",
"middle": [
"D"
],
"last": "Paice",
"suffix": ""
}
],
"year": 1981,
"venue": "Information Retrieval Research",
"volume": "",
"issue": "",
"pages": "172--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris D. Paice. 1981. The automatic generation of lit- erary abstracts: an approach based on the identifi- cation of self-indicating phrases. In Robert Norman Oddy, S. E. Robertson, C. J. van Rijsbergen, and P. W. Williams, editors, Information Retrieval Re- search, pages 172-191. Butterworth, London.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The formation of abstracts by the selection of sentences",
"authors": [
{
"first": "G",
"middle": [],
"last": "Path",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Resnick",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Savage",
"suffix": ""
}
],
"year": 1961,
"venue": "American Documentation",
"volume": "12",
"issue": "2",
"pages": "139--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.J Path, A. Resnick, and T. R. Savage. 1961. The formation of abstracts by the selection of sentences. American Documentation, 12(2):139-143.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Nonparamettic statistics for the Behavioral Sciences",
"authors": [
{
"first": "Sidney",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "N",
"middle": [
"J"
],
"last": "Castellan",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sidney Siegel and N.J. Jr. Castellan. 1988. Nonparamet- tic statistics for the Behavioral Sciences. McGraw- Hill, second edition.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic summarising: factors and directions",
"authors": [
{
"first": "Karen Sp~rck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1998,
"venue": "AAAI Spring Symposium on Intelligent Text Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sp~rck Jones. 1998. Automatic summarising: factors and directions. In AAAI Spring Symposium on Intelligent Text Summarization.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Genre analysis: English in academic and research settings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Swales",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Swales. 1990. Genre analysis: English in academic and research settings. Cambridge University Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "To Appear. An annotation scheme for discourse-level argumentation in research articles",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics (EA CL-99)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel, Jean Carletta, and Marc Moens. To Ap- pear. An annotation scheme for discourse-level argu- mentation in research articles. In Proceedings of the Ninth Conference of the European Chapter of the As- sociation for Computational Linguistics (EA CL-99).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using meta-comments to generate fluent text in a technical domain",
"authors": [
{
"first": "Ingrid",
"middle": [],
"last": "Zukerman",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Intelligence: Special Issue on Natural Language Generation",
"volume": "7",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ingrid Zukerman. 1991. Using meta-comments to gener- ate fluent text in a technical domain. Computational Intelligence: Special Issue on Natural Language Gen- eration, 7(4):276.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Overview of the a~notation scheme FULL SCHEME",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Typical rhetorical pattern in a research paper introduction",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Reproducibility diagnostics: non-basic categories",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "Figure 5: Reproducibility by annotated area",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "Label distribution by annotated area",
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"num": null,
"text": "Naive Bayesian classifier",
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"num": null,
"text": "contain a citation or the name of an author contained in the reference list? Does the sentence contain a self citation? Tense (associated with first finite verb in sentence) Modal Auxiliaries Negation Action type of first verb in sentence Type of Agent Type of formulaic expression occurring in sentenceDoes the sentence contain keywords as determined by the tf/idf measure? Does the sentence contain words also occurring in the title or headlines?",
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"num": null,
"text": "Baseline 2 (random by distr.): K=0 Figure 12: Disambiguation potential of individual heuristics tures. These numbers show that some of the weaker features contribute some predictive power in combination with others.",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>AIM</td><td>Sentences best portraying the particular (main) research goal of</td></tr><tr><td/><td>the article</td></tr><tr><td>TEXTUAL</td><td>Explicit statements about the textual section structure of the</td></tr><tr><td/><td>paper</td></tr><tr><td>CONTRAST</td><td>Sentences contrasting own work to other work; sentences point-</td></tr><tr><td/><td>ing out weaknesses in other research; sentences stating that the</td></tr><tr><td/><td>research task of the current paper has never been done before;</td></tr><tr><td/><td>direct comparisons</td></tr><tr><td/><td>One reason is that it is a domain</td></tr><tr><td/><td>we are familiar with, which helps for intermediate</td></tr><tr><td/><td>evaluation of the annotation work. The other rea-</td></tr><tr><td/><td>son is that computational linguistics is also a rather</td></tr><tr><td/><td>heterogeneous domain: the papers in our collection</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Sentences describing some (generally accepted) background knowledge OTHER Sentences describing aspects of some specific other research in a neutral way (excluding contrastive or BASIS statements) OWN Sentences describing any aspect of the own work presented in this paper -except what is covered by AIM or TEXTUAL, e.g. details of solution (methodology), limitations, and further work."
},
"TABREF3": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Action Types</td><td/><td/><td colspan=\"2\">Formulaic Expression Types</td></tr><tr><td>AFFECT</td><td/><td/><td/><td colspan=\"2\">GENERAL-AGENT</td><td>linguists</td></tr><tr><td colspan=\"2\">ARGUMENTATION</td><td/><td/><td colspan=\"2\">SPECIFIC-AGENT</td><td>according to &lt; REF'~</td></tr><tr><td>AWARENESS</td><td/><td/><td/><td colspan=\"2\">GAP-INTRODUCTION</td><td>to our knowledge</td></tr><tr><td colspan=\"2\">BETTER,SOLUTION</td><td/><td/><td/><td>AIM</td><td>main contribution of this</td></tr><tr><td>CHANGE</td><td/><td/><td/><td colspan=\"2\">TEXTSTRUCTURE</td><td>in section &lt; CREF/&gt;</td></tr><tr><td colspan=\"2\">COMPARISON CONTINUATION</td><td/><td/><td/><td>DEIXIS CONTINUATION</td><td>in this paper following the argument in</td></tr><tr><td>CONTRAST</td><td/><td/><td/><td/><td>SIMILARITY</td><td>bears similarity to</td></tr><tr><td colspan=\"2\">FUTURE.INTEREST INTEREST NEED PRESENTATION</td><td/><td/><td/><td>COMPARISON CONTRAST METHOD PREVIOUS_CONTEXT</td><td>when compared to our however a novel method for XX-ing elsewhere, we have</td></tr><tr><td>PROBLEM</td><td/><td/><td/><td/><td>FUTURE</td><td>avenue for improvement</td></tr><tr><td colspan=\"2\">RESEARCH SIMILAR SOLUTION TEXTSTRUCTURE</td><td/><td/><td/><td>AFFECT PROBLEM SOLUTION POSITIVE.ADJECTIVE</td><td>hopefully drawback insight appealing</td></tr><tr><td>USE</td><td/><td/><td/><td/><td>NEGATIVE. 
ADJECTIVE</td><td>unsatisfactory</td></tr><tr><td>COPULA</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">POSSESSION</td><td/><td/><td/><td/></tr><tr><td/><td/><td colspan=\"5\">Figure 9: Types of actions and formulaic expressions</td><td>.i ~-, ..</td></tr><tr><td/><td/><td/><td/><td>MACHINE</td><td/></tr><tr><td>HUMAN</td><td>AIM CONTRAST TEXTUAL OWN BACKGROUND BASIS OTHER Total</td><td>AIM 115 11 13 75 11 10 7 242</td><td>CONTRAST 4 79 4 61 20 10 35 213</td><td>TEXTUAL I0 5 115 61 3 5 10 209</td><td colspan=\"2\">OWN BACKGROUND BASIS OTHER Total 46 15 13 4 207 280 92 40 89 596 71 5 3 12 223 7666 168 125 279 8435 286 295 21 84 720 40 4 102 55 226 1120 203 173 466 2014 9509 782 477 989 12421</td></tr></table>",
"type_str": "table",
"num": null,
"text": "we hop. _. . _~e to improve these results we argue against an application of we know of no other attempts... our system outperforms that of ... we extend < CITE/> 's algorithm we tested_ our system against... we follow X in postulating that our approach differs from X's ... we inten..d to improve our results... we are concerned with ... this approach, however, lacks... we present here a method for... thi~-~ses the problem of how to... we collected our data from... our approach resembles that of X... we solve this problem by... the paper is organized as follows... we employ X's method... our goal i...ss to... our approach has three advantages..."
}
}
}
}