{
"paper_id": "J17-3005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:45:51.510387Z"
},
"title": "Parsing Argumentation Structures in Persuasive Essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": "",
"affiliation": {
"laboratory": "Technische Universit\u00e4t Darmstadt, Ubiquitous Knowledge Processing (UKP) Lab",
"institution": "",
"location": {
"addrLine": "Hochschulstrasse 10",
"postCode": "D-64289",
"settlement": "Darmstadt",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": "",
"affiliation": {
"laboratory": "Technische Universit\u00e4t Darmstadt, Ubiquitous Knowledge Processing (UKP) Lab",
"institution": "",
"location": {
"addrLine": "Hochschulstrasse 10",
"postCode": "D-64289",
"settlement": "Darmstadt",
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this article, we present a novel approach for parsing argumentation structures. We identify argument components using sequence labeling at the token level and apply a new joint model for detecting argumentation structures. The proposed model globally optimizes argument component types and argumentative relations using Integer Linear Programming. We show that our model significantly outperforms challenging heuristic baselines on two different types of discourse. Moreover, we introduce a novel corpus of persuasive essays annotated with argumentation structures. We show that our annotation scheme and annotation guidelines successfully guide human annotators to substantial agreement.",
"pdf_parse": {
"paper_id": "J17-3005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this article, we present a novel approach for parsing argumentation structures. We identify argument components using sequence labeling at the token level and apply a new joint model for detecting argumentation structures. The proposed model globally optimizes argument component types and argumentative relations using Integer Linear Programming. We show that our model significantly outperforms challenging heuristic baselines on two different types of discourse. Moreover, we introduce a novel corpus of persuasive essays annotated with argumentation structures. We show that our annotation scheme and annotation guidelines successfully guide human annotators to substantial agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Argumentation aims at increasing or decreasing the acceptability of a controversial standpoint (van Eemeren, Grootendorst, and Snoeck Henkemans 1996, page 5) . It is a routine that is omnipresent in our daily verbal communication and thinking. Wellreasoned arguments are not only important for decision making and learning but also play a crucial role in drawing widely accepted conclusions.",
"cite_spans": [
{
"start": 95,
"end": 157,
"text": "(van Eemeren, Grootendorst, and Snoeck Henkemans 1996, page 5)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Computational argumentation is a recent research field in computational linguistics that focuses on the analysis of arguments in natural language texts. Novel methods have broad application potential in various areas such as legal decision support (Mochales-Palau and Moens 2009) , information retrieval (Carstens and Toni 2015) , policy making (Sardianos et al. 2015) , and debating technologies (Levy et al. 2014; Rinott et al. 2015) . Recently, computational argumentation has been receiving increased attention in computer-assisted writing (Song et al. 2014; Stab et al. 2014) because it allows the creation of writing support systems that provide feedback about written arguments.",
"cite_spans": [
{
"start": 248,
"end": 279,
"text": "(Mochales-Palau and Moens 2009)",
"ref_id": "BIBREF61"
},
{
"start": 304,
"end": 328,
"text": "(Carstens and Toni 2015)",
"ref_id": "BIBREF17"
},
{
"start": 345,
"end": 368,
"text": "(Sardianos et al. 2015)",
"ref_id": "BIBREF81"
},
{
"start": 397,
"end": 415,
"text": "(Levy et al. 2014;",
"ref_id": "BIBREF52"
},
{
"start": 416,
"end": 435,
"text": "Rinott et al. 2015)",
"ref_id": "BIBREF79"
},
{
"start": 544,
"end": 562,
"text": "(Song et al. 2014;",
"ref_id": "BIBREF86"
},
{
"start": 563,
"end": 580,
"text": "Stab et al. 2014)",
"ref_id": "BIBREF87"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Argumentation structures are closely related to discourse structures such as those defined by Rhetorical Structure Theory (RST) (Mann and Thompson 1987) , the Penn Discourse Treebank (PDTB) (Prasad et al. 2008) , or Segmented Discourse Representation Theory (SDRT) (Asher and Lascarides 2003) . The internal structure of an argument consists of several argument components. It includes a claim and one or more premises (Govier 2010) . The claim is a controversial statement and the central component of an argument, and premises are reasons for justifying (or refuting) the claim. Moreover, arguments have directed argumentative relations, describing the relationships one component has with another. Each such relation indicates that the source component is either a justification for or a refutation of the target component.",
"cite_spans": [
{
"start": 128,
"end": 152,
"text": "(Mann and Thompson 1987)",
"ref_id": "BIBREF56"
},
{
"start": 190,
"end": 210,
"text": "(Prasad et al. 2008)",
"ref_id": "BIBREF75"
},
{
"start": 265,
"end": 292,
"text": "(Asher and Lascarides 2003)",
"ref_id": "BIBREF4"
},
{
"start": 419,
"end": 432,
"text": "(Govier 2010)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The identification of argumentation structures involves several subtasks like separating argumentative from non-argumentative text units (Moens et al. 2007; Florou et al. 2013) , classifying argument components into claims and premises (Mochales-Palau and Moens 2011; Rooney, Wang, and Browne 2012; Stab and Gurevych 2014b) , and identifying argumentative relations (Mochales-Palau and Moens 2009; Peldszus 2014; Stab and Gurevych 2014b) . However, an approach that covers all subtasks is still missing. Furthermore, most approaches operate locally and do not optimize the global argumentation structure. Recently, Peldszus and Stede (2015) proposed an approach based on Minimum Spanning Trees, which jointly models argumentation structures. However, it links all argument components in a single tree structure. Consequently, it is not capable of splitting a text containing more than one argument. In addition to the lack of end-to-end approaches for parsing argumentation structures, there are relatively few corpora annotated with argumentation structures at the discourse-level. Apart from our previous corpus (Stab and Gurevych 2014a) , the few existing corpora lack non-argumentative text units (Peldszus 2014) , are not annotated with claims and premises (Kirschner, Eckle-Kohler, and Gurevych 2015) , or the reliability is unknown (Reed et al. 2008) .",
"cite_spans": [
{
"start": 137,
"end": 156,
"text": "(Moens et al. 2007;",
"ref_id": "BIBREF64"
},
{
"start": 157,
"end": 176,
"text": "Florou et al. 2013)",
"ref_id": "BIBREF33"
},
{
"start": 236,
"end": 267,
"text": "(Mochales-Palau and Moens 2011;",
"ref_id": "BIBREF62"
},
{
"start": 268,
"end": 298,
"text": "Rooney, Wang, and Browne 2012;",
"ref_id": "BIBREF80"
},
{
"start": 299,
"end": 323,
"text": "Stab and Gurevych 2014b)",
"ref_id": null
},
{
"start": 366,
"end": 397,
"text": "(Mochales-Palau and Moens 2009;",
"ref_id": "BIBREF61"
},
{
"start": 398,
"end": 412,
"text": "Peldszus 2014;",
"ref_id": "BIBREF69"
},
{
"start": 413,
"end": 437,
"text": "Stab and Gurevych 2014b)",
"ref_id": null
},
{
"start": 615,
"end": 640,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
},
{
"start": 1114,
"end": 1139,
"text": "(Stab and Gurevych 2014a)",
"ref_id": null
},
{
"start": 1201,
"end": 1216,
"text": "(Peldszus 2014)",
"ref_id": "BIBREF69"
},
{
"start": 1262,
"end": 1306,
"text": "(Kirschner, Eckle-Kohler, and Gurevych 2015)",
"ref_id": "BIBREF46"
},
{
"start": 1339,
"end": 1357,
"text": "(Reed et al. 2008)",
"ref_id": "BIBREF78"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Our primary motivation for this work is to create argument analysis methods for argumentative writing support systems and to achieve a better understanding of argumentation structures. Therefore, our first research question is whether human annotators can reliably identify argumentation structures in persuasive essays and whether it is possible to create annotated data of high quality. The second research question addresses the automatic recognition of argumentation structure. We investigate if, and how accurately, argumentation structures can be identified by computational techniques. The contributions of this article are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "r An annotation scheme for modeling argumentation structures derived from argumentation theory. Our annotation scheme models the argumentation structure of a document as a connected tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "r A novel corpus of 402 persuasive essays annotated with discourse-level argumentation structures. We show that human annotators can apply our annotation scheme to persuasive essays with substantial agreement. This corpus and the annotation guidelines are freely available. 1 r An end-to-end argumentation structure parser that identifies argument components at the token level and globally optimizes component types and argumentative relations.",
"cite_spans": [
{
"start": 274,
"end": 275,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The remainder of this article is structured as follows: In Section 2, we review related work in computational argumentation and discuss the difference to traditional discourse analysis. In Section 3, we derive our annotation scheme from argumentation theory. Section 4 presents the results of an annotation study and the corpus creation. In Section 5, we introduce the argumentation structure parser. We show that our model significantly outperforms challenging heuristic baselines on two different types of discourse. We discuss our results in Section 6, and provide our conclusions in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Existing work in computational argumentation addresses a variety of different tasks. These include, for example, approaches for identifying reasoning type (Feng and Hirst 2011) , argumentation style (Oraby et al. 2015) , the stance of the author (Hasan and Ng 2014; Somasundaran and Wiebe 2009) , the acceptability of arguments (Cabrio and Villata 2012) , and appropriate support types (Park and Cardie 2014) . Most relevant to our work, however, are approaches on argument mining that focus on the identification of argumentation structures in natural language texts. We categorize related approaches into the following three subtasks: r Structure identification focuses on linking arguments or argument components. Its objective is to recognize different types of argumentative relations such as support or attack relations. Moens et al. (2007) identified argumentative sentences in various types of text such as newspapers, parliamentary records, and online discussions. They experimented with various different features and achieved an accuracy of 0.738 with word pairs, text statistics, verbs, and keyword features. Florou et al. (2013) classified text segments as argumentative or non-argumentative using discourse markers and several features extracted from the tense and mood of verbs. They report an F1 score of 0.764. Levy et al. (2014) proposed a pipeline including three consecutive steps for identifying contextdependent claims in Wikipedia articles. Their first component detects topic-relevant sentences including a claim. The second component detects the boundaries of each claim. The third component ranks the identified claims for identifying the most relevant claims for the given topic. They report a mean precision of 0.09 and a mean recall of 0.73 averaged over 32 topics for retrieving 200 claims. Goudas et al. (2014) presented a two-step approach for identifying argument components and their boundaries in social media texts. 
First, they classified each sentence as argumentative or non-argumentative and achieved 0.774 accuracy. Second, they segmented each argumentative sentence using a Conditional Random Field (CRF). Their best model achieved 0.424 accuracy.",
"cite_spans": [
{
"start": 155,
"end": 176,
"text": "(Feng and Hirst 2011)",
"ref_id": "BIBREF30"
},
{
"start": 199,
"end": 218,
"text": "(Oraby et al. 2015)",
"ref_id": "BIBREF67"
},
{
"start": 246,
"end": 265,
"text": "(Hasan and Ng 2014;",
"ref_id": "BIBREF41"
},
{
"start": 266,
"end": 294,
"text": "Somasundaran and Wiebe 2009)",
"ref_id": "BIBREF86"
},
{
"start": 328,
"end": 353,
"text": "(Cabrio and Villata 2012)",
"ref_id": "BIBREF14"
},
{
"start": 386,
"end": 408,
"text": "(Park and Cardie 2014)",
"ref_id": "BIBREF68"
},
{
"start": 827,
"end": 846,
"text": "Moens et al. (2007)",
"ref_id": "BIBREF64"
},
{
"start": 1121,
"end": 1141,
"text": "Florou et al. (2013)",
"ref_id": "BIBREF33"
},
{
"start": 1328,
"end": 1346,
"text": "Levy et al. (2014)",
"ref_id": "BIBREF52"
},
{
"start": 1821,
"end": 1841,
"text": "Goudas et al. (2014)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The objective of the component classification task is to identify the type of argument components. Kwon et al. (2007) proposed two consecutive steps for identifying different types of claims in online comments. First, they classified sentences as claims and obtained an F1 score of 0.55 with a boosting algorithm. Second, they classified each claim as either support, oppose, or propose. Their best model achieved an F1 score of 0.67. Rooney, Wang, and Browne (2012) applied kernel methods for classifying text units as either claims, premises, or non-argumentative. They obtained an accuracy of 0.65. Mochales-Palau and Moens (2011) classified sentences in legal decisions as claim or premise. They achieved an F1 score of 0.741 for claims and 0.681 for premises using a Support Vector Machine (SVM) with domain-dependent key phrases, text statistics, verbs, and the tense of the sentence. In our previous work, we used a multiclass SVM for labeling text units of student essays as major claim, claim, premise, or nonargumentative (Stab and Gurevych 2014b) . We obtained an F1 score of 0.726 using structural, lexical, syntactic, indicator, and contextual features. Recently, Nguyen and Litman (2015) found that argument and domain words from unlabeled data increase F1 score to 0.76 in the same experimental setup, and Lippi and Torroni (2015) achieved an F1 score of 0.714 for identifying sentences containing a claim in student essays using partial tree kernels.",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "Kwon et al. (2007)",
"ref_id": "BIBREF50"
},
{
"start": 435,
"end": 466,
"text": "Rooney, Wang, and Browne (2012)",
"ref_id": "BIBREF80"
},
{
"start": 621,
"end": 633,
"text": "Moens (2011)",
"ref_id": "BIBREF62"
},
{
"start": 1032,
"end": 1057,
"text": "(Stab and Gurevych 2014b)",
"ref_id": null
},
{
"start": 1177,
"end": 1201,
"text": "Nguyen and Litman (2015)",
"ref_id": "BIBREF65"
},
{
"start": 1321,
"end": 1345,
"text": "Lippi and Torroni (2015)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Component Classification",
"sec_num": "2.2"
},
{
"text": "Approaches on structure identification can be divided into macro-level approaches and micro-level approaches. Macro-level approaches such as presented by Cabrio and Villata (2012) , Ghosh et al. (2014) , or Boltu\u017ei\u0107 and\u0160najder (2014) address relations between complete arguments and ignore the microstructure of arguments. More relevant to our work, however, are micro-level approaches, which focus on relations between argument components. Mochales-Palau and Moens (2009) introduced one of the first approaches for identifying the microstructure of arguments. Their approach is based on a manually created Context-Free Grammar and recognizes argument structures as trees. However, it is tailored to legal argumentation and does not recognize implicit argumentative relations (i.e., relations that are not indicated by discourse markers). In previous work, we considered the identification of argument structures as a binary classification task of ordered argument component pairs (Stab and Gurevych 2014b) . We classified each pair as support or not-linked using an SVM with structural, lexical, syntactic, and indicator features. Our best model achieved an F1 score of 0.722. However, the approach recognizes argumentative relations locally and does not consider contextual information. Peldszus (2014) modeled the targets of argumentative relations along with additional information in a single tagset. His tagset includes, for instance, several labels denoting whether an argument component at position n is argumentatively related to preceding argument components n \u2212 1, n \u2212 2, and so forth, or following argument components n + 1, n + 2, and so on. Although his approach achieved a promising accuracy of 0.48, it is only applicable to short texts. Peldszus and Stede (2015) presented the first approach that globally optimizes argumentative relations. 
They jointly modeled several aspects of argumentation structures using a Minimum Spanning Tree model and achieved an F1 score of 0.720. They found that the function (support or attack) and the role (opponent and proponent) of argument components are the most useful dimensions for improving the identification of argumentative relations. However, the texts in their corpus were created artificially using a guideline that promotes having one opposing argument component in each text (cf. Section 2.4). Therefore, it is unclear whether the results can be reproduced with real data, which may exhibit arguments with fewer opposing argument components (Wolfe and Britt 2009) . Moreover, their approach links all argument components in a single tree structure. Thus, it is not capable of separating several arguments and recognizing unlinked components.",
"cite_spans": [
{
"start": 154,
"end": 179,
"text": "Cabrio and Villata (2012)",
"ref_id": "BIBREF14"
},
{
"start": 182,
"end": 201,
"text": "Ghosh et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 460,
"end": 472,
"text": "Moens (2009)",
"ref_id": "BIBREF61"
},
{
"start": 981,
"end": 1006,
"text": "(Stab and Gurevych 2014b)",
"ref_id": null
},
{
"start": 1754,
"end": 1779,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
},
{
"start": 2507,
"end": 2529,
"text": "(Wolfe and Britt 2009)",
"ref_id": "BIBREF95"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structure Identification",
"sec_num": "2.3"
},
{
"text": "Existing corpora in computational argumentation cover numerous aspects of argumentation analysis. There are, for instance, corpora that address argumentation strength (Persing and Ng 2015) , factual knowledge (Beigman Klebanov and Higgins 2012), various properties of arguments (Walker et al. 2012) , argumentative relations between complete arguments at the macro-level (Cabrio and Villata 2014; Boltu\u017ei\u0107 and\u0160najder 2014) , different types of argument components (Mochales-Palau and Ieven 2009; Kwon et al. 2007; Habernal and Gurevych 2017) , and argumentation structures over several documents (Aharoni et al. 2014) . However, corpora annotated with argumentation structures at the level of discourse are still rare.",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "(Persing and Ng 2015)",
"ref_id": "BIBREF72"
},
{
"start": 278,
"end": 298,
"text": "(Walker et al. 2012)",
"ref_id": "BIBREF92"
},
{
"start": 371,
"end": 396,
"text": "(Cabrio and Villata 2014;",
"ref_id": "BIBREF15"
},
{
"start": 397,
"end": 422,
"text": "Boltu\u017ei\u0107 and\u0160najder 2014)",
"ref_id": null
},
{
"start": 464,
"end": 495,
"text": "(Mochales-Palau and Ieven 2009;",
"ref_id": "BIBREF60"
},
{
"start": 496,
"end": 513,
"text": "Kwon et al. 2007;",
"ref_id": "BIBREF50"
},
{
"start": 514,
"end": 541,
"text": "Habernal and Gurevych 2017)",
"ref_id": "BIBREF39"
},
{
"start": 596,
"end": 617,
"text": "(Aharoni et al. 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Corpora Annotated with Argumentation Structures",
"sec_num": "2.4"
},
{
"text": "One prominent resource is AraucariaDB (Reed et al. 2008) . It includes heterogenous text types such as newspaper editorials, parliamentary records, judicial summaries, and online discussions. It also includes annotations describing the type of reasoning according to Walton's argumentation schemes (Walton, Reed, and Macagno 2008) and implicit argument components that were added by the annotators during the analysis. However, the reliability of the annotations is unknown. Furthermore, recent releases of AraucariaDB are not appropriate for training end-to-end argumentation structure parsers because they do not include non-argumentative text units. Kirschner, Eckle-Kohler, and Gurevych (2015) annotated argumentation structures in Introduction and Discussion sections of 24 German scientific articles. Their annotation scheme includes four argumentative relations (support, attack, detail, and sequence). However, the corpus does not contain annotations for argument component types.",
"cite_spans": [
{
"start": 38,
"end": 56,
"text": "(Reed et al. 2008)",
"ref_id": "BIBREF78"
},
{
"start": 298,
"end": 330,
"text": "(Walton, Reed, and Macagno 2008)",
"ref_id": "BIBREF93"
},
{
"start": 653,
"end": 697,
"text": "Kirschner, Eckle-Kohler, and Gurevych (2015)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Corpora Annotated with Argumentation Structures",
"sec_num": "2.4"
},
{
"text": "Peldszus and Stede (2015) created a small corpus of 112 German microtexts with controlled linguistic and rhetoric complexity. Each document contains a single argument and does not include more than five argument components. Their annotation scheme models supporting and attacking relations as well as additional information like proponent and opponent. They obtained an inter-annotator agreement (IAA) of \u03ba = 0.83 2 with three expert annotators. Recently, they translated the corpus to English, resulting in the first parallel corpus for computational argumentation. However, the corpus does not include non-argumentative text units. Therefore, the corpus is only of limited use for training end-to-end argumentation structure parsers. Because of the writing guidelines used (Peldszus and Stede 2013, page 197) , it also exhibits an unusually high proportion of attack relations. In particular, 97 of the 112 arguments (86.6%) include at least one attack relation. This proportion is rather unnatural, since authors tend to support their standpoint instead of considering opposing views (Wolfe and Britt 2009) .",
"cite_spans": [
{
"start": 775,
"end": 810,
"text": "(Peldszus and Stede 2013, page 197)",
"ref_id": null
},
{
"start": 1087,
"end": 1109,
"text": "(Wolfe and Britt 2009)",
"ref_id": "BIBREF95"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Corpora Annotated with Argumentation Structures",
"sec_num": "2.4"
},
{
"text": "Existing corpora annotated with argumentation structures at the discourse-level (#Doc = number of documents; #Comp = number of argument components; NoArg = presence of nonargumentative text units).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 1",
"sec_num": null
},
{
"text": "Genre #Doc #Comp NoArg Granularity IAA (Reed et al. 2008) various \u223c700 \u223c2,000 yes clause unknown (Stab and Gurevych 2014a) student essays 90 1,552 yes clause \u03b1 U = 0.72 (Peldszus and Stede 2015) microtexts 112 576 no clause \u03ba = 0.83 (Kirschner et al. 2015) scientific articles 24 \u223c2,700 yes sentence \u03ba = 0.43",
"cite_spans": [
{
"start": 39,
"end": 57,
"text": "(Reed et al. 2008)",
"ref_id": "BIBREF78"
},
{
"start": 169,
"end": 194,
"text": "(Peldszus and Stede 2015)",
"ref_id": "BIBREF71"
},
{
"start": 233,
"end": 256,
"text": "(Kirschner et al. 2015)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "In previous work, we created a corpus of 90 persuasive essays, which we selected randomly from essayforum.com (Stab and Gurevych 2014a) . We annotated the corpus in two consecutive steps: First, we identified argument components at the clause level and obtained an agreement of \u03b1 U = 0.72 between three annotators. Second, we annotated argumentative support and attack relations between argument components and achieved an agreement of \u03ba = 0.8. Because the corpus also includes non-argumentative text units, it allows for training end-to-end argumentation structure parsers that separate argumentative from non-argumentative text units. Apart from this corpus, we are only aware of one additional study on argumentation structures in persuasive essays. Botley (2014) analyzed 10 essays using argument diagramming for studying differences in argumentation strategies. Unfortunately, the corpus is too small for computational purposes and the reliability of the annotations is unknown. Table 1 provides an overview of existing corpora annotated with argumentation structures at the discourse-level.",
"cite_spans": [
{
"start": 110,
"end": 135,
"text": "(Stab and Gurevych 2014a)",
"ref_id": null
},
{
"start": 753,
"end": 766,
"text": "Botley (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 984,
"end": 991,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "The identification of argumentation structures is closely related to discourse analysis. Similar to the identification of argumentation structures, discourse analysis aims at identifying elementary discourse units and discourse relations between them. Existing approaches on discourse analysis mainly differ in the discourse theory utilized. RST (Mann and Thompson 1987) , for instance, models discourse structures as trees by iteratively linking adjacent discourse units (Feng and Hirst 2014; Hernault et al. 2010) whereas approaches based on PDTB (Prasad et al. 2008 ) identify more shallow structures by linking two adjacent sentences or clauses (Lin, Ng, and Kan 2014) . RST and PDTB are limited to discourse relations between adjacent discourse units, but SDRT (Asher and Lascarides 2003) also allows long distance relations (Afantenos and Asher 2014; Afantenos et al. 2015) . However, similar to argumentation structure parsing, the main challenge of discourse analysis is to identify implicit discourse relations (Braud and Denis 2014, page 1694) . Marcu and Echihabi (2002) proposed one of the first approaches for identifying implicit discourse relations. In order to collect large amounts of training data, they exploited several discourse markers like \"because\" or \"but\". After removing the discourse markers, they found that word pair features are useful for identifying implicit discourse relations. Pitler, Louis, and Nenkova (2009) proposed an approach for identifying four implicit types of discourse relations in the PDTB and achieved F1 scores between 0.22 and 0.76. They found that using features tailored to each individual relation leads to the best results. Lin, Kan, and Ng (2009) showed that production rules collected from parse trees yield good results and Louis et al. (2010) found that features based on named entities do not perform as well as lexical features.",
"cite_spans": [
{
"start": 346,
"end": 370,
"text": "(Mann and Thompson 1987)",
"ref_id": "BIBREF56"
},
{
"start": 472,
"end": 493,
"text": "(Feng and Hirst 2014;",
"ref_id": "BIBREF31"
},
{
"start": 494,
"end": 515,
"text": "Hernault et al. 2010)",
"ref_id": "BIBREF42"
},
{
"start": 549,
"end": 568,
"text": "(Prasad et al. 2008",
"ref_id": "BIBREF75"
},
{
"start": 649,
"end": 672,
"text": "(Lin, Ng, and Kan 2014)",
"ref_id": "BIBREF53"
},
{
"start": 766,
"end": 793,
"text": "(Asher and Lascarides 2003)",
"ref_id": "BIBREF4"
},
{
"start": 830,
"end": 856,
"text": "(Afantenos and Asher 2014;",
"ref_id": "BIBREF0"
},
{
"start": 857,
"end": 879,
"text": "Afantenos et al. 2015)",
"ref_id": "BIBREF1"
},
{
"start": 1020,
"end": 1030,
"text": "(Braud and",
"ref_id": "BIBREF11"
},
{
"start": 1031,
"end": 1053,
"text": "Denis 2014, page 1694)",
"ref_id": null
},
{
"start": 1056,
"end": 1081,
"text": "Marcu and Echihabi (2002)",
"ref_id": "BIBREF57"
},
{
"start": 1413,
"end": 1446,
"text": "Pitler, Louis, and Nenkova (2009)",
"ref_id": "BIBREF74"
},
{
"start": 1680,
"end": 1703,
"text": "Lin, Kan, and Ng (2009)",
"ref_id": "BIBREF53"
},
{
"start": 1783,
"end": 1802,
"text": "Louis et al. (2010)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Analysis",
"sec_num": "2.5"
},
{
"text": "Approaches to discourse analysis usually aim at identifying various different types of discourse relations. However, only a subset of these relations is relevant for argumentation structure parsing. For example, Peldszus and Stede (2013) proposed support, attack, and counter-attack relations for modeling argumentation structures, whereas our work focuses on support and attack relations. This difference is also illustrated by the work of Biran and Rambow (2011) . They selected a subset of 12 relations from the RST Discourse Treebank (Carlson, Marcu, and Okurowski 2001) and argue that only a subset of RST relations is relevant for identifying justifications.",
"cite_spans": [
{
"start": 212,
"end": 237,
"text": "Peldszus and Stede (2013)",
"ref_id": "BIBREF70"
},
{
"start": 441,
"end": 464,
"text": "Biran and Rambow (2011)",
"ref_id": "BIBREF7"
},
{
"start": 538,
"end": 574,
"text": "(Carlson, Marcu, and Okurowski 2001)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Analysis",
"sec_num": "2.5"
},
{
"text": "The study of argumentation is a comprehensive and interdisciplinary research field. It involves philosophy, communication science, logic, linguistics, psychology, and computer science. The first approaches to studying argumentation date back to the ancient Greek sophists and evolved in the 6th and 5th centuries BCE (van Eemeren, Grootendorst, and Snoeck Henkemans 1996) . In particular, the influential works of Aristotle on traditional logic, rhetoric, and dialectics set an important milestone and are a cornerstone of modern argumentation theory. Because of the diversity of the field, there are numerous proposals for modeling argumentation. Bentahar, Moulin, and B\u00e9langer (2010) categorize argumentation models into three types: (1) monological models, (2) dialogical models, and (3) rhetorical models. Monological models address the internal microstructure of arguments. They focus on the function of argument components, the links between them, and the reasoning type. Most monological models stem from the field of informal logic and focus on arguments as product (O'Keefe 1977; Johnson 2000) . On the other hand, dialogical models focus on the process of argumentation and ignore the microstructure of arguments. They model the external macrostructure and address relations between arguments from several interlocutors. Finally, rhetorical models consider neither the micro-nor the macrostructure but rather the way arguments are used as a means of persuasion. They consider the audience's perception and aim at studying rhetorical schemes that are successful in practice. In this article, we focus on the monological perspective, which is well-suited for developing computational methods (Peldszus and Stede 2013) .",
"cite_spans": [
{
"start": 317,
"end": 371,
"text": "(van Eemeren, Grootendorst, and Snoeck Henkemans 1996)",
"ref_id": null
},
{
"start": 1074,
"end": 1088,
"text": "(O'Keefe 1977;",
"ref_id": "BIBREF66"
},
{
"start": 1089,
"end": 1102,
"text": "Johnson 2000)",
"ref_id": null
},
{
"start": 1700,
"end": 1725,
"text": "(Peldszus and Stede 2013)",
"ref_id": "BIBREF70"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation: Theoretical Background",
"sec_num": "3."
},
{
"text": "The laying out of argument structure is a widely used method in informal logic (Copi and Cohen 1990; Govier 2010) . This technique, referred to as argument diagramming, aims at transferring natural language arguments into a structured representation for evaluating them in subsequent analysis steps (Henkemans 2000, page 447). Although argumentation theorists consider argument diagramming a manual activity, the diagramming conventions also serve as a good foundation for developing novel argument mining models (Peldszus and Stede 2013) . An argument diagram is a node-link diagram whereby each node represents an argument component (i.e., a statement represented in natural language) and each link represents a directed argumentative relation indicating that the source component is a justification (or refutation) of the target component. Figure 1 shows some common argument structures. A basic argument includes a claim supported by a single premise. It can be considered the minimal form that an argument can take. A convergent argument comprises two premises that support the (Beardsley 1950) . Complementarily, Thomas (1973) defined linked arguments (Figure 1e ). Like convergent arguments, a linked argument includes two premises. However, neither of the two premises independently supports the claim. The premises are only relevant to the claim in conjunction. More complex arguments can combine any of the elementary structures illustrated in Figure 1 .",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "(Copi and Cohen 1990;",
"ref_id": "BIBREF26"
},
{
"start": 101,
"end": 113,
"text": "Govier 2010)",
"ref_id": "BIBREF38"
},
{
"start": 513,
"end": 538,
"text": "(Peldszus and Stede 2013)",
"ref_id": "BIBREF70"
},
{
"start": 1083,
"end": 1099,
"text": "(Beardsley 1950)",
"ref_id": "BIBREF5"
},
{
"start": 1119,
"end": 1132,
"text": "Thomas (1973)",
"ref_id": "BIBREF89"
}
],
"ref_spans": [
{
"start": 843,
"end": 851,
"text": "Figure 1",
"ref_id": "FIGREF2"
},
{
"start": 1158,
"end": 1168,
"text": "(Figure 1e",
"ref_id": "FIGREF2"
},
{
"start": 1454,
"end": 1462,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Argument Diagramming",
"sec_num": "3.1"
},
{
"text": "On closer inspection, however, there are several ambiguities when applying argument diagramming to real texts: First, the distinction between convergent and linked structures is often ambiguous in real argumentation structures (Henkemans 2000; Freeman 2011). Second, it is unclear if the argumentation structure is a graph or a tree. Third, the argumentative type of argument components is ambiguous in serial structures. We discuss each of these questions in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Diagramming",
"sec_num": "3.1"
},
{
"text": "3.1.1 Distinguishing between Linked and Convergent Arguments. The question of whether an argumentation model needs to distinguish between linked and convergent arguments is still debated in argumentation theory (Conway 1991; Yanal 1991; van Eemeren, Grootendorst, and Snoeck Henkemans 1996; Freeman 2011) . From a perspective based on traditional logic, linked arguments indicate deductive reasoning and convergent arguments represent inductive reasoning (Henkemans 2000, page 453). However, Freeman (2011, page 91ff.) showed that the traditional definition of linked arguments is frequently ambiguous in everyday discourse. Yanal (1991) argues that the distinction is equivalent to separating several arguments and Conway (1991) argues that linked structures can simply be omitted for modeling single arguments. From a computational perspective, the identification of linked arguments is equivalent to finding groups of premises or classifying the reasoning type of an argument as either deductive or inductive. Accordingly, it is not necessary to distinguish linked and convergent arguments during the identification of argumentation structures since this task can be solved in subsequent analysis steps.",
"cite_spans": [
{
"start": 211,
"end": 224,
"text": "(Conway 1991;",
"ref_id": "BIBREF25"
},
{
"start": 225,
"end": 236,
"text": "Yanal 1991;",
"ref_id": "BIBREF96"
},
{
"start": 237,
"end": 290,
"text": "van Eemeren, Grootendorst, and Snoeck Henkemans 1996;",
"ref_id": null
},
{
"start": 291,
"end": 304,
"text": "Freeman 2011)",
"ref_id": "BIBREF35"
},
{
"start": 492,
"end": 518,
"text": "Freeman (2011, page 91ff.)",
"ref_id": null
},
{
"start": 625,
"end": 637,
"text": "Yanal (1991)",
"ref_id": "BIBREF96"
},
{
"start": 716,
"end": 729,
"text": "Conway (1991)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Diagramming",
"sec_num": "3.1"
},
{
"text": "Trees. Defining argumentation structures as trees implies the exclusion of divergent arguments, to allow only one target for each premise and to neglect cycles. From a theoretical perspective, divergent structures are equivalent to several arguments (one for each claim) (Freeman 2011, page 16) . As a result of this treatment, a great many of theoretical textbooks neglect divergent structures (Henkemans 2000; Reed and Rowe 2004) and also most computational approaches consider arguments as trees (Cohen 1987; Mochales-Palau and Moens 2009; Peldszus 2014) . However, there is little empirical evidence regarding the structure of arguments. We are only aware of one study, which showed that 5.26% of the arguments in political speeches (which can be assumed to exhibit complex argumentation structures) are divergent.",
"cite_spans": [
{
"start": 271,
"end": 294,
"text": "(Freeman 2011, page 16)",
"ref_id": null
},
{
"start": 412,
"end": 431,
"text": "Reed and Rowe 2004)",
"ref_id": "BIBREF78"
},
{
"start": 499,
"end": 511,
"text": "(Cohen 1987;",
"ref_id": "BIBREF22"
},
{
"start": 512,
"end": 542,
"text": "Mochales-Palau and Moens 2009;",
"ref_id": "BIBREF61"
},
{
"start": 543,
"end": 557,
"text": "Peldszus 2014)",
"ref_id": "BIBREF69"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures as",
"sec_num": "3.1.2"
},
{
"text": "Essay writing usually follows a claim-oriented procedure (Kemper and Sebranek 2004; Shiach 2009; Whitaker 2009; Perutz 2010) . Starting with the formulation of the standpoint on the topic, authors collect claims in support (or opposition) of their view. Subsequently, they collect premises that support or attack their claims. The following example illustrates this procedure. A major claim on abortion, for instance, is \"abortion should be illegal\"; a supporting claim could be \"abortion is ethically wrong\" and the associated premises \"unborn babies are considered human beings\" and \"killing human beings is wrong\". Because of this common writing procedure, divergent and circular structures are rather unlikely in persuasive essays. Therefore, we assume that modeling the argumentation structure of essays as a tree is a reasonable decision.",
"cite_spans": [
{
"start": 57,
"end": 83,
"text": "(Kemper and Sebranek 2004;",
"ref_id": "BIBREF45"
},
{
"start": 84,
"end": 96,
"text": "Shiach 2009;",
"ref_id": "BIBREF82"
},
{
"start": 97,
"end": 111,
"text": "Whitaker 2009;",
"ref_id": "BIBREF94"
},
{
"start": 112,
"end": 124,
"text": "Perutz 2010)",
"ref_id": "BIBREF73"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures as",
"sec_num": "3.1.2"
},
{
"text": "Types. Assigning argumentative types to the components of an argument is unambiguous if the argumentation structure is shallow. It is, for instance, obvious that an argument component c 1 is a premise and argument component c 2 is a claim, if c 1 supports c 2 in a basic argument (cf. Figure 1 ). However, if the tree structure is deeper (i.e., exhibits serial structures), assigning argumentative types becomes ambiguous. Essentially, there are three different approaches for assigning argumentative types to argument components. First, according to Beardsley (1950) a serial argument includes one argument component which is both a claim and a premise. Therefore, the inner argument component bears two different argumentative types (multi-label approach). Second, Govier (2010, page 24) distinguishes between \"main claim\" and \"subclaim\". Similarly, Damer (2009, page 17) distinguishes between \"premise\" and \"subpremise\" for labeling argument components in serial structures. Both approaches define specific labels for each level in the argumentation structure (level approach). Third, Cohen (1987) considers only the root node of an argumentation tree as a claim and the following nodes in the structure as premises (one-claim approach). In order to define an argumentation model for persuasive essays, we propose a hybrid approach that combines the level approach and the one-claim approach.",
"cite_spans": [
{
"start": 551,
"end": 567,
"text": "Beardsley (1950)",
"ref_id": "BIBREF5"
},
{
"start": 767,
"end": 789,
"text": "Govier (2010, page 24)",
"ref_id": null
},
{
"start": 852,
"end": 873,
"text": "Damer (2009, page 17)",
"ref_id": null
},
{
"start": 1088,
"end": 1100,
"text": "Cohen (1987)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 285,
"end": 293,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Argumentation Structures and Argument Component",
"sec_num": "3.1.3"
},
{
"text": "We model the argumentation structure of persuasive essays as a connected tree structure. We use a level approach for modeling the first level of the tree and a one-claim approach for representing the structure of each individual argument. Accordingly, we model the first level of the tree with two different argument component types and the structure of individual arguments with argumentative relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "The major claim is the root node of the argumentation structure and represents the author's standpoint on the topic. It is an opinionated statement that is usually stated in the introduction and restated in the conclusion of the essay. The individual body paragraphs of an essay include the actual arguments. They either support or attack the author's standpoint expressed in the major claim. Each argument consists of a claim and at least one premise. In order to differentiate between supporting and attacking arguments, each claim has a stance attribute that can take the values \"for\" or \"against\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "We model the structure of each argument with a one-claim approach. The claim constitutes the central component of each argument. The premises are the reasons of the argument. The actual structure of an argument comprises directed argumentative support and attack relations, which link a premise either to a claim or to another premise (serial arguments). Each premise p has one outgoing relation (i.e., there is a relation that has p as source component) and none or several incoming relations (i.e., there can be a relation with p as target component). A claim can exhibit several incoming relations but no outgoing relation. The ambiguous function of inner premises in serial arguments is implicitly modeled by the structure of the argument. The inner premise exhibits one outgoing relation and at least one incoming relation. Finally, the stance of each premise is indicated by the type of its outgoing relation (support or attack).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "The following example illustrates the argumentation structure of a persuasive essay. 3 The introduction of an essay describes the controversial topic and usually includes the major claim:",
"cite_spans": [
{
"start": 85,
"end": 86,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "Ever since researchers at the Roslin Institute in Edinburgh cloned an adult sheep, there has been an ongoing debate about whether cloning technology is morally and ethically right or not. Some people argue for and others against and there is still no agreement whether cloning technology should be permitted. However, as far as I'm concerned, [cloning is an important technology for humankind] MajorClaim1 since [it would be very useful for developing novel cures] Claim1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "The first two sentences introduce the topic and do not include argumentative content. The third sentence contains the major claim (boldfaced) and a claim that supports the major claim (underlined). The following body paragraphs of the essay include arguments that either support or attack the major claim. For example, the following body paragraph includes one argument that supports the positive standpoint of the author on cloning:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "First, [cloning will be beneficial for many people who are in need of organ transplants] Claim2 . The first sentence contains the claim of the argument, which is supported by five premises in the following three sentences (wavy underlined). The second sentence includes two premises, of which Premise 1 supports Claim 2 and Premise 2 supports Premise 1 . Premise 3 in the third sentence supports Claim 2 . The fourth sentence includes Premise 4 and Premise 5 . Both support Premise 3 . The next paragraph illustrates a body paragraph with two arguments: The initial sentence includes the first argument, which consists of Premise 6 and Claim 3 . The following three sentences include the second argument. Premise 7 and Premise 8 both support Claim 4 in the last sentence. Both arguments cover different aspects (development in science and cloning humans), which both support the author's standpoint on cloning. This example illustrates that knowing argumentative relations is important for separating several arguments in a paragraph. The example also shows that argument components frequently exhibit preceding text units that are not relevant to the argument but helpful for recognizing the argument component type. For example, preceding discourse connectors like \"therefore\", \"consequently\", or \"thus\" can signal a subsequent claim. Discourse markers like \"because\", \"since\", or \"furthermore\" could indicate a premise. Formally, these preceding tokens of an argument component starting at token t i are defined as the tokens t i\u2212m , ..., t i\u22121 that are not covered by another argument component in the sentence The paragraph begins with Claim 5 , which attacks the stance of the author. It is supported by Premise 9 in the second sentence. The third sentence includes two premises, both of which defend the stance of the author. Premise 11 is an attack of Claim 5 , and Premise 10 supports Premise 11 . 
The last paragraph (conclusion) restates the major claim and summarizes the main aspects of the essay:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "s = t 1 , t 2 , ..., t n where 1 \u2264 i \u2264 n and i \u2212 m \u2265 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "To sum up, although [permitting cloning might bear some risks like misuse for military purposes] Claim6 , I strongly believe that [this technology is beneficial to humanity] MajorClaim2 . It is likely that [this technology bears some important cures which will significantly improve life conditions] Claim7 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "The conclusion of the essay starts with an attacking claim followed by the restatement of the major claim. The last sentence includes another claim that summarizes the most important points of the author's argumentation. Figure 2 shows the entire argumentation structure of the example essay.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 229,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argumentation Structures in Persuasive Essays",
"sec_num": "3.2"
},
{
"text": "Argumentation structure of the example essay. Arrows indicate argumentative relations. Arrowheads denote argumentative support relations and circleheads attack relations. Dashed lines indicate relations that are encoded in the stance attributes of claims. \"P\" denotes premises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "The motivation for creating a new corpus is threefold: First, our previous corpus is relatively small. We believe that more data will improve the accuracy of our computational models. Second, we wanted to ensure the reproducibility of the annotation study and validate our previous results. Third, we improved our annotation guidelines. We added more precise rules for segmenting argument components and a detailed description of common essay structures. We expect that our novel annotation guidelines will guide annotators towards adequate agreement without collaborative training sessions. Our annotation guidelines comprise 31 pages and include the following three steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4."
},
{
"text": "1. Topic and stance identification: We found in our previous annotation study that knowing the topic and stance of an essay improves inter-annotator agreement (Stab and Gurevych 2014a) . For this reason, we ask the annotators to read the entire essay before starting with the annotation task.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Stab and Gurevych 2014a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4."
},
{
"text": "Annotation of argument components: Annotators mark major claims, claims, and premises. They annotate the boundaries of argument components and determine the stance attribute of claims.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Linking premises with argumentative relations: The annotators identify the structure of arguments by linking each premise to a claim or another premise with argumentative support or attack relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "Three non-native speakers participated in our annotation study. One of the three annotators had participated in our previous study (expert annotator). 4 The two other annotators learned the task by independently reading the annotation guidelines. We used the brat rapid annotation tool (Stenetorp et al. 2012) . It provides a graphical web interface for marking text units and linking them.",
"cite_spans": [
{
"start": 151,
"end": 152,
"text": "4",
"ref_id": null
},
{
"start": 286,
"end": 309,
"text": "(Stenetorp et al. 2012)",
"ref_id": "BIBREF88"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "We randomly selected 402 English essays with a description of the writing prompt from essayforum.com. This online forum is an active community that provides correction and feedback about different texts such as research papers, essays, or poetry. For example, students post their essays in order to receive feedback about their writing skills while preparing for standardized language tests. The corpus includes 7,116 sentences with 147,271 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "All three annotators independently annotated a random subset of 80 essays. The remaining 322 essays were annotated by the expert annotator. We evaluate the interannotator agreement of the argument component annotations using two different strategies: First, we evaluate if the annotators agree on the presence of argument components in sentences using observed agreement and Fleiss' \u03ba (Fleiss 1971) . We consider each sentence as a markable and evaluate the presence of each argument component type t \u2208 {MajorClaim, Claim, Premise} in a sentence individually. Accordingly, the number of markables for each argument component type t corresponds to the number of sentences N = 1,441, the number of annotations per markable equals the number of annotators (n = 3), and the number of categories is k = 2 (t or not t). Evaluating the agreement at the sentence level is an approximation of the actual agreement since the boundaries of argument components can differ from sentence boundaries and a sentence can include several argument components. 5 Therefore, for the second evaluation strategy, we use Krippendorff's \u03b1 U (Krippendorff 2004) . In contrast to common alpha coefficients, this coefficient allows us to evaluate the agreement of unitizing tasks by comparing the boundaries of the annotation units. We use the squared difference \u03b4 2 between any two annotators' sections as proposed by Krippendorff (2004, page 9) and consider each essay as a single continuum at the token level. Accordingly, the length L of each continuum is the number of tokens in an essay. The number of annotators m that unitize the continuum is 3. We report the average \u03b1 U scores over 80 essays. For determining the inter-annotator agreement, we use DKPro Agreement, whose implementations of interannotator agreement measures are well-tested with various examples from the literature (Meyer et al. 2014) . Table 2 shows the inter-annotator agreement of each argument component type. 
The agreement is best for major claims. The IAA score of 97.9% and \u03ba = 0.877 indicate that annotators are able to reliably identify major claims in persuasive essays. In addition, the unitized alpha measure of \u03b1 U = 0.810 shows that there are only few disagreements about the boundaries of major claims. The results also indicate good agreement for premises (\u03ba = 0.833 and \u03b1 U = 0.824). We obtain the lowest agreement of \u03ba = 0.635 for claims, which shows that the identification of claims is more complex than identifying major claims and premises. The joint unitized measure for all argument components is \u03b1 U = 0.767, and thus the agreement improved by 0.043 compared with our previous study (Stab and Gurevych 2014b) . Therefore, we tentatively conclude that overall, human annotators agree on the argument components in persuasive essays.",
"cite_spans": [
{
"start": 385,
"end": 398,
"text": "(Fleiss 1971)",
"ref_id": "BIBREF32"
},
{
"start": 1116,
"end": 1135,
"text": "(Krippendorff 2004)",
"ref_id": "BIBREF48"
},
{
"start": 1391,
"end": 1418,
"text": "Krippendorff (2004, page 9)",
"ref_id": null
},
{
"start": 1863,
"end": 1882,
"text": "(Meyer et al. 2014)",
"ref_id": "BIBREF58"
},
{
"start": 2656,
"end": 2681,
"text": "(Stab and Gurevych 2014b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1885,
"end": 1892,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "4.2"
},
{
"text": "For determining the agreement of the stance attribute, we follow the same methodology as for the sentence-level agreement described above, but we consider each sentence containing a claim as \"for\" or \"against\" according to its stance attribute, and all sentences without a claim as \"none\" (N = 1,441; n = 3; k = 3). Consequently, the agreement of claims constitutes the upper bound for the stance attribute. We obtain an agreement of 88.5% and \u03ba = 0.623, which is slightly below the agreement scores Table 2 ). Therefore, human annotators can reliably differentiate between supporting and attacking claims. We determined the markables for evaluating the agreement of argumentative relations by pairing all argument components in the same paragraph. For each paragraph with argument components c 1 , ..., c n , we consider each pair p = (c i , c j ) with 1 \u2264 i, j \u2264 n and i = j as markable. Thus, the set of all markables corresponds to all argument component pairs that can be annotated according to our guidelines. The number of argument component pairs is N = 4,922, the number of ratings per markable is n = 3, and the number of categories k = 2. Table 3 shows the inter-annotator agreement of argumentative relations. We obtain kappa scores above 0.7 for both argumentative support and attack relations, which allows tentative conclusions (Krippendorff 2004) . On average, the annotators marked only 0.9% of the 4,922 pairs as argumentative attack relations and 18.4% as argumentative support relations. Although the agreement is usually much lower if a category is rare (Artstein and Poesio 2008, page 573) , the annotators agree more on argumentative attack relations. This indicates that the identification of argumentative attack relations is a simpler task than the identification of argumentative support relations. The agreement scores for argumentative relations are approximately 0.10 lower compared with our previous study. 
This difference can be attributed to the fact that we did not explicitly annotate relations between claims and major claims, which are easy to annotate because claims are always linked to major claims (cf. Section 3.2).",
"cite_spans": [
{
"start": 1343,
"end": 1362,
"text": "(Krippendorff 2004)",
"ref_id": "BIBREF48"
},
{
"start": 1575,
"end": 1611,
"text": "(Artstein and Poesio 2008, page 573)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 500,
"end": 507,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 1150,
"end": 1157,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "4.2"
},
{
"text": "For analyzing the disagreements between the annotators, we determined Confusion Probability Matrices (CPMs) (Cinkov\u00e1, Holub, and Kr\u00ed\u017e 2012) . Compared with traditional confusion matrices, a CPM also allows us to analyze confusion if more than two annotators are involved in an annotation study. A CPM includes conditional probabilities that an annotator assigns a category in the column given that another annotator selected the category in the row. Table 4 shows the CPM of argument component annotations. It shows that the highest confusion is between claims and premises. We observed that one annotator frequently did not split sentences including a claim. For instance, the annotator labeled the entire sentence as a claim although it includes an additional premise. This type of error also explains the lower unitized alpha score compared with the sentence-level agreements in Table 2 . Furthermore, we found that concessions before claims were frequently not annotated as an attacking premise. For example, annotators often did not split sentences similarly to the following example: The distinction between major claims and claims exhibits less confusion. This may be because major claims are relatively easy to locate in essays since they occur usually in introductions or conclusions, whereas claims can occur anywhere in the essay. Table 5 shows the CPM of argumentative relations. There is little confusion between argumentative support and attack relations. The CPM also shows that the highest confusion is between argumentative relations (support and attack) and unlinked pairs. This can be attributed to the identification of the correct targets of premises. In particular, we observed that agreement on the targets decreases if a paragraph includes several claims or serial argument structures.",
"cite_spans": [
{
"start": 108,
"end": 139,
"text": "(Cinkov\u00e1, Holub, and Kr\u00ed\u017e 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 450,
"end": 457,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 882,
"end": 889,
"text": "Table 2",
"ref_id": "TABREF0"
},
{
"start": 1342,
"end": 1349,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis of Human Disagreement",
"sec_num": "4.3"
},
{
"text": "We created a partial gold standard of the essays annotated by all annotators. We use this partial gold standard of 80 essays as our test data (20%) and the remaining 322 essays annotated by the expert annotator as our training data (80%). The creation of our gold standard test data consists of the following two steps: First, we merge the annotation of all argument components. Thus, each annotator annotates argumentative relations based on the same argument components. Second, we merge the argumentative relations to compile our final gold standard test data. Because the argument component types are strongly related-the selection of the premises, for instance, depends on the selected claim(s) in a paragraph-we did not merge the annotations using majority voting as in our previous study. Instead, we discussed the disagreements in several meetings with all annotators for resolving the disagreements. Table 6 gives an overview of the size of the corpus. It contains 6,089 argument components, 751 major claims, 1,506 claims, and 3,832 premises. Such a large proportion of claims compared with premises is common in argumentative texts because writers tend to provide several reasons for ensuring a robust standpoint (Mochales-Palau and Moens 2011).",
"cite_spans": [],
"ref_spans": [
{
"start": 909,
"end": 916,
"text": "Table 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Creation of the Final Corpus",
"sec_num": "4.4"
},
{
"text": "The proportion of non-argumentative text amounts to 47,474 tokens (32.2%) and 1,631 sentences (22.9%). The number of sentences with several argument components is 583, of which 302 include several components with different types (e.g., a claim followed by premise). Therefore, the identification of argument components requires the separation of argumentative from non-argumentative text units and the recognition of component boundaries at the token level. The proportion of paragraphs with unlinked argument components (e.g., unsupported claims without incoming relations) is 421 (23%). Thus, methods that link all argument components in a paragraph are only of limited use for identifying the argumentation structures in our corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.5"
},
{
"text": "In total, the corpus includes 1,130 arguments (i.e., claims supported by at least one premise). Only 140 of them have an attack relation. Thus, the proportion of arguments with attack relations is considerably lower than in the microtext corpus from Peldszus and Stede (2015) . Most of the arguments are convergent-that is, the depth of the argument is 1. The number of arguments with serial structure is 236 (20.9%).",
"cite_spans": [
{
"start": 250,
"end": 275,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Statistics",
"sec_num": "4.5"
},
{
"text": "Our approach for parsing argumentation structures consists of five consecutive subtasks, depicted in Figure 3 . The identification model separates argumentative from nonargumentative text units and recognizes the boundaries of argument components. The next three models constitute a joint model for recognizing the argumentation structure. We train two base classifiers. The argument component classification model labels each argument component as major claim, claim, or premise, and the argumentative relation identification model recognizes if two argument components are argumentatively linked or not. The tree generation model globally optimizes the results of the two base classifiers for finding a tree (or several ones) in each paragraph. Finally, the stance recognition model differentiates between support and attack relations. For preprocessing, we use several models from the DKPro Framework (Eckart de Castilho and Gurevych 2014). We identify tokens and sentence boundaries using the LanguageTool segmenter 6 and identify paragraphs by checking for line breaks. We lemmatize each token using the Mate Tools lemmatizer (Bohnet et al. 2013) and apply the Stanford part-of-speech (POS) tagger (Toutanova et al. 2003) , constituent and dependency parsers (Klein and Manning 2003) , and sentiment analyzer (Socher et al. 2013) . We use a discourse parser from Lin, Ng, and Kan (2014) for recognizing PDTB-style discourse relations. We use the DKPro TC text classification framework (Daxenberger et al. 2014) for feature extraction and experimentation.",
"cite_spans": [
{
"start": 1131,
"end": 1151,
"text": "(Bohnet et al. 2013)",
"ref_id": "BIBREF8"
},
{
"start": 1203,
"end": 1226,
"text": "(Toutanova et al. 2003)",
"ref_id": "BIBREF90"
},
{
"start": 1264,
"end": 1288,
"text": "(Klein and Manning 2003)",
"ref_id": "BIBREF47"
},
{
"start": 1314,
"end": 1334,
"text": "(Socher et al. 2013)",
"ref_id": "BIBREF83"
},
{
"start": 1490,
"end": 1515,
"text": "(Daxenberger et al. 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 3",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Parsing Argumentation Structure",
"sec_num": "5."
},
{
"text": "In the following sections, we describe each model in detail. For finding the bestperforming models, we conduct model selection on our training data using 5-fold crossvalidation. Then, we conduct model assessment on our test data. We determine the evaluation scores of each cross-validation experiment by accumulating the confusion matrices of each fold into one confusion matrix, which has been shown to be the least biased method for evaluating cross-validation experiments (Forman and Scholz 2010) . We use macro-averaging as described by Sokolova and Lapalme (2009) and report macro precision (P), macro recall (R), and macro F1 scores (F1). We use a two-sided Wilcoxon signed-rank test with p = 0.01 for significance testing. Because most evaluation measures for comparing system outputs are not normally distributed (S\u00f8gaard 2013) , this non-parametric test is preferable to parametric tests, which make stronger assumptions about the underlying distribution of the random variables. We apply this test to all reported evaluation scores obtained for each of the 80 essays in our test set.",
"cite_spans": [
{
"start": 475,
"end": 499,
"text": "(Forman and Scholz 2010)",
"ref_id": "BIBREF34"
},
{
"start": 541,
"end": 568,
"text": "Sokolova and Lapalme (2009)",
"ref_id": "BIBREF85"
},
{
"start": 821,
"end": 835,
"text": "(S\u00f8gaard 2013)",
"ref_id": "BIBREF84"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Argumentation Structure",
"sec_num": "5."
},
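The pooled-confusion-matrix evaluation described above can be sketched in Python (a minimal illustration with invented labels and fold outputs, not the authors' code; `accumulate_confusion` and `macro_scores` are hypothetical helpers):

```python
from collections import Counter

def accumulate_confusion(folds, labels):
    """Sum per-fold counts into one pooled confusion matrix cm[(gold, pred)]."""
    cm = Counter()
    for gold_seq, pred_seq in folds:
        for g, p in zip(gold_seq, pred_seq):
            cm[(g, p)] += 1
    return cm

def macro_scores(cm, labels):
    """Macro precision/recall/F1 computed from the pooled matrix."""
    ps, rs, fs = [], [], []
    for c in labels:
        tp = cm[(c, c)]
        fp = sum(cm[(g, c)] for g in labels if g != c)
        fn = sum(cm[(c, p)] for p in labels if p != c)
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

# Two toy "folds" of (gold, predicted) label sequences
folds = [(["claim", "premise", "premise"], ["claim", "premise", "claim"]),
         (["premise", "claim"], ["premise", "claim"])]
cm = accumulate_confusion(folds, ["claim", "premise"])
print(macro_scores(cm, ["claim", "premise"]))
```

Pooling first and averaging once avoids the bias of averaging per-fold F1 scores, which is the point of the Forman and Scholz (2010) recommendation.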
{
"text": "The remainder of this section is structured as follows: In the following section, we introduce the baselines and the upper bound for each task. In Section 5.2, we present the identification model that detects argument components and their boundaries. In Section 5.3, we propose a new joint model for identifying argumentation structures. In Section 5.4, we introduce our stance recognition model. In Section 5.5, we report the results of the model assessment on our test data and on the microtext corpus from Peldszus and Stede (2015) . We present the results of the error analysis in Section 5.6. We evaluate the identification model independently and use the gold standard argument components for evaluating the remaining models.",
"cite_spans": [
{
"start": 509,
"end": 534,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Argumentation Structure",
"sec_num": "5."
},
{
"text": "For evaluating our models, we use two different types of baselines: First, we use majority baselines that label each instance with the majority class. Table A .1 in Appendix A shows the class distribution in our training data and test data for each task.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table A",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baselines and Upper Bound",
"sec_num": "5.1"
},
{
"text": "Second, we use heuristic baselines, which are motivated by the common structure of persuasive essays (Whitaker 2009; Perutz 2010) . The heuristic baseline of the identification task exploits sentence boundaries. It selects all sentences as argument components except the first two and the last sentence of an essay. 7 The heuristic baseline of the classification task labels the first argument component in each body paragraph as claim, and all remaining components in body paragraphs as premise. The last argument component in the introduction and the first argument component in the conclusion are classified as major claim and all remaining argument components in the introduction and conclusion are labeled as claim. The heuristic baseline for the relation identification classifies an argument component pair as linked if the target is the first component of a body paragraph. We expect that this baseline will yield good results, because 62% of all body paragraphs in our corpus start with a claim. The heuristic baseline of the stance recognition classifies each argument component in the second to last paragraph as attack. The motivation for this baseline stems from essay writing guidelines, which recommend including opposing arguments in the second to last paragraph.",
"cite_spans": [
{
"start": 101,
"end": 116,
"text": "(Whitaker 2009;",
"ref_id": "BIBREF94"
},
{
"start": 117,
"end": 129,
"text": "Perutz 2010)",
"ref_id": "BIBREF73"
},
{
"start": 316,
"end": 317,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Upper Bound",
"sec_num": "5.1"
},
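The identification and classification heuristics can be sketched as follows (a toy illustration; the essay representation and all names are invented, and the introduction/conclusion rules of the classification baseline are omitted):

```python
def heuristic_identification(essay_sentences):
    """Select all sentences as argument components except the first two
    and the last sentence of the essay (the identification heuristic)."""
    return essay_sentences[2:-1]

def heuristic_classification(body_paragraphs):
    """Label the first component of each body paragraph as 'claim' and
    all remaining components as 'premise'."""
    labels = {}
    for para in body_paragraphs:
        for k, comp in enumerate(para):
            labels[comp] = "claim" if k == 0 else "premise"
    return labels

sents = ["intro1", "intro2", "body1", "body2", "body3", "conclusion"]
print(heuristic_identification(sents))  # ['body1', 'body2', 'body3']
body = [["c1", "c2", "c3"], ["c4", "c5"]]
print(heuristic_classification(body))
```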
{
"text": "We determine the human upper bound for each task by averaging the evaluation scores of all three annotator pairs on our test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Upper Bound",
"sec_num": "5.1"
},
{
"text": "We consider the identification of argument components as a sequence labeling task at the token level. We encode the argument components using an IOB-tagset (Ramshaw and Marcus 1995) and consider an entire essay as a single sequence. Accordingly, we label the first token of each argument component as \"Arg-B\", the tokens covered by an argument component as \"Arg-I\", and non-argumentative tokens as \"O\". As a learner, we use a CRF (Lafferty, McCallum, and Pereira 2001) with the averaged perceptron training method (Collins 2002) . Because a CRF considers contextual information, the model is particularly suited for sequence labeling tasks (Goudas et al. 2014, page 292) . For each token, we extract the following features (Table 7) :",
"cite_spans": [
{
"start": 156,
"end": 181,
"text": "(Ramshaw and Marcus 1995)",
"ref_id": "BIBREF77"
},
{
"start": 430,
"end": 468,
"text": "(Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF51"
},
{
"start": 514,
"end": 528,
"text": "(Collins 2002)",
"ref_id": "BIBREF23"
},
{
"start": 640,
"end": 670,
"text": "(Goudas et al. 2014, page 292)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 723,
"end": 732,
"text": "(Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
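A minimal sketch of the IOB encoding described above (the component span is a hypothetical token-offset annotation; the actual system learns these tags with a CRF over the features in Table 7):

```python
def iob_encode(tokens, components):
    """Tag tokens with Arg-B / Arg-I / O given component spans as
    (start, end) token offsets, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end in components:
        tags[start] = "Arg-B"          # first token of the component
        for i in range(start + 1, end):
            tags[i] = "Arg-I"          # tokens inside the component
    return tags

tokens = "I think that smoking should be banned in public".split()
# One hypothetical component covering "smoking should be banned in public"
print(list(zip(tokens, iob_encode(tokens, [(3, 9)]))))
```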
{
"text": "Structural features capture the position of the token. We expect these features to be effective for filtering non-argumentative text units, since the introductions and conclusions of essays include few argumentatively relevant content. The punctuation features indicate if the token is a punctuation and if the token is adjacent to a punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
{
"text": "Syntactic features consist of the token's POS as well as features extracted from the Lowest Common Ancestor (LCA) of the current token t i and its adjacent tokens in the constituent parse tree. First, we define LCA preceding (t i ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
{
"text": "|lcaPath(t i ,t i\u22121 )| depth",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
{
"text": ", where |lcaPath(u, v)| is the length of the path from u to the LCA of u and v, and depth the depth of the constituent parse tree. Second, we define LCA following (t i ) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
{
"text": "|lcaPath(t i ,t i+1 )| depth",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
{
"text": ", which considers the current token t i and its following token t i+1 . 8 Additionally, we add the constituent types of both lowest common ancestors to our feature set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argument Components",
"sec_num": "5.2"
},
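Assuming a parent-pointer representation of the constituent tree, the normalized LCA-path feature might be computed as follows (a toy sketch; the node names and the depth value are invented):

```python
def path_to_root(node, parent):
    """Return the list of nodes from node up to the root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca_path_len(u, v, parent):
    """Length of the path from u to the lowest common ancestor of u and v."""
    ancestors_v = set(path_to_root(v, parent))
    for steps, node in enumerate(path_to_root(u, parent)):
        if node in ancestors_v:
            return steps
    raise ValueError("nodes are not in the same tree")

# Toy constituent tree: S -> NP VP, NP -> t1, VP -> t2
parent = {"t1": "NP", "t2": "VP", "NP": "S", "VP": "S"}
depth = 2  # assumed depth of the toy parse tree
lca_preceding = lca_path_len("t2", "t1", parent) / depth
print(lca_preceding)  # path t2 -> VP -> S has length 2, normalized by depth 2
```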
{
"text": "Features used for argument component identification (*indicates genre-dependent features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 7",
"sec_num": null
},
{
"text": "Token position Token present in introduction or conclusion*; token is first or last token in sentence; relative and absolute token position in document, paragraph and sentence Punctuation Token precedes or follows any punctuation, full stop, comma and semicolon; token is any punctuation or full stop Position of covering Absolute and relative position of the token's sentence covering sentence in the document and paragraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural",
"sec_num": null
},
{
"text": "Part-of-speech The token's part-of-speech Lowest common ancestor (LCA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic",
"sec_num": null
},
{
"text": "Normalized length of the path to the LCA with the following and preceding token in the parse tree LCA types",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic",
"sec_num": null
},
{
"text": "The two constituent types of the LCA of the current token and its preceding and following token",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic",
"sec_num": null
},
{
"text": "Lexico-syntactic Combination of lexical and syntactic features as described by Soricut and Marcu (2003) Prob Probability Conditional probability of the current token being the beginning of a component given its preceding tokens",
"cite_spans": [
{
"start": 91,
"end": 103,
"text": "Marcu (2003)",
"ref_id": "BIBREF86"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LexSyn",
"sec_num": null
},
{
"text": "Lexico-syntactic features have been shown to be effective for segmenting elementary discourse units (Hernault et al. 2010) . We adopt the features introduced by Soricut and Marcu (2003) . We use lexical head projection rules (Collins 2003) implemented in the Stanford tool suite to lexicalize the constituent parse tree. For each token t, we extract its uppermost node n in the parse tree with the lexical head t and define a lexicosyntactic feature as the combination of t and the constituent type of n. We also consider the child node of n in the path to t and its right sibling, and combine their lexical heads and constituent types as described by Soricut and Marcu (2003) .",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "(Hernault et al. 2010)",
"ref_id": "BIBREF42"
},
{
"start": 173,
"end": 185,
"text": "Marcu (2003)",
"ref_id": "BIBREF86"
},
{
"start": 225,
"end": 239,
"text": "(Collins 2003)",
"ref_id": "BIBREF24"
},
{
"start": 652,
"end": 676,
"text": "Soricut and Marcu (2003)",
"ref_id": "BIBREF86"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LexSyn",
"sec_num": null
},
{
"text": "The probability feature is the conditional probability of the current token t i being the beginning of an argument component (\"Arg-B\") given its preceding tokens. We maximize the probability for preceding tokens of a length up to n = 3: argmax n\u2208{1,2,3}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexSyn",
"sec_num": null
},
{
"text": "P(t i = Arg-B|t i\u2212n , ..., t i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexSyn",
"sec_num": null
},
{
"text": "To estimate these probabilities, we use maximum likelihood estimation on our training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexSyn",
"sec_num": null
},
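The maximum likelihood estimation of this probability can be sketched as follows (the training sequences are toy data, and `train_counts` and `prob_arg_b` are hypothetical helpers, not the authors' implementation):

```python
from collections import Counter

def train_counts(sequences, max_n=3):
    """Count n-gram contexts (up to max_n preceding tokens) and how often
    the token following each context is tagged Arg-B."""
    ctx, ctx_b = Counter(), Counter()
    for tokens, tags in sequences:
        for i in range(len(tokens)):
            for n in range(1, max_n + 1):
                if i - n < 0:
                    continue
                c = tuple(tokens[i - n:i])
                ctx[c] += 1
                if tags[i] == "Arg-B":
                    ctx_b[c] += 1
    return ctx, ctx_b

def prob_arg_b(preceding, ctx, ctx_b, max_n=3):
    """argmax over n in {1..max_n} of the MLE P(Arg-B | last n preceding tokens)."""
    best = 0.0
    for n in range(1, max_n + 1):
        c = tuple(preceding[-n:]) if n <= len(preceding) else None
        if c and ctx[c]:
            best = max(best, ctx_b[c] / ctx[c])
    return best

seqs = [(["I", "believe", "that", "smoking", "harms"],
         ["O", "O", "O", "Arg-B", "Arg-I"])]
ctx, ctx_b = train_counts(seqs)
print(prob_arg_b(["believe", "that"], ctx, ctx_b))
```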
{
"text": ". The results of model selection show that using all features performs best. Table C .1 in Appendix C provides the detailed results of the feature analysis. Table 8 shows the results of the model assessment on the test data. The heuristic baseline achieves a macro F1 score of 0.642. It achieves an F1 score of 0.677 for non-argumentative tokens (\"O\") and 0.867 for argumentative tokens (\"Arg-I\"). Thus, the heuristic baseline effectively separates argumentative from non-argumentative text units. However, it achieves a low F1 score of 0.364 for identifying the beginning of argument components (\"Arg-B\"). Because it does not split sentences, it recognizes 145 fewer argument components than the number of gold standard components in the test data. The CRF model with all features significantly outperforms the macro F1 score of the heuristic baseline (p = 7.85 \u00d7 10 \u221215 ). Compared with the heuristic baseline, it performs significantly better in identifying the beginning of argument components (p = 1.65 \u00d7 10 \u221214 ). It also performs better for separating argumentative from non-argumentative tokens (p = 4.06 \u00d7 10 \u221214 ). In addition, the number of identified argument components differs only slightly from the number of gold standard components in our test data. It identifies 1,272 argument components, whereas the number of gold standard components in our test data amounts to 1,266. The human upper bound yields a macro F1 score of 0.886 for identifying argument components. The macro F1 score of our model is only 0.019 less. Therefore, our model achieves 97.9% of human performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 84,
"text": "Table C",
"ref_id": "TABREF13"
},
{
"start": 157,
"end": 164,
"text": "Table 8",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results of Argument Component Identification",
"sec_num": "5.2.1"
},
{
"text": "For identifying the most frequent errors of our model, we manually investigated the predicted argument components. The most frequent errors are false positives of \"Arg-I\". The model classifies 1,548 out of 9,403 non-argumentative tokens (\"O\") as argumentative (\"Arg-I\"). The reason for these errors is threefold: First, the model frequently labels non-argumentative sentences in the conclusion of an essay as argumentative. These sentences are, for instance, non-argumentative recommendations for future actions or summaries of the essay topic. Second, the model does not correctly recognize non-argumentative sentences in body paragraphs. It wrongly identifies argument components in 13 out of the 15 non-argumentative body paragraph sentences in our test data. The reason for these errors may be attributed to the high class imbalance in our training data. Third, the model tends to annotate lengthy non-argumentative preceding tokens as argumentative. For instance, it labels subordinate clauses preceding the actual argument component as argumentative in sentences similar to \"In addition to the reasons mentioned above, [actual \"Arg-B\"] ...\" (underlined text units represent the annotations of our model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis.",
"sec_num": "5.2.2"
},
{
"text": "The second most frequent cause of errors are misclassified beginnings of argument components. The model classifies 137 of the 1,266 beginning tokens as \"Arg-I\". The model, for instance, fails to identify the correct beginning in sentences like \"Hence, from this case we are capable of stating that [actual \"Arg-B\"] ... \" or \"Apart from the reason I mentioned above, another equally important aspect is that [actual \"Arg-B\"] ...\". These examples also explain the false negatives of non-argumentative tokens which are wrongly classified as \"Arg-B\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis.",
"sec_num": "5.2.2"
},
{
"text": "The identification of argumentation structures involves the classification of argument component types and the identification of argumentative relations. Both argumentative types and argumentative relations share information (Stab and Gurevych 2014b, p. 54) . For instance, if an argument component is classified as claim, it is less likely to exhibit outgoing relations and more likely to have incoming relations. On the other hand, an argument component with an outgoing relation and few incoming relations is more likely to be a premise. Therefore, we propose a joint model that combines both types of information for finding the optimal structure. We train two local base classifiers. One classifier recognizes the type of argument components (Section 5.3.1), and another identifies argumentative relations between argument components (Section 5.3.2). For both models, we use an SVM (Cortes and Vapnik 1995) with a polynomial kernel implemented in the Weka machine learning framework (Hall et al. 2009) . The motivation for selecting this learner stems from the results of our previous work, in which we found that SVMs outperform several other learners in both tasks (Stab and Gurevych 2014b, page 51) . We globally optimize the outcomes of both classifiers in order to find the optimal argumentation structure using Integer Linear Programming (Section 5.3.3). In the following three sections, we first introduce the features of the two base classifiers before describing the Integer Linear Programming model.",
"cite_spans": [
{
"start": 225,
"end": 257,
"text": "(Stab and Gurevych 2014b, p. 54)",
"ref_id": null
},
{
"start": 887,
"end": 911,
"text": "(Cortes and Vapnik 1995)",
"ref_id": "BIBREF27"
},
{
"start": 988,
"end": 1006,
"text": "(Hall et al. 2009)",
"ref_id": "BIBREF40"
},
{
"start": 1172,
"end": 1206,
"text": "(Stab and Gurevych 2014b, page 51)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizing Argumentation Structures",
"sec_num": "5.3"
},
{
"text": "We consider the classification of argument component types as multiclass classification and label each argument component as \"major claim,\" \"claim,\" or \"premise.\" We experiment with the following feature groups:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Argument Components.",
"sec_num": "5.3.1"
},
{
"text": "Lexical features consist of binary lemmatized unigrams and the 2k most frequent dependency word pairs. We extract the unigrams from the component and its preceding tokens to ensure that discourse markers are included in the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Argument Components.",
"sec_num": "5.3.1"
},
{
"text": "Structural features capture the position of the component in the document and token statistics (Table 9 ). Because major claims occur frequently in introductions or conclusions, we expect that these features are valuable for differentiating component types.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "(Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifying Argument Components.",
"sec_num": "5.3.1"
},
{
"text": "Indicator features are based on four categories of lexical indicators that we manually extracted from 30 additional essays. Forward indicators such as \"therefore\", \"thus\", or \"consequently\" signal that the component following the indicator is a result of preceding argument components. Backward indicators indicate that the component following the indicator supports a preceding component. Examples of this category are \"in addition\", \"because\", or \"additionally\". Thesis indicators such as \"in my opinion\" or \"I believe that\" indicate major claims. Rebuttal indicators signal attacking premises or contra arguments. Examples are \"although\", \"admittedly\", or \"but\". The complete lists of all four categories are provided in Table B .1 in Appendix B. We define for each category a binary feature that indicates if an indicator of a category is present in the component or its preceding tokens. An additional binary feature indicates if first-person indicators are present in the argument component or its preceding tokens (Table 9) . We assume that first-person indicators are informative for identifying major claims.",
"cite_spans": [],
"ref_spans": [
{
"start": 724,
"end": 731,
"text": "Table B",
"ref_id": null
},
{
"start": 1021,
"end": 1030,
"text": "(Table 9)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classifying Argument Components.",
"sec_num": "5.3.1"
},
{
"text": "Contextual features capture the context of an argument component. We define eight binary features set to true if a forward, backward, rebuttal, or thesis indicator precedes or follows the current component in its covering paragraph. Additionally, we count the number of noun and verb phrases of the argument component that are also present in the introduction or conclusion of the essay. These features are motivated by the observation that claims frequently restate entities or phrases of the essay topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifying Argument Components.",
"sec_num": "5.3.1"
},
{
"text": "Features of the argument component classification model (*indicates genre-dependent features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Table 9",
"sec_num": null
},
{
"text": "Feature Description Furthermore, we add four binary features indicating if the current component shares a noun or verb phrase with the introduction or conclusion. Syntactic features consist of the POS distribution of the argument component, the number of subclauses in the covering sentence, the depth of the constituent parse tree of the covering sentence, the tense of the main verb of the component, and a binary feature that indicates whether a modal verb is present in the component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group",
"sec_num": null
},
{
"text": "The probability features are the conditional probabilities of the current component being assigned the type t \u2208 {MajorClaim, Claim, Premise} given the sequence of tokens p directly preceding the component. To estimate P(t|p), we use maximum likelihood estimation on our training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical",
"sec_num": null
},
{
"text": "Discourse features are based on the output of the PDTB-style discourse parser from Lin, Ng, and Kan (2014) . Each binary feature is a triple combining the following information: (1) the type of the relation that overlaps with the current argument component, (2) whether the current argument component overlaps with the first or second elementary discourse unit of a relation, and (3) if the discourse relation is implicit or explicit. For instance, the feature Contrast imp Arg1 indicates that the current component overlaps with the first discourse unit of an implicit contrast relation. The use of these features is motivated by the findings of Cabrio, Tonelli, and Villata (2013) . By analyzing several example arguments, they hypothesized that general discourse relations could be informative for identifying argument components.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "Lin, Ng, and Kan (2014)",
"ref_id": "BIBREF53"
},
{
"start": 647,
"end": 682,
"text": "Cabrio, Tonelli, and Villata (2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical",
"sec_num": null
},
{
"text": "Embedding features are based on word embeddings trained on a part of the Google news data set (Mikolov et al. 2013) . We sum the vectors of each word of an argument component and its preceding tokens and add it to our feature set. In contrast to common bag-of-words representations, embedding features have a continuous feature space that helped to achieve better results in several NLP tasks (Socher et al. 2013) .",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Mikolov et al. 2013)",
"ref_id": "BIBREF59"
},
{
"start": 393,
"end": 413,
"text": "(Socher et al. 2013)",
"ref_id": "BIBREF83"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical",
"sec_num": null
},
{
"text": "By experimenting with individual features and several feature combinations, we found that a combination of all features yields the best results. The results of the model selection can be found in Table C .2 in Appendix C.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table C",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Lexical",
"sec_num": null
},
{
"text": "Relations. The relation identification model classifies ordered pairs of argument components as \"linked\" or \"not-linked.\" In this analysis step, we consider both argumentative support and attack relations as \"linked.\" For each paragraph with argument components c 1 , ..., c n , we consider p = (c i , c j ) with i = j and 1 \u2264 i, j \u2264 n as an argument component pair. An argument component pair is \"linked\" if our corpus contains an argumentative relation with c i as source component and c j as target component. The class distribution is skewed towards \"not-linked\" pairs (Table A .1). We experiment with the following features:",
"cite_spans": [],
"ref_spans": [
{
"start": 573,
"end": 581,
"text": "(Table A",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
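Pair construction per paragraph, as defined above, reduces to enumerating the ordered pairs of distinct components (component IDs are purely illustrative):

```python
def component_pairs(components):
    """All ordered pairs (source, target) of distinct argument
    components within one paragraph."""
    return [(ci, cj) for ci in components for cj in components if ci != cj]

pairs = component_pairs(["c1", "c2", "c3"])
print(len(pairs))  # n * (n - 1) = 6 ordered pairs
```

Because every non-gold pair is labeled "not-linked", the number of negative instances grows quadratically with paragraph length, which explains the skewed class distribution noted above.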
{
"text": "Lexical features are binary lemmatized unigrams of the source and target component and their preceding tokens. We limit the number of unigrams for both source and target component to the 500 most frequent words in our training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Syntactic features include binary POS features of the source and target component and the 500 most frequent production rules extracted from the parse tree of the source and target component as described in our previous work (Stab and Gurevych 2014b) .",
"cite_spans": [
{
"start": 224,
"end": 249,
"text": "(Stab and Gurevych 2014b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Structural features consist of the number of tokens in the source and target component, statistics on the components of the covering paragraph of the current pair, and position features (Table 10) .",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 196,
"text": "(Table 10)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Indicator features are based on the forward, backward, thesis, and rebuttal indicators introduced in Section 5.3.1. We extract binary features from the source and target component and the context of the current pair (Table 10) . We assume that these features are helpful for modeling the direction of argumentative relations and the context of the current component pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 226,
"text": "(Table 10)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Discourse features are extracted from the source and target component of each component pair as described in Section 5.3.1. Although PDTB-style discourse relations are limited to adjacent relations, we expect that the types of general discourse relations can be helpful for identifying argumentative relations. We also experimented with features capturing PDTB relations between the target and source component. However, those were not effective for capturing argumentative relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "PMI features are based on the assumption that particular words indicate incoming or outgoing relations. For instance, tokens like \"therefore\", \"thus\", or \"hence\" can signal incoming relations, whereas tokens such as \"because\", \"since\", or \"furthermore\" may indicate outgoing relations. To capture this information, we use Pointwise Mutual Information (PMI), which has been successfully used for measuring word associations (Turney 2002; Church and Hanks 1990) . However, instead of determining the PMI of two words, we estimate the PMI between a lemmatized token t and the direction of a relation",
"cite_spans": [
{
"start": 423,
"end": 436,
"text": "(Turney 2002;",
"ref_id": "BIBREF91"
},
{
"start": 437,
"end": 459,
"text": "Church and Hanks 1990)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "d = {incoming, outgoing} as PMI(t, d) = log p(t,d) p(t) p(d) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Here, p(t, d) is the probability that token t occurs in an argument component with either incoming or outgoing relations. The ratio between p(t, d) and p(t) p(d) indicates the dependence between a token and the direction of a relation. We estimate PMI(t, d) for each token in our training data. We extract the ratio of tokens positively and negatively associated with incoming or outgoing relations for both source and target components. Additionally, we extract four binary features, which indicate if any token of the components has a positive or negative association with either incoming or outgoing relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
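A toy estimate of PMI(t, d) from labeled components might look like this (the component data and counting scheme are invented for illustration; the paper estimates the probabilities on the training corpus):

```python
import math
from collections import Counter

def pmi_scores(components):
    """components: list of (tokens, directions) where directions is a set
    drawn from {'incoming', 'outgoing'}. Returns PMI(t, d) per observed pair."""
    tok_count, dir_count, joint = Counter(), Counter(), Counter()
    total = 0
    for tokens, directions in components:
        for t in tokens:
            total += 1
            tok_count[t] += 1
            for d in directions:
                joint[(t, d)] += 1
    for (t, d), c in joint.items():
        dir_count[d] += c
    pmi = {}
    for (t, d), c in joint.items():
        p_td = c / total                      # p(t, d)
        p_t = tok_count[t] / total            # p(t)
        p_d = dir_count[d] / total            # p(d)
        pmi[(t, d)] = math.log(p_td / (p_t * p_d))
    return pmi

comps = [(["because", "smoking", "harms"], {"outgoing"}),
         (["therefore", "ban", "it"], {"incoming"})]
pmi = pmi_scores(comps)
print(pmi[("because", "outgoing")] > 0)  # positively associated
```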
{
"text": "Shared noun features (shNo) indicate if the source and target components share a noun. We also add the number of shared nouns to our feature set. These features are motivated by the observation that claims and premises often share the same subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "For selecting the best performing model, we conducted feature ablation tests and experimented with individual features. The results show that none of the feature groups is informative when used individually. We achieved the best performance by removing lexical features from our feature set (detailed results of the model selection can be found in Table C .3 in Appendix C).",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table C",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Identifying Argumentative",
"sec_num": "5.3.2"
},
{
"text": "Relations and Argument Component Types. Both base classifiers identify argument component types and argumentative relations locally. Consequently, the results may not be globally consistent. For instance, the relation identification model does not link 37.1% of all premises in our model selection experiments. Therefore, we propose a joint model that globally optimizes the outcomes of the two base classifiers. We formalize this task as an Integer Linear Programming (ILP) problem. Given a paragraph including n argument components, 9 we define the following objective function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmax x n i=1 n j=1 w ij x ij",
"eq_num": "(1)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "with variables x ij \u2208 {0, 1} indicating an argumentative relation from argument component i to argument component j. 10 Each coefficient w ij \u2208 R is a weight of a relation. It is determined by incorporating the outcomes of the two base classifiers. To ensure that the resulting structure is a tree, we define the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall i : \\sum_{j=1}^{n} x_{ij} \\leq 1 \\quad (2) \\qquad \\sum_{i=1}^{n} \\sum_{j=1}^{n} x_{ij} \\leq n - 1",
"eq_num": "(3)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall i : x_{ii} = 0",
"eq_num": "(4)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "Equation (2) prevents an argument component i from having more than one outgoing relation. Equation (3) ensures that a paragraph includes at least one root node (i.e., a node without an outgoing relation). Equation (4) prevents an argumentative relation from having the same source and target component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "To prevent cycles, we adopt the approach described by K\u00fcbler et al. (2008, page 92) . We add the auxiliary variables b ij \u2208 {0, 1} to our objective function (1) where b ij = 1 if there is a directed path from argument component i to argument component j. The following constraints tie the auxiliary variables b ij to the variables x ij :",
"cite_spans": [
{
"start": 54,
"end": 83,
"text": "K\u00fcbler et al. (2008, page 92)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall i\\, \\forall j : x_{ij} - b_{ij} \\leq 0 \\quad (5) \\qquad \\forall i\\, \\forall j\\, \\forall k : b_{ik} - b_{ij} - b_{jk} \\geq -1 \\quad (6) \\qquad \\forall i : b_{ii} = 0",
"eq_num": "(7)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "The first constraint ensures that there is a path from i to j represented in variable b ij if there is a direct relation between the argument components i and j. The second constraint covers all paths of length greater than 1 in a transitive way: if there is a path from argument component i to argument component j (b ij = 1) and a path from argument component j to argument component k (b jk = 1), then there is also a path from argument component i to argument component k (b ik = 1). Thus, it iteratively covers paths of length l + 1 given the covered paths of length l. The third constraint prevents cycles by forbidding directed paths that start and end at the same argument component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
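The objective function (1) and constraints (2)–(7) can be made concrete with a small sketch. The paper solves the program with the lpsolve framework; the following illustrative Python (the names `best_tree` and `has_cycle` are our own, not from the paper) instead enumerates all candidate relation sets for a tiny paragraph and keeps the feasible one with maximal weight. This is only feasible for very small n, but it makes each constraint explicit.

```python
from itertools import product

def has_cycle(edges, n):
    # DFS-based cycle check; plays the role of constraints (5)-(7).
    adj = {i: [j for (a, j) in edges if a == i] for i in range(n)}
    state = [0] * n  # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state[i] == 0 and dfs(i) for i in range(n))

def best_tree(w):
    """Maximize sum of w[i][j] * x[i][j] subject to constraints (2)-(4)
    and acyclicity, by exhaustive search (illustration only)."""
    n = len(w)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]  # (4)
    best, best_edges = float("-inf"), None
    for bits in product([0, 1], repeat=len(pairs)):
        edges = [p for p, b in zip(pairs, bits) if b]
        if any(sum(1 for (i, j) in edges if i == s) > 1 for s in range(n)):
            continue  # (2): at most one outgoing relation per component
        if len(edges) > n - 1:
            continue  # (3): at most n-1 relations, so at least one root
        if has_cycle(edges, n):
            continue  # acyclicity, enforced via (5)-(7) in the ILP
        score = sum(w[i][j] for (i, j) in edges)
        if score > best:
            best, best_edges = score, edges
    return best_edges
```

For a three-component paragraph whose weights favor relations into component 0, the search returns the tree with both other components attached to that root.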
{
"text": "Having defined the ILP model, we consolidate the results of the two base classifiers. We accomplish this by determining the weight matrix W \u2208 R n\u00d7n that contains the coefficients w ij of our objective function. The weight matrix W can be considered an adjacency matrix: the greater the weight of a particular relation, the more likely it is that the relation appears in the optimal structure found by the ILP-solver.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "First, we incorporate the results of the relation identification model. Its result can be considered as an adjacency matrix R \u2208 {0, 1} n\u00d7n . For each pair of argument components (i, j) with 1 \u2264 i, j \u2264 n, each r ij \u2208 R is 1 if the relation identification model predicts an argumentative relation from argument component i (source) to argument component j (target), or 0 if the model does not predict an argumentative relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "Second, we derive a claim score cs i for each argument component i from the predicted relations in R:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cs_i = \\frac{relin_i - relout_i + n - 1}{rel + n - 1}",
"eq_num": "(8)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "Here, relin i = \u2211_{k=1}^{n} r ki [i \u2260 k] is the number of predicted incoming relations of argument component i, relout i = \u2211_{l=1}^{n} r il [i \u2260 l] is the number of predicted outgoing relations of argument component i, and rel = \u2211_{k=1}^{n} \u2211_{l=1}^{n} r kl [k \u2260 l] is the total number of relations predicted in the current paragraph. The claim score cs i is greater for argument components with many incoming relations and few outgoing relations, and smaller for argument components with few incoming relations and many outgoing relations. By normalizing the score with the total number of predicted relations and argument components, it also accounts for contextual information in the current paragraph and prevents overly optimistic scores. For example, if all predicted relations point to argument component i, which has no outgoing relations, cs i is exactly 1. On the other hand, if there is an argument component j with no incoming and one outgoing relation in a paragraph with four argument components and three predicted relations in R, cs j is 1/3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
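Equation (8) translates directly into code. A minimal sketch (the function name `claim_score` is illustrative, not from the paper):

```python
def claim_score(R, i):
    """Claim score cs_i of Equation (8) for component i, given the
    predicted relation matrix R (R[k][l] = 1 iff a relation from
    component k to component l was predicted)."""
    n = len(R)
    relin = sum(R[k][i] for k in range(n) if k != i)    # incoming relations
    relout = sum(R[i][l] for l in range(n) if l != i)   # outgoing relations
    rel = sum(R[k][l] for k in range(n) for l in range(n) if k != l)
    return (relin - relout + n - 1) / (rel + n - 1)
```

This reproduces the two worked examples from the text: when all predicted relations point to component i and it has no outgoing relation, the score is exactly 1; a component with no incoming and one outgoing relation in a four-component paragraph with three predicted relations scores 1/3.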
{
"text": "Because it is more likely that a relation links an argument component with a lower claim score to an argument component with a higher claim score, we determine the weight for each argumentative relation as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "cr_{ij} = cs_j - cs_i",
"eq_num": "(9)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "By treating cs j of the target component j as a positive term, we assign a higher weight to relations pointing to argument components that are likely to be a claim. By subtracting the claim score cs i of the source component i, we assign smaller weights to relations that originate from argument components with a larger claim score. Third, we incorporate the argument component types predicted by the classification model. We assign a higher score to the weight w ij if the target component j is predicted as a claim, because it is more likely that argumentative relations point to claims. Accordingly, we set c ij = 1 if argument component j is labeled as a claim and c ij = 0 if argument component j is labeled as a premise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "Finally, we combine all three scores to estimate the weights of the objective function: Each \u03c6 represents a hyperparameter of the ILP model. In our model selection experiments, we found that \u03c6 r = 1/2 and \u03c6 cr = \u03c6 c = 1/4 yield the best performance. More detailed results of the model selection are provided in Table C .4 in Appendix C.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table C",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w_{ij} = \\varphi_r r_{ij} + \\varphi_{cr} cr_{ij} + \\varphi_c c_{ij}",
"eq_num": "(10)"
}
],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
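A minimal sketch of how Equations (8)–(10) combine into the weight matrix W, with the selected hyperparameters \u03c6 r = 1/2 and \u03c6 cr = \u03c6 c = 1/4 as defaults (the function and parameter names are our own, illustrative choices):

```python
def weight_matrix(R, is_claim, phi_r=0.5, phi_cr=0.25, phi_c=0.25):
    """Coefficients w_ij of Equation (10). R is the predicted relation
    matrix; is_claim[j] is True iff the base classifier labeled
    component j as a claim."""
    n = len(R)
    rel = sum(R[k][l] for k in range(n) for l in range(n) if k != l)

    def cs(i):  # claim score, Equation (8)
        relin = sum(R[k][i] for k in range(n) if k != i)
        relout = sum(R[i][l] for l in range(n) if l != i)
        return (relin - relout + n - 1) / (rel + n - 1)

    scores = [cs(i) for i in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                cr = scores[j] - scores[i]        # Equation (9)
                c = 1.0 if is_claim[j] else 0.0   # component type score
                W[i][j] = phi_r * R[i][j] + phi_cr * cr + phi_c * c
    return W
```

For a two-component paragraph where the base classifiers predict a relation from component 1 to component 0 and label component 0 as a claim, all three scores agree and the weight of the predicted relation is maximal, while the reverse direction receives a negative weight.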
{
"text": "After applying the ILP model, we adapt the argumentative relations and argument component types according to the results of the ILP-solver. We revise each relation according to the determined x ij values, set the type of all components without an outgoing relation to \"claim,\" and set the type of all remaining components to \"premise.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Jointly Modeling Argumentative",
"sec_num": "5.3.3"
},
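The type-revision step above amounts to reading the component types off the solved relation variables. A minimal sketch (assuming x is the solved 0/1 relation matrix; the function name is illustrative):

```python
def decode_component_types(x):
    """Components without an outgoing relation in the solved ILP
    become claims; all remaining components become premises."""
    n = len(x)
    return ["claim" if not any(x[i][j] for j in range(n)) else "premise"
            for i in range(n)]
```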
{
"text": "The stance recognition model differentiates between argumentative support and attack relations. We model this task as binary classification and classify each claim and premise as \"support\" or \"attack.\" The stance of each premise is encoded in the type of its outgoing relation, whereas the stance of each claim is encoded in its stance attribute. We use an SVM 11 and the features listed in Table 11. Table 12 (model assessment on persuasive essays; \u2020 = significant improvement over heuristic baseline, \u2021 = significant improvement over base classifier) shows the F1 scores of the classification, relation identification, and stance recognition tasks using our test data. The ILP joint model significantly outperforms the macro F1 score of the heuristic baselines for component classification (p = 1.49 \u00d7 10 \u22124 ) and relation identification (p = 0.003). It also significantly outperforms the macro F1 score of the base classifier for component classification (p = 7.45 \u00d7 10 \u22124 ). However, it does not yield a significant improvement over the macro F1 score of the base classifier for relation identification. The results show that the identification of claims and linked component pairs benefits most from the joint model. Compared with the base classifiers, the ILP joint model improves the F1 score of claims by 0.071 (p = 1.84 \u00d7 10 \u22124 ) and the F1 score of linked component pairs by 0.077 (p = 6.95 \u00d7 10 \u22125 ). The stance recognition model significantly outperforms the heuristic baseline by 0.118 macro F1 (p = 0.008). It yields an F1 score of 0.947 for supporting components and 0.413 for attacking components.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 399,
"text": "Table 11",
"ref_id": "TABREF7"
},
{
"start": 402,
"end": 410,
"text": "Table 12",
"ref_id": "TABREF0"
},
{
"start": 670,
"end": 678,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Classifying Support and Attack Relations",
"sec_num": "5.4"
},
{
"text": "The human upper bound yields macro F1 scores of 0.868 for component classification, 0.854 for relation identification, and 0.844 for stance recognition. The ILP joint model almost achieves human performance for classifying argument components: its macro F1 score is only 0.042 lower than the human upper bound. Regarding relation identification and stance recognition, the macro F1 scores of our model are 0.103 and 0.164 lower than human performance. Thus, our model achieves 95.2% of human performance for component identification, 87.9% for relation identification, and 80.5% for stance recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.5"
},
{
"text": "In order to verify the effectiveness of our approach, we also evaluated the ILP joint model on the English microtext corpus (cf. Section 2.4). To ensure comparability with previous results, we used the same repeated cross-validation set-up as described by Peldszus and Stede (2015). Because the microtext corpus does not include major claims, we removed the major claim label from our component classification model. Furthermore, it was necessary to adapt several features of the base classifiers, since the microtext corpus does not include non-argumentative text units; in particular, we did not consider preceding tokens for lexical, indicator, and embedding features. Table 13 (model assessment on the microtext corpus from Peldszus and Stede (2015); \u2020 = significant improvement over heuristic baseline, \u2021 = significant improvement over base classifier) shows the evaluation results of our model on the microtext corpus. Our ILP joint model significantly outperforms the macro F1 score of the heuristic baselines for component classification (p = 2.10 \u00d7 10 \u221210 ) and relation identification (p = 1.5 \u00d7 10 \u22128 ). The results also show that our model yields significantly better macro F1 scores compared to the two base classifiers (p = 0.002 for component classification and p = 7.52 \u00d7 10 \u22127 for relation identification). The stance recognition model achieves a macro F1 score of 0.745 on the microtext corpus. It significantly improves the macro F1 score of the heuristic baseline by 0.203 (p = 7.55 \u00d7 10 \u221210 ). 12 The last two rows in Table 13 show the results reported by Peldszus and Stede (2015) on the English microtext corpus. The Best EG model is their best model for component classification, and MP+p is their best model for relation identification. Compared with our ILP joint model, the Best EG model achieves better macro F1 scores for component classification and relation identification. However, because the outcomes of their systems are not available to us, we cannot determine whether this difference is significant. The MP+p model achieves a better macro F1 score for relation identification, but yields lower results for component classification and stance recognition compared to our ILP joint model. These differences can be attributed to the additional information about the function and role attributes incorporated in their joint models (cf. Section 2.3). They showed that both have a beneficial effect on component classification and relation identification in their corpus (Peldszus and Stede 2015, Figure 3). However, the role attribute is a unique feature of their corpus, and the arguments in their corpus exhibit an unusually high proportion of attack relations. In particular, 86.6% of their arguments include attack relations, whereas the proportion of arguments with attack relations in our corpus amounts to only 12.4%. Therefore, we assume that incorporating function and role attributes will not be beneficial for our corpus.",
"cite_spans": [
{
"start": 258,
"end": 283,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
},
{
"start": 1526,
"end": 1528,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 682,
"end": 690,
"text": "Table 13",
"ref_id": "TABREF1"
},
{
"start": 865,
"end": 873,
"text": "Table 13",
"ref_id": "TABREF1"
},
{
"start": 1550,
"end": 1558,
"text": "Table 13",
"ref_id": "TABREF1"
},
{
"start": 2537,
"end": 2546,
"text": "Figure 3)",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.5"
},
{
"text": "Overall, the evaluation results show that our ILP joint model significantly outperforms challenging heuristic baselines and simultaneously improves the performance of component classification and relation identification on two different types of discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.5"
},
{
"text": "In order to analyze frequent errors of the ILP joint model, we investigated the predicted argumentation structures in our test data. The confusion matrix of the component classification task (Table 14) shows that the highest confusion is between claims and premises. The model classifies 74 actual premises as claims and 82 claims as premises. By manually investigating these errors, we found that the model tends to label inner premises in serial structures as claims and wrongly identifies claims in sentences containing two premises. Regarding the relation identification, we observed that the model tends to identify argumentation structures that are more shallow than the structures in our gold standard. The model correctly identifies only 34.7% of the 98 serial arguments in our test data. This can be attributed to the \"claim-centered\" weight calculation in our objective function. In particular, the predicted relations in matrix R are the only information about serial arguments, whereas the other two scores (c ij and cr ij ) assign higher weights to relations pointing to claims. In order to determine if the ILP joint model correctly models the relationship between component types and argumentative relations, we artificially improved the predictions of both base classifiers as suggested by Peldszus and Stede (2015) . The dashed lines in Figure 4 show the performance of the artificially improved base classifiers. Continuous lines show the resulting performance of the ILP joint model. Figures 4a and 4b show the effect of improving the component classification and relation identification. They show that correct predictions of one base classifier are not maintained after applying the ILP model if the other base classifier exhibits less accurate predictions. 
In particular, less accurate argumentative relations have a more detrimental effect on the component types ( Figure 4a ) than less accurate component types have on the outcomes of the relation identification (Figure 4b ). Thus, it is more reasonable to focus on improving relation identification than component classification in future work. Figure 4c depicts the effect of improving both base classifiers: the ILP joint model improves both tasks, and it improves the component types more effectively than the argumentative relations. Therefore, we conclude that the ILP joint model successfully captures the natural relationship between argument component types and argumentative relations.",
"cite_spans": [
{
"start": 1306,
"end": 1331,
"text": "Peldszus and Stede (2015)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [
{
"start": 191,
"end": 201,
"text": "(Table 14)",
"ref_id": "TABREF2"
},
{
"start": 1354,
"end": 1362,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1503,
"end": 1521,
"text": "Figures 4a and 4b",
"ref_id": null
},
{
"start": 1889,
"end": 1898,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 1986,
"end": 1996,
"text": "(Figure 4b",
"ref_id": null
},
{
"start": 2120,
"end": 2129,
"text": "Figure 4c",
"ref_id": null
},
{
"start": 2304,
"end": 2313,
"text": "Figure 4c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.6"
},
{
"text": "Our argumentation structure parser is a pipeline consisting of several consecutive steps. Therefore, potential errors of the upstream models are propagated and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "Influence of improving the base classifiers (x-axis shows the proportion of improved predictions and y-axis the macro F1 score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "negatively influence the results of the downstream models. For example, errors of the identification model can result in flawed argumentation structures if argumentatively relevant text units are not recognized or non-argumentative text units are identified as relevant. However, our identification model yields good accuracy and an \u03b1 U of 0.958 for identifying argument components. Therefore, it is unlikely that identification errors will significantly influence the outcome of the downstream models when applied to persuasive essays. However, as demonstrated by Levy et al. (2014) and Goudas et al. (2014) , the identification of argument components is more complex in other text genres than it is in persuasive essays. Another potential issue of the pipeline architecture is that wrongly classified major claims will decrease the accuracy of the model because they are not integrated in the joint modeling approach. For this reason, it is worthwhile to experiment in future work with structured machine learning methods that incorporate several tasks in one model (Moens 2013) .",
"cite_spans": [
{
"start": 565,
"end": 583,
"text": "Levy et al. (2014)",
"ref_id": "BIBREF52"
},
{
"start": 588,
"end": 608,
"text": "Goudas et al. (2014)",
"ref_id": "BIBREF37"
},
{
"start": 1068,
"end": 1080,
"text": "(Moens 2013)",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "In this work, we presented an approach for recognizing argumentation structures in persuasive essays. Other text genres, however, may exhibit less explicit arguments. Habernal and Gurevych (2017, page 27) , for instance, showed that 48% of the arguments in user-generated Web discourse do not include explicit claims. These incomplete arguments, so-called enthymemes, make both annotation and automatic analysis challenging. Although humans may be able to deduce the missing parts by interpreting the argument, existing argument mining methods fail on this task and may produce incomplete or even wrong argumentation structures. In particular, the presented approach is not able to recognize gaps in reasoning (i.e., missing premises) or to infer the missing components of implicit arguments. Inferring implicit argument components is challenging, since it requires robust methods for capturing the semantics of natural language arguments and appropriate background knowledge for reconstructing the missing parts.",
"cite_spans": [
{
"start": 167,
"end": 204,
"text": "Habernal and Gurevych (2017, page 27)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "The presented argumentation structure parser is an important milestone for implementing argumentative writing support systems. For example, the recognized argumentation structures allow for highlighting unwarranted claims or missing major claims, and enable different types of quantitative analyses of the number of arguments or their premises. It is still unknown, however, whether this feedback provides adequate guidance for improving students' argumentation skills. To answer this question, future research needs to integrate the proposed model into writing environments and investigate the effect of different feedback types on students' argumentation skills.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 4",
"sec_num": null
},
{
"text": "In this article, we presented an end-to-end approach for parsing argumentation structures in persuasive essays. Previous approaches suffer from several limitations: Existing approaches either focus only on particular subtasks of argumentation structure parsing or rely on manually created rules. Consequently, previous approaches are only of limited use for parsing argumentation structures in real application scenarios. To the best of our knowledge, the presented work is the first approach that covers all required subtasks for identifying the global argumentation structure of documents. We showed that jointly modeling argumentation structures simultaneously improves the results of component classification and relation identification. Additionally, we introduced a novel annotation scheme and a new corpus of persuasive essays annotated with argumentation structures that represent the largest resource of its kind. Both the corpus and the annotation guidelines are freely available. Category Indicators Forward (24) \"As a result\", \"As the consequence\", \"Because\", \"Clearly\", \"Consequently\", \"Considering this subject\", \"Furthermore\", \"Hence\", \"leading to the consequence\", \"so\", \"So\", \"taking account on this fact\", \"That is the reason why\", \"The reason is that\", \"Therefore\", \"therefore\", \"This means that\", \"This shows that\", \"This will result\", \"Thus\", \"thus\", \"Thus, it is clearly seen that\", \"Thus, it is seen\", \"Thus, the example shows\" Backward (33) \"Additionally\", \"As a matter of fact\", \"because\", \"Besides\", \"due to\", \"Finally\", \"First of all\", \"Firstly\", \"for example\", \"For example\", \"For instance\", \"for instance\", \"Furthermore\", \"has proved it\", \"In addition\", \"In addition to this\", \"In the first place\", \"is due to the fact that\", \"It should also be noted\", \"Moreover\", \"On one hand\", \"On the one hand\", \"On the other hand\", \"One of the main reasons\", \"Secondly\", 
\"Similarly\", \"since\", \"Since\", \"So\", \"The reason\", \"To begin with\", \"To offer an instance\", \"What is more\" Thesis (48) \"All in all\", \"All things considered\", \"As far as I am concerned\", \"Based on some reasons\", \"by analyzing both the views\", \"considering both the previous fact\", \"Finally\", \"For the reasons mentioned above\", \"From explanation above\", \"From this point of view\", \"I agree that\", \"I agree with\", \"I agree with the statement that\", \"I believe\", \"I believe that\", \"I do not agree with this statement\", \"I firmly believe that\", \"I highly advocate that\", \"I highly recommend\", \"I strongly believe that\", \"I think that\", \"I think the view is\", \"I totally agree\", \"I totally agree to this opinion\", \"I would have to argue that\", \"I would reaffirm my position that\", \"In conclusion\", \"in conclusion\", \"in my opinion\", \"In my opinion\", \"In my personal point of view\", \"in my point of view\", \"In my point of view\", \"In summary\", \"In the light of the facts outlined above\", \"it can be said that\", \"it is clear that\", \"it seems to me that\", \"my deep conviction\", \"My sentiments\", \"Overall\", \"Personally\", \"the above explanations and example shows that\", \"This, however\", \"To conclude\", \"To my way of thinking\", \"To sum up\", \"Ultimately\" Rebuttal (10) \"Admittedly\", \"although\", \"Although\", \"besides these advantages\", \"but\", \"But\", \"Even though\", \"even though\", \"However\", \"Otherwise\" Table C .2 shows the model selection results of the classification model. Structural features are the only features that significantly outperform the macro F1 score of the heuristic baseline when used individually (p = 4.04\u00d710 \u22126 ). They are the most effective features for identifying major claims and claims. The second-best features for identifying claims are discourse features. 
With this knowledge, we can confirm the assumption that general discourse relations are useful for component classification (cf. Section 5.3.1). Embedding features do not perform as well as lexical features. They yield lower F1 scores for major claims and claims. Contextual features are effective for identifying major claims, since they implicitly capture if an argument component is present in the introduction or conclusion (cf. Section 5.3.1). Indicator features are most effective for identifying major claims, but contribute only slightly to the identification of claims. Syntactic features are predictive of major claims and premises, but are not effective for recognizing claims. The probability features are not informative for identifying claims, probably because forward indicators may also signal inner premises in serial structures. Omitting probability and embedding features yields the best accuracy. However, we select the best system by means of the macro F1 score, which is more appropriate for imbalanced data sets. Accordingly, we select the model with all features (Table C. 2). The model selection results for relation identification are shown in Table C .3. We report the results of feature ablation tests, since none of the feature groups yields remarkable results when used individually. Structural features are the most effective features for identifying relations. The second-and third-most effective feature groups are indicator and PMI features. Removing the shared noun feature does not yield a significant difference in accuracy or macro F1 score compared with SVM all features. We achieve the best macro F1 score by removing lexical features from the feature set. Table C .4 shows the model selection results of the ILP joint model. Base+heuristic shows the result of applying the baseline to all paragraphs in which the base classifiers identify neither claims nor argumentative relations. 
The heuristic baseline is triggered in 31 paragraphs, which results in 3.3% more trees identified compared with the base classifiers. However, the difference between Base+heuristic and the base classifiers is not statistically significant. For this reason, we can attribute any further improvements to the joint modeling approach. Moreover, Table C .4 shows selected results of the hyperparameter tuning of the ILP joint model. Using only predicted relations in the ILP-na\u00efve model does not yield an improvement compared with the macro F1 score of the base classifiers. ILP-relation uses only information from the relation identification base classifier. It significantly outperforms the macro F1 score of both base classifiers (p = 6.43\u00d710 \u221212 for relations and p = 7.23\u00d710 \u221213 for components), but converts a large number of premises to claims. The ILP-claim model uses only the outcomes of the argument component base classifier and improves neither component classification nor relation identification. All three models identify a relatively high proportion of claims compared to the number of claims in our training data. The reason for this is that many weights in W are 0. Combining the results of both base classifiers yields a more balanced proportion of component type conversions. All three models (ILP-equal, ILP-same, and ILP-balanced) significantly outperform the macro F1 score of the base classifiers. We identify the best performing system by means of the average macro F1 score for both tasks. Accordingly, we select ILP-balanced as our best performing ILP joint model. Table C .5 shows the model selection results for the stance recognition model. Using sentiment, structural, and embedding features individually does not yield an improvement over the majority baseline. Lexical features yield a significant improvement over the macro F1 score of the heuristic baseline when used individually (p = 8.02 \u00d7 10 \u221210 ). 
Syntactic features significantly improve precision (p = 1.81 \u00d7 10 \u221230 ), recall (p = 1.95 \u00d7 10 \u221247 ), F1 Support (p = 1.01 \u00d7 10 \u221227 ), and F1 Attack (p = 1.53 \u00d7 10 \u221254 ) over the heuristic baseline, but do not yield a significant improvement over the macro F1 score of the heuristic baseline. Discourse features significantly outperform the heuristic baseline regarding precision (p = 3.68 \u00d7 10 \u221228 ), recall (p = 3.43 \u00d7 10 \u221249 ), and F1 Support (p = 1.06 \u00d7 10 \u221232 ). Because omitting any of the feature groups yields a lower macro F1 score, we select the model with all features as the best performing model.",
"cite_spans": [],
"ref_spans": [
{
"start": 3276,
"end": 3283,
"text": "Table C",
"ref_id": "TABREF13"
},
{
"start": 4746,
"end": 4755,
"text": "(Table C.",
"ref_id": "TABREF13"
},
{
"start": 4829,
"end": 4836,
"text": "Table C",
"ref_id": "TABREF13"
},
{
"start": 5356,
"end": 5363,
"text": "Table C",
"ref_id": "TABREF13"
},
{
"start": 5924,
"end": 5931,
"text": "Table C",
"ref_id": "TABREF13"
},
{
"start": 7171,
"end": 7178,
"text": "Table C",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "www.ukp.tu-darmstadt.de/data/argumentation-mining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The kappa coefficient is an IAA measure for categorical items that accounts for agreement by chance. The formal definition and a comprehensive overview of chance-corrected IAA measures can be found in the survey of Artstein and Poesio (2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The example essay was written by the authors to illustrate all phenomena of argumentation structures in persuasive essays.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although it would be preferable to have a group of annotators with similar annotation experience (e.g., all non-experts), because of a lack of resources it is common practice to have mixed annotator groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our evaluation set of 80 essays, the annotators identified several argument components of different types in 4.3% of the sentences. Thus, evaluating the reliability of argument components at the sentence level is a good approximation of the inter-annotator agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.languagetool.org.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Full stops at the end of a sentence are all classified as non-argumentative. We set LCA preceding = \u22121 if t i is the first token in its covering sentence and LCA following = \u22121 if t i is the last token in its covering sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We consider only claims and premises in our joint model, since argumentative relations between claims and major claims are modeled with a level approach (cf. Section 3.2). We use the lpsolve framework (http://lpsolve.sourceforge.net) and set each variable in the objective function to binary mode to ensure the upper bound of 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For finding the best learner, we compared na\u00efve Bayes (John and Langley 1995), Random Forests (Breiman 2001), Multinomial Logistic Regression (le Cessie and van Houwelingen 1992), C4.5 Decision Trees (Quinlan 1993), and SVM (Cortes and Vapnik 1995); we found that an SVM outperforms all other classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The heuristic baseline for stance recognition on the microtext corpus classifies the fourth component as \"attack\" and all other components as \"support.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant no. I/82806 and by the German Federal Ministry of Education and Research (BMBF) as a part of the Software Campus project AWS under grant no. 01-S12054. We would like to thank the anonymous reviewers for their valuable feedback; Can Diehl, Ilya Kuznetsov, Todd Shore, and Anshul Tak for their valuable contributions; and Andreas Peldszus for providing details about his corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table A.1 shows the class distributions of the training and test data of the persuasive essay corpus for each analysis step. Table B.1 shows all of the lexical indicators we extracted from 30 persuasive essays. The lists include 24 forward indicators, 33 backward indicators, 48 thesis indicators, and 10 rebuttal indicators.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table A",
"ref_id": null
},
{
"start": 127,
"end": 134,
"text": "Table B",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A. Class Distributions",
"sec_num": null
},
{
"text": "The following tables show the model selection results for all five tasks using 5-fold cross-validation on our training data. Table C.1 shows the results of using individual feature groups for the argument component identification task. Lexico-syntactic features perform best regarding the macro F1 score, and they perform particularly well for recognizing the beginning of argument components (\"Arg-B\"). The second best features are structural features. They yield the best F1 score for separating argumentative from non-argumentative text units (\"O\"). Syntactic features are useful for identifying the beginning of argument components. The probability feature yields the lowest macro F1 score. Nevertheless, we observe a significant decrease compared with the macro F1 score of the model with all features when evaluating the system without the probability feature (p = 0.003). We obtain the best results by using all features. Because persuasive essays exhibit a particular paragraph structure, which may not be present in other text genres (e.g., user-generated Web discourse), we also evaluate the model without genre-dependent features (cf. ",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table C",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix C. Detailed Results of Model Selections",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Counter-argumentation and discourse: A case study",
"authors": [
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "11--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afantenos, Stergos and Nicholas Asher. 2014. Counter-argumentation and discourse: A case study. In Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing, pages 11-16, Bertinoro.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discourse parsing for multi-party chat dialogues",
"authors": [
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kow",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "J\u00e9r\u00e9my",
"middle": [],
"last": "Perret",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "928--937",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afantenos, Stergos, Eric Kow, Nicholas Asher, and J\u00e9r\u00e9my Perret. 2015. Discourse parsing for multi-party chat dialogues. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 928-937, Lisbon.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Anatoly",
"middle": [],
"last": "Polnarov",
"suffix": ""
},
{
"first": "Tamar",
"middle": [],
"last": "Lavee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gutfreund",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "64--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aharoni, Ehud, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfreund, and Noam Slonim 2014. A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics. In Proceedings of the First Workshop on Argumentation Mining, pages 64-68, Baltimore, MD.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artstein, Ron and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Logics of Conversation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asher, Nicholas and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Practical Logic",
"authors": [
{
"first": "Monroe",
"middle": [
"C"
],
"last": "Beardsley",
"suffix": ""
}
],
"year": 1950,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beardsley, Monroe C. 1950. Practical Logic. Prentice-Hall.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A taxonomy of argumentation models used for knowledge representation",
"authors": [
{
"first": "Beigman",
"middle": [],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Beata",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Derrick",
"middle": [],
"last": "Higgins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh Workshop on Building Educational Applications Using NLP",
"volume": "33",
"issue": "",
"pages": "211--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beigman Klebanov, Beata and Derrick Higgins. 2012. Measuring the use of factual information in test-taker essays. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 63-72, Montreal. Bentahar, Jamal, Bernard Moulin, and Micheline B\u00e9langer. 2010. A taxonomy of argumentation models used for knowledge representation. Artificial Intelligence Review, 33(3):211-259.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Identifying justifications in written dialogs by classifying text as argumentative",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2011,
"venue": "International Journal of Semantic Computing",
"volume": "",
"issue": "04",
"pages": "363--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biran, Or and Owen Rambow. 2011. Identifying justifications in written dialogs by classifying text as argumentative. International Journal of Semantic Computing, 05(04):363-381.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Joint morphological and syntactic analysis for richly inflected languages",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Boguslavsky",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "415--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bohnet, Bernd, Joakim Nivre, Igor Boguslavsky, Rich\u00e1rd Farkas, Filip Ginter, and Jan Haji\u010d. 2013. Joint morphological and syntactic analysis for richly inflected languages. Transactions of the Association for Computational Linguistics, 1:415-428.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Back up your stance: Recognizing arguments in online discussions",
"authors": [
{
"first": "Filip",
"middle": [],
"last": "Boltu\u017ei\u0107",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "\u0160najder",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "49--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boltu\u017ei\u0107, Filip and Jan \u0160najder. 2014. Back up your stance: Recognizing arguments in online discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49-58, Baltimore, MD.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Argument structure in learner writing: a corpus-based analysis using argument mapping",
"authors": [
{
"first": "Simon",
"middle": [
"Philip"
],
"last": "Botley",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "32",
"issue": "",
"pages": "45--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Botley, Simon Philip. 2014. Argument structure in learner writing: a corpus-based analysis using argument mapping. Kajian Malaysia, 32(1):45-77.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Combining natural and artificial examples to improve implicit discourse relation identification",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1694--1705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Braud, Chlo\u00e9 and Pascal Denis. 2014. Combining natural and artificial examples to improve implicit discourse relation identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1694-1705, Dublin.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Random forests",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "Machine Learning",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Breiman, Leo. 2001. Random forests. Machine Learning, 45(1):5-32.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "From discourse analysis to argumentation schemes and back: Relations and differences",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Logic in Multi-Agent Systems",
"volume": "8143",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cabrio, Elena, Sara Tonelli, and Serena Villata. 2013. From discourse analysis to argumentation schemes and back: Relations and differences. In Computational Logic in Multi-Agent Systems, volume 8143 of Lecture Notes in Computer Science.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Natural language arguments: A combined approach",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 20th European Conference on Artificial Intelligence, ECAI '12",
"volume": "",
"issue": "",
"pages": "205--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cabrio, Elena and Serena Villata. 2012. Natural language arguments: A combined approach. In Proceedings of the 20th European Conference on Artificial Intelligence, ECAI '12, pages 205-210, Montpellier.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "NoDE: A benchmark of natural language arguments",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Cabrio",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Villata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COMMA",
"volume": "",
"issue": "",
"pages": "449--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cabrio, Elena and Serena Villata. 2014. NoDE: A benchmark of natural language arguments. In Proceedings of COMMA, pages 449-450, Pitlochry.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second SIGdial Workshop on Discourse",
"volume": "16",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlson, Lynn, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue -Volume 16, SIGDIAL '01, pages 1-10, Aalborg.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards relation based argumentation mining",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Carstens",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Toni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "29--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carstens, Lucas and Francesca Toni. 2015. Towards relation based argumentation mining. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 29-34, Denver, CO.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A broad-coverage collection of portable NLP components for building shareable analysis pipelines",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT (OIAF4HLT) at COLING 2014",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eckart de Castilho, Richard and Iryna Gurevych. 2014. A broad-coverage collection of portable NLP components for building shareable analysis pipelines. In Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT (OIAF4HLT) at COLING 2014, pages 1-11, Dublin.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ridge estimators in logistic regression",
"authors": [
{
"first": "S",
"middle": [],
"last": "Le Cessie",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Van Houwelingen",
"suffix": ""
}
],
"year": 1992,
"venue": "Applied Statistics",
"volume": "41",
"issue": "1",
"pages": "191--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "le Cessie, S. and J. C. van Houwelingen. 1992. Ridge estimators in logistic regression. Applied Statistics, 41(1):191-201.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, Kenneth Ward and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Managing uncertainty in semantic tagging",
"authors": [
{
"first": "Silvie",
"middle": [],
"last": "Cinkov\u00e1",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Holub",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Kr\u00ed\u017e",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12",
"volume": "",
"issue": "",
"pages": "840--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cinkov\u00e1, Silvie, Martin Holub, and Vincent Kr\u00ed\u017e. 2012. Managing uncertainty in semantic tagging. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12, pages 840-850, Avignon.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Analyzing the structure of argumentative discourse",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1987,
"venue": "Computational Linguistics",
"volume": "13",
"issue": "1-2",
"pages": "11--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, Robin. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing",
"volume": "10",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing -Volume 10, EMNLP '02, pages 1-8, Pennsylvania, PA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Head-driven statistical models for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589-637.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On the distinction between convergent and linked arguments",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Conway",
"suffix": ""
}
],
"year": 1991,
"venue": "Informal Logic",
"volume": "13",
"issue": "",
"pages": "145--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conway, David A. 1991. On the distinction between convergent and linked arguments. Informal Logic, 13:145-158.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Introduction To Logic",
"authors": [
{
"first": "Irving",
"middle": [
"M"
],
"last": "Copi",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copi, Irving M. and Carl Cohen. 1990. Introduction To Logic, 8th edition. Macmillan Publishing Company.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Support-vector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cortes, Corinna and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273-297.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Reasoning",
"authors": [
{
"first": "T.",
"middle": [
"Edward"
],
"last": "Damer",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Damer, T. Edward. 2009. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Reasoning, 6th edition. Wadsworth Cengage Learning.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "DKPro TC: A Java-based framework for supervised learning experiments on textual data",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Ferschke",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 1996,
"venue": "Fundamentals of Argumentation Theory: A Handbook of Historical Backgrounds and Contemporary Developments. Routledge",
"volume": "",
"issue": "",
"pages": "61--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxenberger, Johannes, Oliver Ferschke, Iryna Gurevych, and Torsten Zesch. 2014. DKPro TC: A Java-based framework for supervised learning experiments on textual data. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. System Demonstrations, pages 61-66, Baltimore, MD. van Eemeren, Frans H., Rob Grootendorst, and Francisca Snoeck Henkemans. 1996. Fundamentals of Argumentation Theory: A Handbook of Historical Backgrounds and Contemporary Developments. Routledge, Taylor & Francis Group.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Classifying arguments by scheme",
"authors": [
{
"first": "Vanessa",
"middle": [
"Wei"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "987--996",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Vanessa Wei and Graeme Hirst. 2011. Classifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 987-996, Portland, OR.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A linear-time bottom-up discourse parser with constraints and post-editing",
"authors": [
{
"first": "Vanessa",
"middle": [
"Wei"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "511--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Vanessa Wei and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511-521, Baltimore, MD.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-382.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Argument extraction for supporting public policy formulation",
"authors": [
{
"first": "Eirini",
"middle": [],
"last": "Florou",
"suffix": ""
},
{
"first": "Stasinos",
"middle": [],
"last": "Konstantopoulos",
"suffix": ""
},
{
"first": "Antonis",
"middle": [],
"last": "Koukourikos",
"suffix": ""
},
{
"first": "Pythagoras",
"middle": [],
"last": "Karampiperis",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florou, Eirini, Stasinos Konstantopoulos, Antonis Koukourikos, and Pythagoras Karampiperis. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 49-54, Sofia.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Apples-to-apples in cross-validation studies: Pitfalls in classifier performance measurement",
"authors": [
{
"first": "George",
"middle": [],
"last": "Forman",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2010,
"venue": "SIGKDD Explorations",
"volume": "12",
"issue": "1",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Forman, George and Martin Scholz. 2010. Apples-to-apples in cross-validation studies: Pitfalls in classifier performance measurement. SIGKDD Explorations, 12(1):49-57.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Argument Structure: Representation and Theory, volume 18 of Argumentation Library",
"authors": [
{
"first": "James",
"middle": [
"B"
],
"last": "Freeman",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freeman, James B. 2011. Argument Structure: Representation and Theory, volume 18 of Argumentation Library. Springer.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Analyzing argumentative discourse units in online interactions",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Aakhus",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mitsui",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ghosh, Debanjan, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In Proceedings of the First Workshop on Argumentation Mining, pages 39-48, Baltimore, MD.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Argument extraction from news, blogs, and social media",
"authors": [
{
"first": "Theodosis",
"middle": [],
"last": "Goudas",
"suffix": ""
},
{
"first": "Christos",
"middle": [],
"last": "Louizos",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Petasis",
"suffix": ""
},
{
"first": "Vangelis",
"middle": [],
"last": "Karkaletsis",
"suffix": ""
}
],
"year": 2014,
"venue": "Artificial Intelligence: Methods and Applications",
"volume": "8445",
"issue": "",
"pages": "287--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goudas, Theodosis, Christos Louizos, Georgios Petasis, and Vangelis Karkaletsis. 2014. Argument extraction from news, blogs, and social media. In Artificial Intelligence: Methods and Applications, volume 8445 of Lecture Notes in Computer Science. Springer International Publishing, pages 287-299.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A Practical Study of Argument",
"authors": [
{
"first": "Trudy",
"middle": [],
"last": "Govier",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Govier, Trudy. 2010. A Practical Study of Argument, 7th edition. Wadsworth, Cengage Learning.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Argumentation mining in user-generated web discourse",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "1",
"pages": "125--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Habernal, Ivan and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. Computational Linguistics, 43(1):125-179.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The WEKA data mining software: An update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explorations",
"volume": "11",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hall, Mark, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1):10-18.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Why are you taking this stance? Identifying and classifying reasons in ideological debates",
"authors": [
{
"first": "Kazi",
"middle": [
"Saidul"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "751--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasan, Kazi Saidul and Vincent Ng. 2014. Why are you taking this stance? Identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751-762, Doha. Henkemans, A. Francisca Snoeck. 2000. State-of-the-art: The structure of argumentation. Argumentation, 14(4):447-473.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Hilda: A discourse parser using support vector machine classification",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "duVerle",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue and Discourse",
"volume": "1",
"issue": "3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hernault, Hugo, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. Hilda: A discourse parser using support vector machine classification. Dialogue and Discourse, 1(3):1-33.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Estimating continuous distributions in Bayesian classifiers",
"authors": [
{
"first": "George",
"middle": [
"H"
],
"last": "John",
"suffix": ""
},
{
"first": "Pat",
"middle": [],
"last": "Langley",
"suffix": ""
}
],
"year": 1995,
"venue": "Eleventh Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "338--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John, George H. and Pat Langley. 1995. Estimating continuous distributions in Bayesian classifiers. In Eleventh Conference on Uncertainty in Artificial Intelligence, pages 338-345, Montreal.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Inside Writing: Persuasive Essays",
"authors": [
{
"first": "Dave",
"middle": [],
"last": "Kemper",
"suffix": ""
},
{
"first": "Pat",
"middle": [],
"last": "Sebranek",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kemper, Dave and Pat Sebranek. 2004. Inside Writing: Persuasive Essays. Great Source Education Group.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Linking the thoughts: Analysis of argumentation structures in scientific publications",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Kirschner",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Eckle-Kohler",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kirschner, Christian, Judith Eckle-Kohler, and Iryna Gurevych. 2015. Linking the thoughts: Analysis of argumentation structures in scientific publications. In Proceedings of the 2nd Workshop on Argumentation Mining, 1-11, Denver, CO.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, Dan and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1, ACL '03, pages 423-430, Sapporo.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Measuring the reliability of qualitative text analysis data",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "Quality & Quantity",
"volume": "38",
"issue": "6",
"pages": "787--800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krippendorff, Klaus. 2004. Measuring the reliability of qualitative text analysis data. Quality & Quantity, 38(6):787-800.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Dependency Parsing",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K\u00fcbler, Sandra, Ryan McDonald, Joakim Nivre, and Graeme Hirst. 2008. Dependency Parsing. Morgan and Claypool Publishers.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Identifying and classifying subjective claims",
"authors": [
{
"first": "Namhee",
"middle": [],
"last": "Kwon",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"W"
],
"last": "Shulman",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains",
"volume": "",
"issue": "",
"pages": "76--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwon, Namhee, Liang Zhou, Eduard Hovy, and Stuart W. Shulman. 2007. Identifying and classifying subjective claims. In Proceedings of the 8th Annual International Conference on Digital Government Research: Bridging Disciplines & Domains, pages 76-81, Philadelphia, PA.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, John D., Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, 282-289, San Francisco, CA.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Context dependent claim detection",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bilu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1489--1500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levy, Ran, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), 1489-1500, Dublin.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Recognizing implicit discourse relations in the Penn Discourse Treebank",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Ziheng, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, EMNLP '09, pages 343-351, Suntec. Lin, Ziheng, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(2):151-184.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Context-independent claim detection for argument mining",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015)",
"volume": "",
"issue": "",
"pages": "185--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lippi, Marco and Paolo Torroni. 2015. Context-independent claim detection for argument mining. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), pages 185-191, Buenos Aires.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Using entity features to classify implicit discourse relations",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10",
"volume": "",
"issue": "",
"pages": "59--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis, Annie, Aravind Joshi, Rashmi Prasad, and Ani Nenkova. 2010. Using entity features to classify implicit discourse relations. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10, pages 59-62, Stroudsburg, PA.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Rhetorical structure theory: A theory of text organization",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1987,
"venue": "Information Sciences Institute",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, William C. and Sandra A. Thompson. 1987. Rhetorical structure theory: A theory of text organization. Technical Report ISI/RS-87-190, Information Sciences Institute.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "An unsupervised approach to recognizing discourse relations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Abdessamad",
"middle": [],
"last": "Echihabi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "368--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcu, Daniel and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02, pages 368-375.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "DKPro agreement: An open-source Java library for measuring inter-rater agreement",
"authors": [
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Margot",
"middle": [],
"last": "Mieskes",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics: System Demonstrations (COLING)",
"volume": "",
"issue": "",
"pages": "105--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meyer, Christian M., Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro agreement: An open-source Java library for measuring inter-rater agreement. In Proceedings of the 25th International Conference on Computational Linguistics: System Demonstrations (COLING), pages 105-109, Dublin.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26. Curran Associates, Inc., pages 3111-3119.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Creating an argumentation corpus: Do theories apply to real arguments? A case study on the legal argumentation of the ECHR",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales-Palau",
"suffix": ""
},
{
"first": "Aagje",
"middle": [],
"last": "Ieven",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law (ICAIL '09)",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mochales-Palau, Raquel and Aagje Ieven. 2009. Creating an argumentation corpus: Do theories apply to real arguments? A case study on the legal argumentation of the ECHR. In Proceedings of the 12th International Conference on Artificial Intelligence and Law (ICAIL '09), pages 21-30, Barcelona.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Argumentation mining: The detection, classification and structure of arguments in text",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales-Palau",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mochales-Palau, Raquel and Marie-Francine Moens. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09, pages 98-107, Barcelona.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Argumentation mining",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales-Palau",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "Artificial Intelligence and Law",
"volume": "19",
"issue": "1",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mochales-Palau, Raquel and Marie-Francine Moens. 2011. Argumentation mining. Artificial Intelligence and Law, 19(1):1-22.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Argumentation mining: Where are we now, where do we want to be and how do we get there?",
"authors": [
{
"first": "Marie",
"middle": [
"Francine"
],
"last": "Moens",
"suffix": ""
}
],
"year": 2013,
"venue": "Post-proceedings of the Forum for Information Retrieval Evaluation (FIRE 2013)",
"volume": "",
"issue": "",
"pages": "4--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moens, Marie Francine. 2013. Argumentation mining: Where are we now, where do we want to be and how do we get there? In Post-proceedings of the Forum for Information Retrieval Evaluation (FIRE 2013), pages 4-6, New Delhi.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Automatic detection of arguments in legal texts",
"authors": [
{
"first": "Marie",
"middle": [
"Francine"
],
"last": "Moens",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Boiy",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Mochales Palau",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07",
"volume": "",
"issue": "",
"pages": "225--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moens, Marie Francine, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, ICAIL '07, pages 225-230, Stanford, CA.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Extracting argument and domain words for identifying argument components in texts",
"authors": [
{
"first": "Huy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "22--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen, Huy and Diane Litman. 2015. Extracting argument and domain words for identifying argument components in texts. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 22-28, Denver, CO.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Two concepts of argument",
"authors": [
{
"first": "Daniel",
"middle": [
"J"
],
"last": "O'Keefe",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the American Forensic Association",
"volume": "13",
"issue": "3",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O'Keefe, Daniel J. 1977. Two concepts of argument. Journal of the American Forensic Association, 13(3):121-128.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "And that's a fact: Distinguishing factual and emotional argumentation in online dialogue",
"authors": [
{
"first": "Shereen",
"middle": [],
"last": "Oraby",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Compton",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Whittaker",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "116--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oraby, Shereen, Lena Reed, Ryan Compton, Ellen Riloff, Marilyn Walker, and Steve Whittaker. 2015. And that's a fact: Distinguishing factual and emotional argumentation in online dialogue. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 116-126, Denver, CO.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Identifying appropriate support for propositions in online user comments",
"authors": [
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, Joonsuk and Claire Cardie. 2014. Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29-38, Baltimore, MD.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Towards segment-based recognition of argumentation structure in short texts",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "88--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peldszus, Andreas. 2014. Towards segment-based recognition of argumentation structure in short texts. In Proceedings of the First Workshop on Argumentation Mining, 88-97, Baltimore, MD.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "From argument diagrams to argumentation mining in texts: A survey",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Cognitive Informatics and Natural Intelligence (IJCINI)",
"volume": "7",
"issue": "1",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peldszus, Andreas and Manfred Stede. 2013. From argument diagrams to argumentation mining in texts: A survey. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 7(1):1-31.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Joint prediction in MST-style discourse parsing for argumentation mining",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "938--948",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peldszus, Andreas and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 938-948, Lisbon.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Modeling argument strength in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "543--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Persing, Isaac and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552, Beijing.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "A Helpful Guide to Essay Writing!",
"authors": [
{
"first": "Vivien",
"middle": [],
"last": "Perutz",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Perutz, Vivien. 2010. A Helpful Guide to Essay Writing!, Student Services, Anglia Ruskin University.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitler, Emily, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 683-691, Suntec.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "The Penn Discourse Treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasad, Rashmi, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, Ross. 1993. C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd ACL Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "82--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramshaw, Lance A. and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd ACL Workshop on Very Large Corpora, pages 82-94, Cambridge, MA.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Araucaria: Software for argument analysis, diagramming and representation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Glenn",
"middle": [],
"last": "Rowe",
"suffix": ""
}
],
"year": 2004,
"venue": "International Journal on Artificial Intelligence Tools",
"volume": "14",
"issue": "4",
"pages": "961--980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reed, Chris, Raquel Mochales-Palau, Glenn Rowe, and Marie-Francine Moens. 2008. Language resources for studying argument. In Proceedings of the Sixth International Conference on Language Resources and Evaluation, LREC '08, pages 2613-2618, Marrakech. Reed, Chris and Glenn Rowe. 2004. Araucaria: Software for argument analysis, diagramming and representation. International Journal on Artificial Intelligence Tools, 14(4):961-980.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Show me your evidence-an automatic method for context dependent evidence detection",
"authors": [
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Dankin",
"suffix": ""
},
{
"first": "Carlos",
"middle": [
"Alzate"
],
"last": "Perez",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [
"M"
],
"last": "Khapra",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP '15",
"volume": "",
"issue": "",
"pages": "440--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rinott, Ruty, Lena Dankin, Carlos Alzate Perez, Mitesh M. Khapra, Ehud Aharoni, and Noam Slonim. 2015. Show me your evidence-an automatic method for context dependent evidence detection. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP '15, pages 440-450, Lisbon.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Applying kernel methods to argumentation mining",
"authors": [
{
"first": "Niall",
"middle": [],
"last": "Rooney",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fiona",
"middle": [],
"last": "Browne",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference, FLAIRS '12",
"volume": "",
"issue": "",
"pages": "272--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rooney, Niall, Hui Wang, and Fiona Browne. 2012. Applying kernel methods to argumentation mining. In Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference, FLAIRS '12, pages 272-275, Marco Island, FL.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Argument extraction from news",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Sardianos",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [
"Manousos"
],
"last": "Katakis",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Petasis",
"suffix": ""
},
{
"first": "Vangelis",
"middle": [],
"last": "Karkaletsis",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "56--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sardianos, Christos, Ioannis Manousos Katakis, Georgios Petasis, and Vangelis Karkaletsis. 2015. Argument extraction from news. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 56-66, Denver, CO.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "How to write essays",
"authors": [
{
"first": "Don",
"middle": [],
"last": "Shiach",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiach, Don. 2009. How to write essays, 2nd ed. How To Books Ltd.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Socher, Richard, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, WA.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Estimating effect size across datasets",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "607--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f8gaard, Anders. 2013. Estimating effect size across datasets. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 607-611, Atlanta.",
"links": null
},
"BIBREF85": {
"ref_id": "b85",
"title": "A systematic analysis of performance measures for classification tasks",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Sokolova",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2009,
"venue": "Information Processing & Management",
"volume": "45",
"issue": "4",
"pages": "427--437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sokolova, Marina and Guy Lapalme. 2009. A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4):427-437.",
"links": null
},
"BIBREF86": {
"ref_id": "b86",
"title": "Sentence level discourse parsing using syntactic and lexical information",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Suntec",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Heilman",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Klebanov",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Deane ; Baltimore",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "46--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Somasundaran, Swapna and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, ACL '09, pages 226-234, Suntec. Song, Yi, Michael Heilman, Beata Beigman Klebanov, and Paul Deane. 2014. Applying argumentation schemes for essay scoring. In Proceedings of the First Workshop on Argumentation Mining, pages 69-78, Baltimore, MD. Soricut, Radu and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, pages 149-156, Edmonton. Stab, Christian and Iryna Gurevych. 2014a. Annotating argument components and relations in persuasive essays. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1501-1510, Dublin. Stab, Christian and Iryna Gurevych. 2014b. Identifying argumentative discourse structures in persuasive essays. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 46-56, Doha.",
"links": null
},
"BIBREF87": {
"ref_id": "b87",
"title": "Argumentation mining in persuasive essays and scientific articles from the discourse structure perspective",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Kirschner",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Eckle-Kohler",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stab, Christian, Christian Kirschner, Judith Eckle-Kohler, and Iryna Gurevych. 2014. Argumentation mining in persuasive essays and scientific articles from the discourse structure perspective. In Proceedings of the Workshop on Frontiers and Connections between Argumentation Theory and Natural Language Processing, pages 40-49, Bertinoro.",
"links": null
},
"BIBREF88": {
"ref_id": "b88",
"title": "Brat: A web-based tool for NLP-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stenetorp, Pontus, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. Brat: A web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12, pages 102-107, Avignon.",
"links": null
},
"BIBREF89": {
"ref_id": "b89",
"title": "Practical Reasoning in Natural Language",
"authors": [
{
"first": "Stephen",
"middle": [
"N"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas, Stephen N. 1973. Practical Reasoning in Natural Language, Prentice-Hall.",
"links": null
},
"BIBREF90": {
"ref_id": "b90",
"title": "Feature-rich part-of-speech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL '03",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, Kristina, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL '03, pages 173-180, Edmonton.",
"links": null
},
"BIBREF91": {
"ref_id": "b91",
"title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, Peter D. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02, pages 417-424, Philadelphia, PA.",
"links": null
},
"BIBREF92": {
"ref_id": "b92",
"title": "A corpus for research on deliberation and debate",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Fox Tree",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "23--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, Marilyn, Jean Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 23-25, Istanbul.",
"links": null
},
"BIBREF93": {
"ref_id": "b93",
"title": "Argumentation Schemes",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Walton",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Macagno",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walton, Douglas, Chris Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press.",
"links": null
},
"BIBREF94": {
"ref_id": "b94",
"title": "Academic Writing Guide 2010: A Step-by-Step Guide to Writing Academic Papers",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Whitaker",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Whitaker, Anne. 2009. Academic Writing Guide 2010: A Step-by-Step Guide to Writing Academic Papers. City University of Seattle.",
"links": null
},
"BIBREF95": {
"ref_id": "b95",
"title": "Argumentation schema and the myside bias in written argumentation",
"authors": [
{
"first": "Christopher",
"middle": [
"R"
],
"last": "Wolfe",
"suffix": ""
},
{
"first": "M",
"middle": [
"Anne"
],
"last": "Britt",
"suffix": ""
}
],
"year": 2009,
"venue": "Written Communication",
"volume": "26",
"issue": "2",
"pages": "183--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfe, Christopher R. and M. Anne Britt. 2009. Argumentation schema and the myside bias in written argumentation. Written Communication, 26(2):183-209.",
"links": null
},
"BIBREF96": {
"ref_id": "b96",
"title": "Dependent and independent reasons",
"authors": [
{
"first": "Robert",
"middle": [
"J"
],
"last": "Yanal",
"suffix": ""
}
],
"year": 1991,
"venue": "Informal Logic",
"volume": "13",
"issue": "3",
"pages": "137--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanal, Robert J. 1991. Dependent and independent reasons. Informal Logic, 13(3):137-144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Component identification focuses on the separation of argumentative from non-argumentative text units and the identification of argument component boundaries.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Component classification addresses the function of argument components. It aims at classifying argument components into different types such as claims and premises.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Microstructures of arguments: Nodes are argument components and links represent argumentative relations. Nodes at the bottom are the claims of the arguments. claim individually; an argument is serial if it includes a reasoning chain and divergent if a premise supports several claims",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "[ ::::: Cloned :::::: organs :::: will :::::: match :::::::: perfectly :: to ::: the ::::: blood :::::: group :::: and ::::: tissue ::: of ::::::: patients] Premise1 since [ :::: they ::: can ::: be ::::: raised ::::: from :::::: cloned :::: stem :::: cells ::: of ::: the :::::: patient] Premise2 . In addition, [ : it ::::::: shortens ::: the :::::: healing :::::: process] Premise3 . Usually, [ :: it : is :::: very :::: rare :: to :::: find :: an :::::::::: appropriate ::::: organ ::::: donor] Premise4 and [ :: by ::::: using ::::::: cloning :: in ::::: order :: to :::: raise :::::::: required :::::: organs ::: the ::::::: waiting :::: time ::: can :: be :::::::: shortened :::::::::::: tremendously] Premise5 .",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Second, [ :::::::: scientists ::: use ::::::: animals :: as :::::: models :: in ::::: order :: to ::::: learn ::::: about :::::: human ::::::: diseases] Premise6 and therefore [cloning animals enables novel developments in science] Claim3 . Furthermore, [ :::::: infertile ::::::::: couples ::::: can :::::: bear ::::::::: children :::::: that ::::: are ::::::::::: genetically ::::: related] Premise7 . [ :::: Even ::::: same :::: sex ::::::: couples ::::: can ::::: have ::::::: children] Premise8 . Consequently, [cloning can help couples have children] Claim4 .",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "The third body paragraph illustrates a contra argument and argumentative attack relations: Admittedly, [cloning could be misused for military purposes] Claim5 . For example, [ : it ::::: could ::: be ::::: used :: to :::::::::: manipulate ::::::: human :::::: genes :: in :::::: order :: to :::::: create :::::::: obedient ::::::: soldiers :::: with :::::::::::: extraordinary ::::::: abilities] Premise9 . However, because [ :::: moral :::: and ::::::: ethical :::::: values ::: are :::::::::::: internationally :::::: shared] Premise10 , [ : it ::: is :::: very :::::::: unlikely :::: that ::::::: cloning :::: will :: be :::::::: misused ::: for :::::: militant ::::::::: objectives] Premise11 .",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "Although [ :: in ::::: some :::: cases :::::::::: technology :::::: makes ::::::: people's ::: life ::::: more :::::::::: complicated] premise , [the convenience of technology outweighs its drawbacks] claim .",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "Architecture of the argumentation structure parser.",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "of component, covering paragraph and covering sentence; number of tokens preceding and following the component in its sentence; ratio of component and sentence tokens Component position Component is first or last in paragraph; component present in introduction or conclusion*; Relative position in paragraph; number of preceding and following components in paragraph Indicators Type indicators Forward, backward, thesis or rebuttal indicators present in the component or its preceding tokens First-person indicators \"I\", \"me\", \"my\", \"mine\", or \"myself\" present in component or its preceding tokens Contextual Type indicators in context Forward, backward, thesis, or rebuttal indicators preceding or following the component in its paragraph Shared phrases* Shared noun phrases or verb phrases with the introduction or conclusion (number and binary) Syntactic Subclauses Number of subclauses in the covering sentence Depth of parse tree Depth of the parse tree of the covering sentence Tense of main verb Tense of the main verb of the component Modal verbs Modal verbs present in the component POS distribution POS distribution of the component Probability Type probability Conditional probability of the component being a major claim, claim or premise, given its preceding tokens Discourse Discourse Triples PDTB-discourse relations overlapping with the current component Embedding Combined word embeddings Sum of the word vectors of each word of the component and its preceding tokens",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td colspan=\"3\">Component type Observed agreement Fleiss' \u03ba</td><td>\u03b1 U</td></tr><tr><td>MajorClaim</td><td>97.9%</td><td>0.877</td><td>0.810</td></tr><tr><td>Claim</td><td>88.9%</td><td>0.635</td><td>0.524</td></tr><tr><td>Premise</td><td>91.6%</td><td>0.833</td><td>0.824</td></tr></table>",
"text": "Inter-annotator agreement of argument components.",
"html": null,
"type_str": "table"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>Support</td><td>92.3%</td><td>0.708</td></tr><tr><td>Attack</td><td>99.6%</td><td>0.737</td></tr><tr><td>of claims (cf.</td><td/><td/></tr></table>",
"text": "Inter-annotator agreement of argumentative relations. Relation type Observed agreement Fleiss' \u03ba",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table><tr><td/><td colspan=\"4\">MajorClaim Claim Premise NoArg</td></tr><tr><td>MajorClaim</td><td>0.771</td><td>0.077</td><td>0.010</td><td>0.142</td></tr><tr><td>Claim</td><td>0.036</td><td>0.517</td><td>0.307</td><td>0.141</td></tr><tr><td>Premise</td><td>0.002</td><td>0.131</td><td>0.841</td><td>0.026</td></tr><tr><td>NoArg</td><td>0.059</td><td>0.126</td><td>0.054</td><td>0.761</td></tr></table>",
"text": "Confusion probability matrix of argument component annotations (\"NoArg\" indicates sentences without argumentative content).",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">Support Attack Not-Linked</td></tr><tr><td>Support</td><td>0.605</td><td>0.006</td><td>0.389</td></tr><tr><td>Attack</td><td>0.107</td><td>0.587</td><td>0.307</td></tr><tr><td>Not-Linked</td><td>0.086</td><td>0.004</td><td>0.910</td></tr></table>",
"text": "Confusion probability matrix of argumentative relation annotations (\"Not-Linked\" indicates argument component pairs that are not argumentatively related).",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td/><td/><td>all</td><td colspan=\"2\">avg. per essay standard deviation</td></tr><tr><td>size</td><td>Sentences Tokens Paragraphs</td><td>7,116 147,271 1,833</td><td>18 366 5</td><td>4.2 62.9 0.6</td></tr><tr><td>arg. comp.</td><td>Arg. components MajorClaims Claims Premises Claims (for) Claims (against)</td><td>6,089 751 1,506 3,832 1,228 278</td><td>15 2 4 10 3 1</td><td>3.9 0.5 1.2 3.4 1.3 0.8</td></tr><tr><td>rel.</td><td>Support Attack</td><td>3,613 219</td><td>9 1</td><td>3.3 0.9</td></tr></table>",
"text": "Statistics of the final corpus.",
"html": null,
"type_str": "table"
},
"TABREF5": {
"num": null,
"content": "<table><tr><td/><td>F1</td><td>P</td><td>R</td><td colspan=\"2\">F1 Arg-B F1 Arg-I</td><td>F1 O</td></tr><tr><td>Human upper bound</td><td>0.886</td><td>0.887</td><td>0.885</td><td>0.821</td><td>0.941</td><td>0.892</td></tr><tr><td>Baseline majority</td><td>0.259</td><td>0.212</td><td>0.333</td><td>0</td><td>0.778</td><td>0</td></tr><tr><td>Baseline heuristic</td><td>0.642</td><td>0.664</td><td>0.621</td><td>0.364</td><td>0.867</td><td>0.677</td></tr><tr><td>CRF all features</td><td>\u20200.867</td><td>\u20200.873</td><td>\u20200.861</td><td>\u20200.809</td><td>\u20200.934</td><td>\u20200.857</td></tr></table>",
"text": "Model assessment of argument component identification ( \u2020 = significant improvement over baseline heuristic).",
"html": null,
"type_str": "table"
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>Group</td><td>Feature</td><td>Description</td></tr><tr><td/><td>Unigrams</td><td>Binary lemmatized unigrams of the source and target</td></tr><tr><td>Lexical</td><td/><td>components including preceding tokens (500 most fre-</td></tr><tr><td/><td/><td>quent)</td></tr><tr><td/><td>Part-of-speech</td><td>Binary POS features of source and target components</td></tr><tr><td>Syntactic</td><td>Production rules</td><td>Production rules extracted from the constituent parse tree</td></tr><tr><td/><td/><td>(500 most frequent)</td></tr><tr><td/><td>Token statistics</td><td>Number of tokens of source and target</td></tr><tr><td/><td>Component statistics</td><td>Number of components between source and target; num-</td></tr><tr><td>Structural</td><td>Position features</td><td>ber of components in covering paragraph Source and target present in same sentence; target present</td></tr><tr><td/><td/><td>before source; source and target are first or last component</td></tr><tr><td/><td/><td>in paragraph; pair present in introduction or conclusion*</td></tr><tr><td/><td>Indicator source/target</td><td>Indicator type present in source or target</td></tr><tr><td>Indicator</td><td>Indicators between Indicators context</td><td>Indicator type present between source or target Indicator type follows or precedes source or target in the</td></tr><tr><td/><td/><td>covering paragraph of the pair</td></tr><tr><td>Discourse</td><td>Discourse Triples</td><td>Binary discourse triples of source and target</td></tr><tr><td/><td>Pointwise mutual information</td><td>Ratio of tokens positively or negatively associated with</td></tr><tr><td>PMI</td><td/><td>incoming or outgoing relations; Presence of words nega-tively or positively associated with incoming or outgoing</td></tr><tr><td/><td/><td>relations</td></tr><tr><td>ShNo</td><td>Shared nouns</td><td>Shared nouns between source and target components (number and binary)</td></tr></table>",
"text": "Features used for argumentative relation identification (*indicates genre-dependent features).",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>Group</td><td>Feature</td><td>Description</td></tr><tr><td>Lexical</td><td>Unigrams</td><td>Binary and lemmatized unigrams of the component and its preceding token</td></tr><tr><td/><td>Subjectivity clues</td><td>Presence of negative words; number of negative, positive,</td></tr><tr><td/><td/><td>and neutral words; number of positive words subtracted</td></tr><tr><td>Sentiment</td><td/><td>by the number of negative words</td></tr><tr><td/><td>Sentiment scores</td><td>Five sentiment scores of covering sentence (Stanford senti-</td></tr><tr><td/><td/><td>ment analyzer)</td></tr><tr><td>Syntactic</td><td>POS distribution Production rules</td><td>POS distribution of the component Production rules extracted from the constituent parse tree</td></tr><tr><td/><td>Token statistics</td><td>Number of tokens of covering sentence; number of pre-</td></tr><tr><td/><td/><td>ceding and following tokens in covering sentence; ratio of</td></tr><tr><td>Structural</td><td>Component statistics</td><td>component and sentence tokens Number of components in paragraph; number of preceding</td></tr><tr><td/><td/><td>and following components in paragraph</td></tr><tr><td/><td>Component Position</td><td>Relative position of the argument component in paragraph</td></tr><tr><td>Discourse</td><td>Discourse Triples</td><td>PDTB discourse relations overlapping with the current component</td></tr><tr><td>Embedding</td><td colspan=\"2\">Combined word embeddings Sum of the word vectors of each word of the component and its preceding tokens</td></tr></table>",
"text": "Features used for stance recognition.",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Components</td><td/><td/><td>Relations</td><td/><td colspan=\"3\">Stance recognition</td></tr><tr><td/><td>F1</td><td colspan=\"3\">F1 MC F1 Cl F1 Pr</td><td>F1</td><td colspan=\"2\">F1 NoLi F1 Li</td><td>F1</td><td colspan=\"3\">F1 Sup F1 Att Avg F1</td></tr><tr><td>Human upper bound</td><td>0.868</td><td>0.926</td><td colspan=\"3\">0.754 0.924 0.854</td><td>0.954</td><td colspan=\"2\">0.755 0.844</td><td>0.975</td><td>0.703</td><td>0.855</td></tr><tr><td>Baseline majority</td><td>0.260</td><td>0</td><td>0</td><td colspan=\"2\">0.780 0.455</td><td>0.910</td><td>0</td><td>0.478</td><td>0.957</td><td>0</td><td>0.398</td></tr><tr><td>Baseline heuristic</td><td>0.759</td><td>0.759</td><td colspan=\"3\">0.620 0.899 0.700</td><td>0.901</td><td colspan=\"2\">0.499 0.562</td><td>0.776</td><td>0.201</td><td>0.674</td></tr><tr><td>Base classifier</td><td colspan=\"2\">0.794 \u20200.891</td><td colspan=\"3\">0.611 0.879 0.717</td><td>0.917</td><td colspan=\"4\">0.508 \u20200.680 \u20200.947 \u20200.413</td><td>0.730</td></tr><tr><td>ILP joint model</td><td colspan=\"11\">\u2020 0.752</td></tr></table>",
"text": "\u20210.826 \u20200.891 \u20210.682 \u20210.903 \u20200.751 \u20200.918 \u2020 \u20210.585 \u20200.680 \u20200.947 \u20200.413",
"html": null,
"type_str": "table"
},
"TABREF9": {
"num": null,
"content": "<table><tr><td colspan=\"3\">the probability feature</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">Components</td><td/><td/><td>Relations</td><td/><td colspan=\"3\">Stance recognition</td><td/></tr><tr><td/><td>F1</td><td>F1 Cl</td><td>F1 Pr</td><td>F1</td><td>F1 NoLi</td><td>F1 Li</td><td>F1</td><td>F1 Sup</td><td>F1 Att</td><td>Avg F1</td></tr><tr><td>Baseline heuristic</td><td>0.712</td><td>0.536</td><td>0.888</td><td>0.618</td><td>0.856</td><td>0.380</td><td>0.542</td><td>0.773</td><td>0.293</td><td>0.624</td></tr><tr><td>Base classifier</td><td>\u20200.830</td><td>\u20200.712</td><td>0.937</td><td>\u20200.650</td><td colspan=\"4\">\u20200.841 \u20200.446 \u20200.745 \u20200.855</td><td>\u20200.628</td><td>0.742</td></tr><tr><td>ILP joint model</td><td colspan=\"3\">\u2020 \u20210.857 \u2020 \u20210.770 \u20200.943</td><td colspan=\"5\">\u2020 \u20210.683 \u2020 \u20210.881 \u2020 \u20210.486 \u20200.745 \u20200.855</td><td>\u20200.628</td><td>0.762</td></tr><tr><td>Best EG</td><td>0.869</td><td>-</td><td>-</td><td>0.693</td><td>-</td><td>0.502</td><td>0.710</td><td>-</td><td>-</td><td>0.757</td></tr><tr><td>MP+p</td><td>0.831</td><td>-</td><td>-</td><td>0.720</td><td>-</td><td>0.546</td><td>0.514</td><td>-</td><td>-</td><td>0.688</td></tr></table>",
"text": "of the component classification model. Additionally, we removed all genre-dependent features of both base classifiers.",
"html": null,
"type_str": "table"
},
"TABREF10": {
"num": null,
"content": "<table><tr><td/><td/><td/><td>predictions</td><td/></tr><tr><td/><td/><td colspan=\"3\">MajorClaim Claim Premise</td></tr><tr><td>actual</td><td>MajorClaim Claim Premise</td><td>139 20 0</td><td>12 202 74</td><td>2 82 735</td></tr></table>",
"text": "Confusion matrix of the ILP joint model of component classification on our test data.",
"html": null,
"type_str": "table"
},
"TABREF12": {
"num": null,
"content": "<table><tr><td>Table B.1</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">List of lexical indicators.</td><td/><td/></tr><tr><td>Class</td><td colspan=\"2\">Training data</td><td colspan=\"2\">Test data</td></tr><tr><td/><td colspan=\"2\">Identification</td><td/><td/></tr><tr><td>Arg-B</td><td>4,823</td><td>(4.1%)</td><td>1,266</td><td>(4.3%)</td></tr><tr><td>Arg-I</td><td colspan=\"3\">75,053 (63.6%) 18,655</td><td>(63.6%)</td></tr><tr><td>O</td><td colspan=\"2\">38,071 (32.3%)</td><td>9,403</td><td>(32.1%)</td></tr><tr><td/><td colspan=\"3\">Component classification</td><td/></tr><tr><td>MajorClaim</td><td>598</td><td>(12.4%)</td><td>153</td><td>(12.1%)</td></tr><tr><td>Claim</td><td>1,202</td><td>(24.9%)</td><td>304</td><td>(24.0%)</td></tr><tr><td>Premise</td><td>3,023</td><td>(62.7%)</td><td>809</td><td>(63.9%)</td></tr><tr><td/><td colspan=\"3\">Relation identification</td><td/></tr><tr><td>Not-Linked</td><td colspan=\"2\">14,227 (82.5%)</td><td>4,113</td><td>(83.5%)</td></tr><tr><td>Linked</td><td colspan=\"2\">3,023 (17.5%)</td><td>809</td><td>( 16.5%)</td></tr><tr><td/><td colspan=\"3\">Stance recognition</td><td/></tr><tr><td>Support</td><td>3,820</td><td>(90.4%)</td><td>1,021</td><td>(91.7%)</td></tr><tr><td>Attack</td><td>405</td><td>(9.6%)</td><td>92</td><td>(8.3%)</td></tr></table>",
"text": "Class distributions in training data and test data.",
"html": null,
"type_str": "table"
},
"TABREF13": {
"num": null,
"content": "<table><tr><td>.1</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"7\">Argument component identification ( \u2020 = significant improvement over baseline heuristic).</td></tr><tr><td/><td>F1</td><td>P</td><td>R</td><td colspan=\"2\">F1 Arg-B F1 Arg-I</td><td>F1 O</td></tr><tr><td>Baseline majority</td><td>0.259</td><td>0.212</td><td>0.333</td><td>0</td><td>0.778</td><td>0</td></tr><tr><td>Baseline heuristic</td><td>0.628</td><td>0.647</td><td>0.610</td><td>0.350</td><td>0.869</td><td>0.660</td></tr><tr><td>CRF only structural CRF only syntactic</td><td colspan=\"3\">\u20200.748 \u20200.757 \u20200.740 \u20200.730 \u20200.752 \u20200.710</td><td>\u20200.542 \u20200.638</td><td>\u20200.906 0.868</td><td>\u20200.789 0.601</td></tr><tr><td>CRF only lexSyn</td><td colspan=\"3\">\u20200.762 \u20200.780 \u20200.744</td><td>\u20200.714</td><td>\u20200.873</td><td>0.620</td></tr><tr><td>CRF only probability</td><td colspan=\"2\">0.605 \u20200.698</td><td>0.534</td><td>\u20200.520</td><td>0.806</td><td>0.217</td></tr><tr><td colspan=\"4\">CRF w/o genre-dependent \u20200.847 \u20200.851 \u20200.844</td><td>\u20200.778</td><td>\u20200.925</td><td>\u20200.835</td></tr><tr><td>CRF all features</td><td colspan=\"3\">\u20200.849 \u20200.853 \u20200.846</td><td>\u20200.777</td><td>\u20200.927</td><td>\u20200.842</td></tr></table>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF14": {
"num": null,
"content": "<table><tr><td/><td>F1</td><td>P</td><td>R</td><td colspan=\"2\">F1 MajorClaim F1 Claim F1 Premise</td></tr><tr><td>Baseline majority</td><td colspan=\"3\">0.257 0.209 0.333</td><td>0</td><td>0</td><td>0.771</td></tr><tr><td>Baseline heuristic</td><td colspan=\"3\">0.724 0.724 0.723</td><td>0.740</td><td>0.560</td><td>0.870</td></tr><tr><td>SVM only lexical</td><td colspan=\"3\">0.591 0.603 0.580</td><td>0.591</td><td>0.405</td><td>0.772</td></tr><tr><td>SVM only structural</td><td colspan=\"3\">\u20200.746 0.726 \u20200.767</td><td>\u20200.803</td><td>0.551</td><td>0.870</td></tr><tr><td>SVM only contextual</td><td colspan=\"3\">0.601 0.603 0.600</td><td>0.656</td><td>0.248</td><td>0.836</td></tr><tr><td>SVM only indicators</td><td colspan=\"3\">0.508 0.596 0.443</td><td>0.415</td><td>0.098</td><td>0.799</td></tr><tr><td>SVM only syntactic</td><td colspan=\"3\">0.387 0.371 0.405</td><td>0.313</td><td>0</td><td>0.783</td></tr><tr><td>SVM only probability</td><td colspan=\"3\">0.561 0.715 0.462</td><td>0.448</td><td>0.002</td><td>0.792</td></tr><tr><td>SVM only discourse</td><td colspan=\"3\">0.521 0.563 0.484</td><td>0.016</td><td>0.538</td><td>0.786</td></tr><tr><td>SVM only embeddings</td><td colspan=\"3\">0.588 0.620 0.560</td><td>0.560</td><td>0.355</td><td>0.815</td></tr><tr><td>SVM all w/o prob &amp; emb</td><td colspan=\"3\">\u20200.771 \u20200.771 \u20200.772</td><td>\u20200.855</td><td>0.596</td><td>0.863</td></tr><tr><td colspan=\"4\">SVM w/o genre-dependent \u20200.742 \u20200.745 0.739</td><td>\u20200.819</td><td>0.560</td><td>0.847</td></tr><tr><td>SVM all features</td><td colspan=\"3\">\u20200.773 \u20200.774 \u20200.771</td><td>\u20200.865</td><td>0.592</td><td>0.861</td></tr></table>",
"text": "Argument component classification ( \u2020 = significant improvement over baseline heuristic).",
"html": null,
"type_str": "table"
},
"TABREF15": {
"num": null,
"content": "<table><tr><td/><td>F1</td><td>P</td><td>R</td><td>F1 Not-Linked</td><td>F1 Linked</td></tr><tr><td>Baseline majority</td><td>0.455</td><td>0.418</td><td>0.500</td><td>0.910</td><td>0</td></tr><tr><td>Baseline heuristic</td><td>0.660</td><td>0.657</td><td>0.664</td><td>0.885</td><td>0.436</td></tr></table><table><tr><td/><td colspan=\"3\">Parameter</td><td colspan=\"4\">Components</td><td colspan=\"3\">Relations</td><td colspan=\"3\">Statistics</td></tr><tr><td/><td>\u03c6 r</td><td>\u03c6 cr</td><td>\u03c6 c</td><td>F1</td><td>F1 MC</td><td>F1 Cl</td><td>F1 Pr</td><td>F1</td><td>F1 NoLi</td><td>F1 Li</td><td>Cl\u2192Pr</td><td>Pr\u2192Cl</td><td>Trees</td></tr><tr><td>Base heuristic</td><td>-</td><td>-</td><td>-</td><td>0.724</td><td>0.740</td><td>0.560</td><td>0.870</td><td>0.660</td><td>0.885</td><td>0.436</td><td>-</td><td>-</td><td>100%</td></tr><tr><td>Base classifier</td><td>-</td><td>-</td><td>-</td><td>\u20200.773</td><td>\u20200.865</td><td>0.592</td><td>0.861</td><td>\u20200.736</td><td>\u20200.917</td><td>\u20200.547</td><td>-</td><td>-</td><td>20.9%</td></tr><tr><td>Base+heuristic</td><td>-</td><td>-</td><td>-</td><td>\u20200.776</td><td>\u20200.865</td><td>0.601</td><td>0.861</td><td>\u20200.739</td><td>\u20200.917</td><td>\u20200.555</td><td>0</td><td>31</td><td>24.2%</td></tr><tr><td>ILP-na\u00efve</td><td>1</td><td>0</td><td>0</td><td>\u20200.765</td><td>\u20200.865</td><td>\u20200.591</td><td>0.761</td><td>\u20200.732</td><td>\u2020\u20210.918</td><td>\u20200.530</td><td>206</td><td>1,144</td><td>100%</td></tr><tr><td>ILP-relation</td><td>1/2</td><td>1/2</td><td>0</td><td>\u2020\u20210.809</td><td>\u20200.865</td><td>\u2020\u20210.677</td><td>\u20210.875</td><td>\u2020\u20210.759</td><td>\u2020\u20210.919</td><td>\u2020\u20210.598</td><td>299</td><td>571</td><td>100%</td></tr><tr><td>ILP-claim</td><td>0</td><td>0</td><td>1</td><td>\u20200.740</td><td>\u20200.865</td><td>0.549</td><td>0.777</td><td>0.666</td><td>0.894</td><td>0.434</td><td>229</td><td>818</td><td>100%</td></tr><tr><td>ILP-equal</td><td>1/3</td><td>1/3</td><td>1/3</td><td>\u2020\u20210.822</td><td>\u20200.865</td><td>\u2020\u20210.699</td><td>\u2020\u20210.903</td><td>\u2020\u20210.751</td><td>\u20200.913</td><td>\u2020\u20210.590</td><td>294</td><td>280</td><td>100%</td></tr><tr><td>ILP-same</td><td>1/4</td><td>1/4</td><td>1/2</td><td>\u2020\u20210.817</td><td>\u20200.865</td><td>\u2020\u20210.687</td><td>\u2020\u20210.898</td><td>\u2020\u20210.738</td><td>\u20200.908</td><td>\u2020\u20210.569</td><td>264</td><td>250</td><td>100%</td></tr><tr><td>ILP-balanced</td><td>1/2</td><td>1/4</td><td>1/4</td><td>\u2020\u20210.823</td><td>\u20200.865</td><td>\u2020\u20210.701</td><td>\u2020\u20210.904</td><td>\u2020\u20210.752</td><td>\u20200.913</td><td>\u2020\u20210.591</td><td>297</td><td>283</td><td>100%</td></tr></table>",
"text": "Argumentative relation identification ( \u2020 = significant improvement over baseline heuristic; \u2021 = significant difference compared to SVM all features). Table C.4: Joint modeling approach ( \u2020 = significant improvement over base heuristic; \u2021 = significant improvement over base classifier; Cl\u2192Pr = number of claims converted to premises; Pr\u2192Cl = number of premises converted to claims; Trees = percentage of correctly identified trees).",
"html": null,
"type_str": "table"
}
}
}
}