{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:09:43.873653Z"
},
"title": "\"Sharks are not the threat humans are\": Argument Component Segmentation in School Student Essays",
"authors": [
{
"first": "Tariq",
"middle": [],
"last": "Alhindi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {}
},
"email": "dghosh@ets.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Argument mining is often addressed by a pipeline method where segmentation of text into argumentative units is conducted first and proceeded by an argument component identification task. In this research, we apply a token-level classification to identify claim and premise tokens from a new corpus of argumentative essays written by middle school students. To this end, we compare a variety of state-of-the-art models such as discrete features and deep learning architectures (e.g., BiLSTM networks and BERT-based architectures) to identify the argument components. We demonstrate that a BERT-based multi-task learning architecture (i.e., token and sentence level classification) adaptively pretrained on a relevant unlabeled dataset obtains the best results.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Argument mining is often addressed by a pipeline method where segmentation of text into argumentative units is conducted first and proceeded by an argument component identification task. In this research, we apply a token-level classification to identify claim and premise tokens from a new corpus of argumentative essays written by middle school students. To this end, we compare a variety of state-of-the-art models such as discrete features and deep learning architectures (e.g., BiLSTM networks and BERT-based architectures) to identify the argument components. We demonstrate that a BERT-based multi-task learning architecture (i.e., token and sentence level classification) adaptively pretrained on a relevant unlabeled dataset obtains the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Computational argument mining focuses on subtasks such as identifying the Argumentative Discourse Units (ADUs) (Peldszus and Stede, 2013) , their nature (i.e., claim or premise), and the relation (i.e., support/attack) between them Gurevych, 2014, 2017; Stede and Schneider, 2018; Nguyen and Litman, 2018; Lawrence and Reed, 2020) . Argumentation is essential in academic writing as it enhances the logical reasoning, as well as, critical thinking capacities of students (Ghosh et al., 2020) . Thus, in recent times, argument mining has been used to assess students' writing skills in essay scoring and provide feedback on the writing (Song et al., 2014; Somasundaran et al., 2016; Zhang and Litman, 2020) . Diet soda , sugar -free gum, and low -calorie sweeteners are what most people see as a way to sweeten up a day without the calories.",
"cite_spans": [
{
"start": 111,
"end": 137,
"text": "(Peldszus and Stede, 2013)",
"ref_id": "BIBREF41"
},
{
"start": 232,
"end": 253,
"text": "Gurevych, 2014, 2017;",
"ref_id": null
},
{
"start": 254,
"end": 280,
"text": "Stede and Schneider, 2018;",
"ref_id": "BIBREF55"
},
{
"start": 281,
"end": 305,
"text": "Nguyen and Litman, 2018;",
"ref_id": "BIBREF37"
},
{
"start": 306,
"end": 330,
"text": "Lawrence and Reed, 2020)",
"ref_id": "BIBREF29"
},
{
"start": 471,
"end": 491,
"text": "(Ghosh et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 635,
"end": 654,
"text": "(Song et al., 2014;",
"ref_id": "BIBREF52"
},
{
"start": 655,
"end": 681,
"text": "Somasundaran et al., 2016;",
"ref_id": "BIBREF51"
},
{
"start": 682,
"end": 705,
"text": "Zhang and Litman, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the lack of calories, artificial sweeteners have multiple negative health e\u21b5ects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Over the past century, science has made it possible to replicate food with fabricated alternatives that simplify weight loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although many thought these new replacements would benefit overall health, there are more negative e\u21b5ects on manufactured food than the food they replaced. While argument mining literature has addressed students writing in the educational context, so far, it has primarily addressed college level writing (Blanchard et al., 2013; Persing and Ng, 2015; Beigman Klebanov et al., 2017; except for a very few ones (Attali and Burstein, 2006; Lugini et al., 2018; . Instead, in this paper, we concentrate on identifying arguments from essays written by middle school students. To this end, we perused a new corpus of 145 argumentative essays written by middle school students to identify the argument components. These essays are obtained from an Educational app -Writing Mentor -that operates as a Google-docs Add-on. 1 Normally, research that investigates college students writing in the context of argument mining apply a pipeline of subtasks to first detect arguments at the token-level text units and subsequently classify the text units to argument components . However, middle school student essays are vastly different from college students' writing (detailed in Section 3). We argue they are more di cult to analyze through the pipeline approach due to run-on sentences, unsupported claims, and the presence of several claims in a sentence. Thus, instead of segmenting the text into argumentative/non-argumentative units first, we conduct a token-level classification task to identify the type of the argument component (e.g., B/I tokens from claims and premises) directly by joining the first and the second subtask in a single task. Figure 1 presents an excerpt from an annotated essay with their corresponding gold annotations of claims (e.g., \"Diet soda . . . the calories\") and premises (e.g., \"there are . . . replaced\"). The legends represent the tokens by the standard BIO notations. 
We propose a detailed experimental setup to identify the argument components using both feature-based machine learning techniques and deep learning models. For the former, we used several structural, lexical, and syntactic features in a sequence classification framework using the Conditional Random Field (CRF) classifier (La\u21b5erty et al., 2001) . For the latter, we employ a BiLSTM network and, finally, a transformer architecture -BERT (Devlin et al., 2019 ) with its pretrained and task-specific finetuned models. We achieve the best result from a particular BERT architecture (7.5% accuracy improvement over the discrete features) that employs a joint multitask learning objective with an uncertainty-based weighting of two task-specific losses: (a) the main task of token-level sequence classification, and (b) the auxiliary task of sentence classification (i.e., whether a sentence contains argument or not). We make the dataset (student essays) from our research publicly available. 2",
"cite_spans": [
{
"start": 305,
"end": 329,
"text": "(Blanchard et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 330,
"end": 351,
"text": "Persing and Ng, 2015;",
"ref_id": "BIBREF44"
},
{
"start": 352,
"end": 382,
"text": "Beigman Klebanov et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 410,
"end": 437,
"text": "(Attali and Burstein, 2006;",
"ref_id": "BIBREF5"
},
{
"start": 438,
"end": 458,
"text": "Lugini et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 814,
"end": 815,
"text": "1",
"ref_id": null
},
{
"start": 2219,
"end": 2241,
"text": "(La\u21b5erty et al., 2001)",
"ref_id": "BIBREF28"
},
{
"start": 2334,
"end": 2354,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 1639,
"end": 1647,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The majority of the prior work on argument mining addressed the problems of argument segmentation, component, and relation identification modeled in a pipeline of subtasks (Peldszus and Stede, 2015; Potash et al., 2017; Niculae et al., 2017 ) except a few research (Schulz et al., 2019) . However, most of the research assumes the availability of segmented argumentative units and do the subsequent tasks such as the classification of argumentative component types (Biran and Rambow, 2011; Stab and Gurevych, 2014; Park and Cardie, 2014) , argument relations (Ghosh et al., 2016; Nguyen and Litman, 2016) , and argument schemes (Hou and Jochim, 2017; Feng and Hirst, 2011) .",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "Stede, 2015;",
"ref_id": "BIBREF42"
},
{
"start": 199,
"end": 219,
"text": "Potash et al., 2017;",
"ref_id": "BIBREF47"
},
{
"start": 220,
"end": 240,
"text": "Niculae et al., 2017",
"ref_id": "BIBREF38"
},
{
"start": 265,
"end": 286,
"text": "(Schulz et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 465,
"end": 489,
"text": "(Biran and Rambow, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 490,
"end": 514,
"text": "Stab and Gurevych, 2014;",
"ref_id": "BIBREF53"
},
{
"start": 515,
"end": 537,
"text": "Park and Cardie, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 559,
"end": 579,
"text": "(Ghosh et al., 2016;",
"ref_id": "BIBREF21"
},
{
"start": 580,
"end": 604,
"text": "Nguyen and Litman, 2016)",
"ref_id": "BIBREF36"
},
{
"start": 628,
"end": 650,
"text": "(Hou and Jochim, 2017;",
"ref_id": "BIBREF24"
},
{
"start": 651,
"end": 672,
"text": "Feng and Hirst, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work on argument segmentation includes approaches that model the task as a sentence classification to argumentative or nonargumentative sentences (Moens et al., 2007; Palau and Moens, 2009; Mochales and Moens, 2011; Rooney et al., 2012; Lippi and Torroni, 2015; Ajjour et al., 2017; Chakrabarty et al., 2019) , or by defining heuristics to identify argumentative segment boundaries (Madnani et al., 2012; Persing and Ng, 2015; . Although we conduct segmentation, we focus on the token-level classification to directly identify the argument component's type. This setup is related to Schulz et al. (2018) where authors analyzed students' diagnostic reasoning skills via token level identification. Our joint model using BERT is similar to . However, we set the main task as the token-level classification where the auxiliary task of argumentative sentence identification assists the main task to attain a better performance.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Moens et al., 2007;",
"ref_id": "BIBREF35"
},
{
"start": 176,
"end": 198,
"text": "Palau and Moens, 2009;",
"ref_id": "BIBREF39"
},
{
"start": 199,
"end": 224,
"text": "Mochales and Moens, 2011;",
"ref_id": "BIBREF34"
},
{
"start": 225,
"end": 245,
"text": "Rooney et al., 2012;",
"ref_id": "BIBREF48"
},
{
"start": 246,
"end": 270,
"text": "Lippi and Torroni, 2015;",
"ref_id": "BIBREF30"
},
{
"start": 271,
"end": 291,
"text": "Ajjour et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 292,
"end": 317,
"text": "Chakrabarty et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 391,
"end": 413,
"text": "(Madnani et al., 2012;",
"ref_id": "BIBREF33"
},
{
"start": 414,
"end": 435,
"text": "Persing and Ng, 2015;",
"ref_id": "BIBREF44"
},
{
"start": 592,
"end": 612,
"text": "Schulz et al. (2018)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As stated earlier, most of the research on argumentative writing in an educational context focuses on identifying argument structures (i.e., argument components and their relations) (Persing and Ng, 2016; Nguyen and Litman, 2016) as well as to predict essays scores from features derived from the essays (e.g., number of claims and premises, number of supported claims, number of dangling claims) (Ghosh et al., 2016) . Related investigations have also examined the challenge of scoring a certain dimension of essay quality, such as relevance to the prompt (Persing and Ng, 2014) , opinions and their targets (Farra et al., 2015) , argument strength (Persing and Ng, 2015) among others.",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Persing and Ng, 2016;",
"ref_id": "BIBREF45"
},
{
"start": 205,
"end": 229,
"text": "Nguyen and Litman, 2016)",
"ref_id": "BIBREF36"
},
{
"start": 397,
"end": 417,
"text": "(Ghosh et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 557,
"end": 579,
"text": "(Persing and Ng, 2014)",
"ref_id": "BIBREF43"
},
{
"start": 609,
"end": 629,
"text": "(Farra et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 650,
"end": 672,
"text": "(Persing and Ng, 2015)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Majority of the above research are conducted in the context of college-level writing. For instance, Nguyen and Litman (2018) investigated argument structures in TOEFL11 corpus (Blanchard et al., 2013) which was also the main focus of (Ghosh et al., 2016) . Beigman Klebanov et al. 2017and Persing and Ng (2015) analyzed writing of university students and Stab and Gurevych (2017) used data from \"essayforum.com\", where college entrance examination is the largest forum. Although, writing quality in essays by young writers has been addressed (Attali and Burstein, 2006; Attali and Powers, 2008; Deane, 2014) , identification of arguments was not part of these studies. Computational analysis of arguments from school students is in infancy except for a few research (Lugini et al., 2018; Afrin et al., 2020; Ghosh et al., 2020) . We believe our dataset (Section 3) will be useful for researchers working at the intersection of argument mining and education.",
"cite_spans": [
{
"start": 100,
"end": 124,
"text": "Nguyen and Litman (2018)",
"ref_id": "BIBREF37"
},
{
"start": 176,
"end": 200,
"text": "(Blanchard et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 234,
"end": 254,
"text": "(Ghosh et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 289,
"end": 310,
"text": "Persing and Ng (2015)",
"ref_id": "BIBREF44"
},
{
"start": 542,
"end": 569,
"text": "(Attali and Burstein, 2006;",
"ref_id": "BIBREF5"
},
{
"start": 570,
"end": 594,
"text": "Attali and Powers, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 595,
"end": 607,
"text": "Deane, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 766,
"end": 787,
"text": "(Lugini et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 788,
"end": 807,
"text": "Afrin et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 808,
"end": 827,
"text": "Ghosh et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We obtained a large number of English essays (over 10K) through the Writing Mentor Educational App. This App is a Google Docs add-on designed to provide instructional writing support, especially for academic writing. The addon provides students to write argumentative or narrative essays and receive feedback on their writings. We selected a subset of 145 argumentative essays for the annotation purpose. Essays were either self-labeled as \"argumentative\" or annotators identified their argumentative nature from the titles (e.g., \"Should Artificial Sweeteners be Banned in America ?\"). 3 Essays covered various social issues related to climate change, veteran care, e\u21b5ects of wars, whether sharks are dangerous or not, etc. We denote this corpus as ARG2020 in the remaining sections of the paper. We employed three expert annotators (with academic and professional background in Linguistics and Education) to identify the argument components. The annotators were instructed to read sentences from the essays and identify the claims (defined as, \"a potentially arguable statement that indicates a person is arguing for or arguing against some-thing. Claims are not clarification or elaboration statements.\") that the argument is in reference to. Next, once the claims are identified, the annotators annotated the premises (defined as, \"reasons given by either for supporting or attacking the claims making those claims more than mere assertions\"). 4 Earlier research has addressed college level writing, and even such resources are scarce except for a few corpora ) (denoted as SG2017 in this paper). On the contrary, ARG2020 is based on middle school students writing, which di\u21b5ers from college level writing SG2017 in several aspects briefly discussed in the next paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "First, we notice that essays in SG2017 maintain distinct paragraphs such as the introduction (initiates the major claim in the essay), the conclusion (summarizes the arguments), and a few paragraphs in between that express many claims and their premises. However, essays written by middle school students do not always comply with such writing conventions to keep a concrete introduction and conclusion paragraph, rather, they write many short paragraphs (7-8 paragraphs on average) per essay while each paragraph contains multiple claims. Second, in general, claims in college essays in SG2017 are justified by one or multiple premises, whereas ARG2020 has many unsupported claims. For instance, the excerpt from the annotated essay in Figure 1 contains two unsupported claims (e.g., \"Diet soda, sugar . . . without the calories\" and \"artificial sweeteners . . . health e\u21b5ects\"). Third, middle school students often put opinions (e.g., \"Sugar substitutes produce sweet food without feeling guilty consequences\") or matter-of-fact statements (e.g., \"Even canned food and dairy products can be artificially sweetened\") that are not argumentative claims but structurally they are identical to claims. Fourth, multiple claims frequently appear in a single sentence that are separated by discourse markers or commas. Fifth, many essays contain run-on sentences (e.g., \"this is hard on the family, they have a hard time adjusting\") that make the task of parsing even tricky. We argue these reasons make identifying argument claims and premises from ARG2020 more challenging. The annotators were presented with specific guidelines and examples for annotation. We conducted a pilot task first where all the three annotators annotated ten essays and exchanged their notes for calibration. Following that, we continued pair-wise annotation tasks (30 essays for each pair of annotators), and finally, individual annotators annotated the remaining essays. 
Since the annotation task involves identifying each argumentative component's words, we have to account for fuzzy boundaries (e.g., in-claim vs. not-in-claim tokens) to measure the IAA. We considered the Krippendor\u21b5's \u21b5 (Krippendor\u21b5, 2004) metric to compute the IAA. We measure the \u21b5 between each pair of annotators and report the average. For claim we have a modest agreement of 0.71 that is comparable to (Stab and Gurevych, 2014) and for premise, we have a high agreement of 0.90.",
"cite_spans": [
{
"start": 2165,
"end": 2184,
"text": "(Krippendor\u21b5, 2004)",
"ref_id": "BIBREF27"
},
{
"start": 2352,
"end": 2377,
"text": "(Stab and Gurevych, 2014)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 737,
"end": 745,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Out of the 145 essays from ARG2020 we randomly assign 100 essays for training, 10 essays for dev, and the remaining 35 essays for test. Table 1 represents the data statistics in the standard BIO format. We find the number of claims is almost six times the number of premises showing that the middle school students often fail to justify their proposed claims. We keep identifying opinions and argumentative relations (support/attack) as future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Majority of the argumentation research first segment the text in argumentative and nonargumentative segments and then identify the structures such as components and relations . Petasis (2019) mentioned that the granularity of computational approaches addressing the second task of argument component identification is diverse because some approaches consider detecting components at the clause level (e.g., approaches focused on the SG2017 corpus Gurevych, 2014, 2017; Ajjour et al., 2017; ) and others at the sentence levels (Chakrabarty et al., 2019; . We avoided both approaches for the following two reasons. First, middle school student essays often contain run-on sentences, and it is unclear how to handle clause level annotations because parsing might be inaccurate. Second, around 62% of the premises in the training set appears to be in the same sentence as their claims. This makes sentence classification to either claim or premise impractical ( Figure 1 contains one such example). Thus, instead of relying on the pipeline approach, we tackle the problem by identifying argument components from the token-level classification akin to Schulz et al. (2019) . Our unit of sequence tagging is a sentence, unlike a passage . We apply a five-way tokenclassification (or sequence tagging) task while using the standard BIO notation for the claim and premise tokens (See Table 1 ). Any token that is not \"B-Claim\", \"I-Claim\", \"B-Premise\", or \"I-Premise\" is denoted as \"O-Arg\". As expected, the number of \"O-Arg\" tokens is much larger than the other categories (see Table 1 ).",
"cite_spans": [
{
"start": 177,
"end": 191,
"text": "Petasis (2019)",
"ref_id": "BIBREF46"
},
{
"start": 447,
"end": 468,
"text": "Gurevych, 2014, 2017;",
"ref_id": null
},
{
"start": 469,
"end": 489,
"text": "Ajjour et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 526,
"end": 552,
"text": "(Chakrabarty et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 1147,
"end": 1167,
"text": "Schulz et al. (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 958,
"end": 966,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1376,
"end": 1383,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1570,
"end": 1577,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We explore three separate machine learning approaches well-established for studying token-based classification. First, we experiment with the sequence classifier Conditional Random Field (CRF) that exploits state-of-theart discrete features. Second, we implement a BiLSTM network (with and without CRF) based on the BERT embeddings. Finally, we experiment with the fine-tuned BERT models with/without multitask learning setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Akin to we experiment with three groups of discrete features: structural, syntactic and lexico-syntactic with some modifications. In addition, we experiment with embedding features extracted from the contextualized pre-trained language model of BERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Models",
"sec_num": "4.1"
},
{
"text": "Discrete Features For each token in a given essay, we extract structural features that include token position (e.g., the relative and absolute position of the token in the sentence, paragraph and, essay from the beginning of the essay) and punctuation features (e.g., whether the token is, preceded, or succeeded by punctuation). Such position features have shown to be useful in identifying claims and premises against sentences that do not contain any argument . We also extract syntactic features for each token that include part-of-speech tag of the token and normalized length to the lowest common ancestor (LCA) of the token and its preceding (and succeeding) token in the parse tree. In contrast with , we use dependency parsing as the base for the syntactic features rather than constituency parsing. Finally, we extract lexico-syntactic features (denoted as lexSyn in Table 2 ) that include the dependency relation governing the token in the dependency parse tree and the token itself, plus its governing dependency relation as another feature. This is also di\u21b5erent than where the authors used lexicalized-parse tree (Collins, 2003) to generate their lexico-syntactic features. These features are e\u21b5ective in identifying the argumentative discourse units. We also observed that using dependency parse trees as a basis for the lexico-syntactic features yields better results than constituency parse trees in our pilot experiments.",
"cite_spans": [
{
"start": 1127,
"end": 1142,
"text": "(Collins, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 877,
"end": 884,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Feature-based Models",
"sec_num": "4.1"
},
{
"text": "Embedding Features from BERT BERT (Devlin et al., 2019) , a bidirectional transformer model, has achieved state-of-the-art performance in many NLP tasks. BERT is initially trained on the tasks of masked language modeling (MLM) and next sentence prediction (NSP) over very large corpora of English Wikipedia and BooksCorpus. During its training, a special token \"[CLS]\" is added to the beginning of each training instance, and the \"[SEP]\" tokens are added to indicate the end of utterance(s) and separate, in case of two utterances.",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Models",
"sec_num": "4.1"
},
{
"text": "Pretrained BERT (\"bert-base-uncased\") can be used directly by extracting the token representations' embeddings. We use the average embeddings of the top four layers as suggested in Devlin et al. (2019) . For tokens with more than one word-piece when running BERT's tokenizer, their final embeddings feature is the average vector of all of their word-pieces. This feature yields a 768D-long vector that we use individually as well as in combination with the other discrete features in our experiments. We utilize the sklearn-crfsuite tool for our CRF experiments. 5",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Models",
"sec_num": "4.1"
},
{
"text": "To compare our models with standard sequence tagging models for argument segmentation (Petasis, 2019; Ajjour et al., 2019; Hua et al., 2019) , we experiment with the BiLSTM-CRF sequence tagging model introduced by Ma and Hovy (2016) using the flair library (Akbik et al., 2019) . We use the standard BERT (\"bertbase-uncased\") embeddings (768D) in the embedding layer and projected to a single-layer BiLSTM of 256D. BiLSTMs provide the context to the token's left and right, which proved to be useful for sequence tagging tasks. We train this model with and without a CRF decoder to see its e\u21b5ect on this task. The CRF layer considers both the output of the BiL-STM layer and the other neighboring tokens' labels, which improves the accuracy of the modeling desired transitions between labels (Ma and Hovy, 2016) .",
"cite_spans": [
{
"start": 86,
"end": 101,
"text": "(Petasis, 2019;",
"ref_id": "BIBREF46"
},
{
"start": 102,
"end": 122,
"text": "Ajjour et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 123,
"end": 140,
"text": "Hua et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 257,
"end": 277,
"text": "(Akbik et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 792,
"end": 811,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF Models",
"sec_num": "4.2"
},
{
"text": "Pre-trained BERT can also be used for transfer learning by fine-tuning on a downstream task, i.e., claim and premise token identification task where training instances are from the labeled dataset ARG2020. We denote this model as BERT bl . Besides fine-tuning with the labeled data, we also experiment with a multitask learning setting as well as conducted adaptive pretraining (Gururangan et al., 2020), that is continued pretraining on unlabeled corpora that can be task and domain relevant. We discuss the settings below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "Transformers Multitask Learning Multitask learning aims to leverage useful information in multiple related tasks to improve the performance of each task (Caruana, 1997) . We treat the sequence labeling task of fiveway token-level argument classification as the main task while we adopt the binary task of sentence-level argument identification (i.e., whether the candidate sentence contains an argument (Ghosh et al., 2020) as the auxiliary task. Here, if any sentence in the candidate essay contains claim or premise token(s), the sentence is labeled as the positive category (i.e., argumentative), otherwise non-argumentative. We hypothesize that this auxiliary task of identifying argumentative sentences in a multitask setting could be useful for the main task of token-level classification.",
"cite_spans": [
{
"start": 153,
"end": 168,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 403,
"end": 423,
"text": "(Ghosh et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "We deploy two classification heads -one for each task -and the relevant gold labels are passed to them. For the auxiliary task, the learned representation for the \"[CLS]\" token is passed to the classification head. The two losses from these individual heads are added and propagated back through the model. This allows BERT to model the nuances of both tasks and their interdependence simultaneously. However, instead of simply adding the losses from the two tasks, we employ dynamic weighting of task-specific losses during the training process, based on the homoscedastic uncertainty of tasks, as proposed in Kendall et al. (2018) :",
"cite_spans": [
{
"start": 611,
"end": 632,
"text": "Kendall et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "L = X t 1 2 2 t L t + log 2 t (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "where L t and t depict the task-specific loss and its variance (updated through backpropagation), respectively, over the training instances. We denote this model as BERT mt .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "Adaptive Pre-training Learning We adaptively pretrained BERT over two unlabeled corpora. First, we train on a task relevant Reddit corpus of 5.5 million opinionated claims that was released by Chakrabarty et al. (2019) . These claims are self-labeled by the acronym: IMO/IMHO (in my (humble) opinion), which is commonly used in Reddit. We denote this model as BERT IMHO . Next, we train on a task and domain relevant corpus of around 10K essays that we obtained originally (See section 3) from the Writing Mentor App, excluding the annotated set of ARG2020 essays. We denote this model as BERT essay . Figure 2 displays the use of the adaptive pretraining step (in orange block) and the two classification heads (in green blocks) employed for the multitask variation. Figure 2 : BERT fine-tuning with adaptive pretraining on unlabeled data from a relevant domain followed by fine-tuning on the labeled dataset with the multitask variation.",
"cite_spans": [
{
"start": 193,
"end": 218,
"text": "Chakrabarty et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 2",
"ref_id": null
},
{
"start": 768,
"end": 776,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "For brevity, the parameter tuning description for all the models and experiments -discrete feature-based and deep-learning ones (e.g., CRF, BiLSTM, BERT) is in the supplemental material.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers Fine-tuned Models",
"sec_num": "4.3"
},
{
"text": "We present our experiments' results using the CRF, BiLSTM, and BERT models under different settings. We report the individual F1, Accuracy, and Macro-F1 (abbrev. as \"Acc.\" and \"F1\") scores for all the categories in Table 2 and Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 235,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We apply the discrete features (structural, syntactic, lexico-syntactic (\"lexSyn\")) together and individually to the CRF model. We observe the structural and syntactic features do not perform well individually, especially in the case of premise tokens (See Table 5 in Appendix A.3) and therefore, we only report the results of all discrete features (Discrete* in Table 2 ) and individually only the performance of the lexSyn features. noticed that structural features are e\u21b5ective to identify argument components, especially from the introduction and conclusion sections of the college level essays because they contain few argumentatively relevant content. On the contrary, as stated earlier, school student es- says do not always comply with such writing conventions. Table 2 displays that the lexSyn feature independently performs better by almost 8% accuracy than the combination of the other discourse features. This correlates to the findings from prior work on SG2017 where the lexSyn features reached the highest F1 on a similar corpus. Next, we augment the embedding features from the BERT pre-trained model with the discrete features and notice a marginal improvement in the accuracy score (less than 1%) over the performance of lexSyn features. This improvement is achieved from the higher accuracy in detecting the claim terms (e.g., Em-bedding+Discrete* achieves around 17% and 10%, an improvement over Discrete* features in the case of B-Claim and I-Claim, respectively). However, the accuracy of detecting the premise tokens is still significantly low. We assume that this could be due to the low frequency of premises in the training set, which seems to be more challenging for the CRF model to learn useful patterns from the pre-trained embeddings. On the contrary, the O-Arg token is the most frequent in the essays and that is reflected in the overall high accuracy scores for the O-Arg tokens (i.e., over 76% on average).",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 5",
"ref_id": null
},
{
"start": 363,
"end": 370,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 770,
"end": 777,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The overall performance(s) improve when we apply the BiLSTM networks on the test data. Accuracy improves by 5.3% in the case of BiLSTM against the Embeddings+lexSyn features. However, results do not improve when we augment the CRF classifier on top of the LSTM networks (BiLSTM-CRF). Instead, the performance drops by 0.8% accuracy (See Table 2). On related research, Petasis (2019) have conducted extensive experiments with the BiLSTM-CRF architecture with various types of embeddings and demonstrated that only the specific combination of embeddings (e.g., GloVe+Flair+BERT) achieves higher performance than BiLSTM-only architecture, but we leave such experiments for future work.",
"cite_spans": [
{
"start": 368,
"end": 382,
"text": "Petasis (2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "In the case of BERT based experiments, we observe BERT bl , obtains an accuracy of 73% that is comparable to the BiLSTM performance. In terms of the individual categories, we observe BERT bl achieves around 7.5% improvement over the BiLSTM-CRF classifier for the B-Premise tokens. We also observe that the two adaptive-pretrained models (e.g., BERT IMHO and BERT essay ) perform better than the BERT bl where BERT essay achieves the best accuracy of 74.7%, a 2% improvement over BERT bl . Although BERT IMHO was trained on a much larger corpus than BERT essay , we assume since BERT essay was trained on a domain relevant corpus it achieves the highest F1. Likewise, in the case of multitask models, we observe BERT mt performs better than BERT bl by 1.3%. This shows that using argumentative sentence identification as an auxiliary task is beneficial for token-level classification. With regards to the adaptive-pretrained models, akin to the BERT bl based experiments, we observe BERT mtessay perform best by achieving the highest accuracy over 75%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "We choose the five-way token-level classification of argument component over the standard pipeline approach because the standard level of granularity (sentence or clause-based) is not applicable to our training data. In order to test the benefit of the five-way token-level classification, we also compare it against the traditional approach of segmentation of argumentative units into argumentative and non-argumentative tokens. We again follow the standard BIO notation for a three-way token classification setup (B-Arg, I-Arg, and O-Arg) for argument segmentation. In this setup, the B-Claim and B-Premise classes are merged into B-Arg, and I-Claim and I-Premise are merged into I-Arg, while the O-Arg class remains unchanged. The results of all of our models on this task are shown in Table 3 . We notice similar patterns (except for BERT mt IMHO that performs better than BERT mt this time) in this three-way classification task as we saw in the five-way classification. The best model remains to be the BERT mtessay with 77.3% accuracy, which is an improvement of 2-3% over the BiLSTM and other BERT-based architecture.",
"cite_spans": [],
"ref_spans": [
{
"start": 789,
"end": 796,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Argument Segmentation",
"sec_num": null
},
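The merge of the five-way BIO labels into the three-way segmentation labels described above can be written as a simple mapping (a sketch; the label strings follow the paper's notation, and the function name is ours):

```python
# Map the five-way argument-component labels onto the three-way
# argument-segmentation labels (claim/premise distinctions collapse).
FIVE_TO_THREE = {
    "B-Claim": "B-Arg", "B-Premise": "B-Arg",
    "I-Claim": "I-Arg", "I-Premise": "I-Arg",
    "O-Arg": "O-Arg",
}

def collapse_labels(token_labels):
    """Convert a token-level five-way label sequence to three-way."""
    return [FIVE_TO_THREE[label] for label in token_labels]
```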
{
"text": "In summary, we have two main observations from Table 2 and Table 3 . First, the best model in Table 3 reports only about 3% improvement over the result from Table 2 which shows that the five-way token-level classification is comparable against the standard task of argument segmentation. Second, the accuracy of the argument segmentation task is much lower than the accuracy of college-level essay corpus SG2017 reported accuracy of 89.5%). This supports the challenges of analyzing middle school student essays.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 66,
"text": "Table 2 and Table 3",
"ref_id": "TABREF4"
},
{
"start": 94,
"end": 101,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 157,
"end": 164,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Argument Segmentation",
"sec_num": null
},
{
"text": "Since we have explored three separate machine learning approaches with a variety of experi- ments, we analyze the results obtained from the BERT mtessay model that has performed the best (Table 2 ). According to the confusion matrix, there are three major sources of errors: (a) around 2500 \"O-Arg\" tokens are wrongly classified as \"I-Claim\" (b) 2162 \"I-Claim\" tokens are wrongly classified as \"O-Arg\", and (c) 273 \"I-Premise\" tokens are erroneously classified as \"I-Claim\". Here, (a) and (b) are not surprising given these are the two categories with the largest number of tokens. For (c) we looked at a couple of examples, such as \"because of [Walmart 's goal of saving money] premise , [customers see value in Walmart that is absent from other retailers] claim \". Here, the premise tokens are wrongly classified as O-Arg tokens. This is probably because the premise appears before the claim, which is uncommon in our training set. We notice some of the other sources of errors, and we discuss them as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "(Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
{
"text": "non-arguments classified as arguments: This error occurs often, but it is more challenging for opinions or hypothetical examples that resemble arguments but are not necessarily arguments. For instance, the opinion \"that actually makes me feel good afterward . . . \" and the hypothetical example \"Next , you will not be eating due to your lack of money\" are similar to an argument, and the classifier erroneously classifies them as claim. In the future, we plan to include the labeled opinions during training to investigate how the model(s) handle opinions vs. arguments during the classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
{
"text": "missing multiple-claims from a sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
{
"text": "In many examples, we observe multiple claims appear in a single sentence, such as: \"[Some coral can recover from this] claim though [for most it is the final straw .] claim \". During prediction, the model predicts the first claim correctly but then starts the second claim with an \"I-Claim\" label, which is an impossible transition from \"Arg-O\" (i.e., does not enforce well-formed spans). Besides, the model starts the second claim wrongly at the word \"most\" rather than \"for\". This indicates the model's inability to distinguish discourse markers such as \"though\" as potential separators between argument components. This could be explained by the fact that markers such as \"though\" or \"because\" are frequently part of an argument claim. Such as, in \"[those games do not seem as violent even though they are at the same level] claim \", \"though\" is labeled as \"I-Claim'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
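Ill-formed transitions of this kind (an I- tag following O-Arg) could be repaired by a post-processing pass that promotes stranded I- tags to B- tags; a hypothetical sketch, not a step the paper applies:

```python
def repair_bio(labels):
    """Promote an I-X tag that follows O-Arg (or a different
    component type) to B-X, so every predicted span starts
    with a B- tag and the sequence is well-formed BIO."""
    fixed, prev = [], "O-Arg"
    for label in labels:
        if label.startswith("I-"):
            component = label[2:]
            if prev == "O-Arg" or prev[2:] != component:
                label = "B-" + component
        fixed.append(label)
        prev = label
    return fixed
```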
{
"text": "investigating run-on sentences: Some sentences contain multiple claims, which are written as one sentence via a comma-splice run-on such as \"[Humans in today 's world do not care about the consequences] claim , [only the money they may gain .] claim \" which has two claims in the gold annotations but it was predicted as one long claim by our best model. Another example is \"[The oceans are also another dire need in today's environment] claim , each day becoming more filled with trash and plastics.\", in which the claim is predicted correctly in addition to another predicted claim starting at the word \"each\". The model tends to over predicts claims when a comma comes in the middle of the sentence followed by a noun. However, in the former example, the adverb \"only\" that has a \"B-Claim\" label follows the comma rather than the more frequent nouns. Such instances add more complexity to understand and model argument structures in middle school student writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
{
"text": "e\u21b5ect of the multitask learning: We examined the impact of multitask learning and notice two characteristics. First, as expected, the multitask model can identify claims and premises that are missed by the single task model(s), such as: \"[many more negative effects that come with social media . . . \"] claim \" that was correctly identified by the multitask model. Second, the clever handling of the back-propagation helps the multitask model to reduce false positives to be more precise. Many non-argumentative sentences, such as: \"internet's social networks help teens find communities . . . \" and opinions, such as: \"take $1.3 billion o\u21b5 $11.3 billion the NCAA makes and give it to players\" are wrongly classified as claims by the single task models but are correctly classified as non-argumentative by the multitask model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.1"
},
{
"text": "We conduct a token-level classification task to identify the type of the argument component tokens (e.g., claims and premises) by combining the argument segmentation and component identification in one single task. We perused a new corpus collected from essays written by middle school students. W Our findings show that a multitask BERT performs the best with an absolute gain of 7.5% accuracy over the discrete features. We also conducted an in-depth comparison against the standard segmentation step (i.e., classifying the argumentative vs. nonargumentative units) and proposed a thorough qualitative analysis. Middle school student essays often contain run-on sentences or unsupported claims that make the task of identifying argument components much harder. We achieve the best performance using a multitask framework with an adaptive pretrained model, and we plan to continue to augment other tasks (e.g., opinion and stance identification) under a similar multitask framework . We plan to generate personalized feedback for the students (e.g., which are the supported claims in the essay?) that is useful in automated writing assistance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A.1 Parameter Tuning CRF experiment: For the CRF model, we search over the two regularization parameters c1 and c2 by sampling from exponential distributions with 0.5 scale for c1 and 0.05 scale for c2 using a 3 cross-validation over 50 iterations, which takes about 20 minutes of run-time. The final values are 0.8 for c1 and 0.05 for c2 for the best CRF model that uses LexSyn and BERT embeddings features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
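The randomized sampling of c1 and c2 can be sketched with the standard library (the paper uses sklearn-crfsuite with scipy's exponential distributions; the sketch below mirrors only the sampling step, and the function name is ours):

```python
import random

def sample_crf_params(n_iter=50, seed=0):
    """Draw (c1, c2) candidates from exponential distributions
    with scales 0.5 and 0.05, as in the randomized search above.
    Note random.expovariate takes a rate, i.e., 1/scale."""
    rng = random.Random(seed)
    return [(rng.expovariate(1 / 0.5), rng.expovariate(1 / 0.05))
            for _ in range(n_iter)]
```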
{
"text": "BiLSTM experiment: For BiLSTM networks based experiments we searched the hyper parameters over the dev set. Particularly we experimented with di\u21b5erent mini-batch size (e.g., 16, 32), dropout value (e.g., 0.1, 0.3, 0.5, 0.7), number of epochs (e.g., 40, 50, 100 with early stopping), hidden state of sized-vectors (256). Embeddings were generated using BERT (\"bert-base-uncased\") (768 dimensions). After tuning we use the following hyper-parameters for the test set: mini-batch size of 32, number of epochs = 100 (stop between 30-40 epochs), and dropout value of 0.1. The model has one BiLSTM layer with size 256 of the hidden layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "BERT based models: We use the dev partition for hyperparameter tuning (batch size of 8, 16, 32, 48), run for 3,5,6 epochs, learning rate of 3e-5) and optimized networks with the Adam optimizer. The training partitions were fine-tuned for 5 epochs with batch size = 16. Each training epoch took between 08:46 \u21e0 9 minutes over a K-80 GPU with 48GB vRAM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "We show below the results of using each of the three feature groups individually: structural, syntactic and lexical-syntactic. As mentioned in the results section of the paper, we can see below that the structural and syntactic features do not do well when used individually. Therefore, they were excluded from further ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Results of Discourse Feature Groups",
"sec_num": null
},
{
"text": "https://mentormywriting.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/EducationalTestingService/ argument-component-essays",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Other metadata reveal that middle school students write these essays. However, we did not use any such information while annotating the essays.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Definitions are from and Argument: Claims, Reasons, Evidence -Department of Communication, University of Pittsburgh (https://bit.ly/396Ap3H)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sklearn-crfsuite.readthedocs.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Tuhin Chakrabarty, Elsbeth Turcan, Smaranda Muresan and Jill Burstein for their helpful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Annotation and classification of evidence and reasoning revisions in argumentative writing",
"authors": [
{
"first": "Tazin",
"middle": [],
"last": "Afrin",
"suffix": ""
},
{
"first": "Elaine",
"middle": [
"Lin"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "Lindsay",
"middle": [
"Clare"
],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Correnti",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "75--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tazin Afrin, Elaine Lin Wang, Diane Litman, Lind- say Clare Matsumura, and Richard Correnti. 2020. Annotation and classification of evidence and reasoning revisions in argumentative writ- ing. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educa- tional Applications, pages 75-84, Seattle, WA, USA \u2192 Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling frames in argumentation",
"authors": [
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Milad",
"middle": [],
"last": "Alshomary",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2922--2932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein. 2019. Modeling frames in argumentation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2922-2932, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unit segmentation of argumentative texts",
"authors": [
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Wei-Fan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "118--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamen Ajjour, Wei-Fan Chen, Johannes Kiesel, Henning Wachsmuth, and Benno Stein. 2017. Unit segmentation of argumentative texts. In Proceedings of the 4th Workshop on Argument Mining, pages 118-128.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FLAIR: An easy-to-use framework for state-of-the-art NLP",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Rasul",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schweter",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Voll- graf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguis- tics (Demonstrations), pages 54-59, Minneapo- lis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A news editorial corpus for mining argumentation strategies",
"authors": [
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3433--3443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalid Al-Khatib, Henning Wachsmuth, Johannes Kiesel, Matthias Hagen, and Benno Stein. 2016. A news editorial corpus for mining argumenta- tion strategies. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 3433-3443, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated essay scoring with e-rater v.2",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2006,
"venue": "The Journal of Technology, Learning and Assessment",
"volume": "4",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater v.2. The Journal of Technology, Learning and Assessment, 4(3).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A developmental writing scale",
"authors": [
{
"first": "Yigal",
"middle": [],
"last": "Attali",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Powers",
"suffix": ""
}
],
"year": 2008,
"venue": "ETS Research Report Series",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yigal Attali and Don Powers. 2008. A developmen- tal writing scale. ETS Research Report Series, 2008(1):i-59.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron?",
"authors": [
{
"first": "Binod",
"middle": [],
"last": "Beata Beigman Klebanov",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Gyawali",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "244--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Binod Gyawali, and Yi Song. 2017. Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 244-249, Vancou- ver, Canada. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying justifications in written dialogs",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE Fifth International Conference on Semantic Computing",
"volume": "",
"issue": "",
"pages": "162--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Owen Rambow. 2011. Identify- ing justifications in written dialogs. In 2011 IEEE Fifth International Conference on Seman- tic Computing, pages 162-168. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Derrick Higgins, Aoife Cahill, and Martin Chodorow",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Blanchard",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2013,
"venue": "ETS Research Report Series",
"volume": "2013",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Blanchard, Joel Tetreault, Derrick Hig- gins, Aoife Cahill, and Martin Chodorow. 2013. Toefl11: A corpus of non-native english. ETS Research Report Series, 2013(2):i-15.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "IMHO fine-tuning improves claim detection",
"authors": [
{
"first": "Tuhin",
"middle": [],
"last": "Chakrabarty",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Hidey",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "558--563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuhin Chakrabarty, Christopher Hidey, and Kathy McKeown. 2019. IMHO fine-tuning improves claim detection. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 558-563, Minneapolis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Head-driven statistical models for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational linguistics",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2003. Head-driven statistical mod- els for natural language parsing. Computational linguistics, 29(4):589-637.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Automated scoring of students' use of text evidence in writing",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Correnti",
"suffix": ""
},
{
"first": "Lindsay",
"middle": [
"Clare"
],
"last": "Matsumura",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "Zahra",
"middle": [],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Zahid",
"middle": [],
"last": "Kisa",
"suffix": ""
}
],
"year": 2020,
"venue": "Reading Research Quarterly",
"volume": "55",
"issue": "3",
"pages": "493--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Correnti, Lindsay Clare Matsumura, Elaine Wang, Diane Litman, Zahra Rahimi, and Zahid Kisa. 2020. Automated scoring of stu- dents' use of text evidence in writing. Reading Research Quarterly, 55(3):493-520.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "What is the essence of a claim? cross-domain claim identification",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Ste\u21b5en",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2055--2066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Daxenberger, Ste\u21b5en Eger, Ivan Haber- nal, Christian Stab, and Iryna Gurevych. 2017. What is the essence of a claim? cross-domain claim identification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2055-2066.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using writing process and product features to assess writing quality and explore how those features relate to other literacy tasks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Deane",
"suffix": ""
}
],
"year": 2014,
"venue": "ETS Research Report Series",
"volume": "",
"issue": "1",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Deane. 2014. Using writing process and prod- uct features to assess writing quality and explore how those features relate to other literacy tasks. ETS Research Report Series, (1):1-23.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapo- lis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural end-to-end learning for computational argumentation mining",
"authors": [
{
"first": "Ste\u21b5en",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "11--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ste\u21b5en Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Pro- ceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 11-22.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Scoring persuasive essays using opinions and their targets",
"authors": [
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Burstein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "64--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noura Farra, Swapna Somasundaran, and Jill Burstein. 2015. Scoring persuasive essays using opinions and their targets. In Proceedings of the Workshop on Innovative Use of NLP for Build- ing Educational Applications, pages 64-74.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Classifying arguments by scheme",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Wei Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "987--996",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vanessa Wei Feng and Graeme Hirst. 2011. Clas- sifying arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 987-996, Portland, Oregon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An exploratory study of argumentative writing by young students: A transformer-based approach",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "145--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Beata Beigman Klebanov, and Yi Song. 2020. An exploratory study of ar- gumentative writing by young students: A transformer-based approach. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 145-150, Seattle, WA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Coarse-grained argumentation features for scoring persuasive essays",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Aquila",
"middle": [],
"last": "Khanam",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "549--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argu- mentation features for scoring persuasive essays. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 549-554, Berlin, Germany. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Analyzing argumentative discourse units in online interactions",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Aakhus",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mitsui",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the first workshop on argumentation mining",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Smaranda Muresan, Nina Wa- cholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In Proceedings of the first workshop on argumentation mining, pages 39- 48.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8342--8360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Argument relation classification using a joint inference model",
"authors": [
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Jochim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "60--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yufang Hou and Charles Jochim. 2017. Argu- ment relation classification using a joint infer- ence model. In Proceedings of the 4th Work- shop on Argument Mining, pages 60-66, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Argument generation with retrieval, planning, and realization",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2661--2672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argu- ment generation with retrieval, planning, and realization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2661-2672, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multi-task learning using uncertainty to weigh losses for scene geometry and semantics",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Kendall",
"suffix": ""
},
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Cipolla",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "7482--7491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on com- puter vision and pattern recognition, pages 7482- 7491.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Measuring the reliability of qualitative text analysis data. Quality and quantity",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendor\u21b5",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "38",
"issue": "",
"pages": "787--800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendor\u21b5. 2004. Measuring the reliabil- ity of qualitative text analysis data. Quality and quantity, 38:787-800.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "La\u21b5erty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John La\u21b5erty, Andrew McCallum, and Fer- nando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Argument mining: A survey",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics",
"volume": "45",
"issue": "4",
"pages": "765--818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Contextindependent claim detection for argument mining",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lippi and Paolo Torroni. 2015. Context- independent claim detection for argument min- ing. In Twenty-Fourth International Joint Con- ference on Artificial Intelligence.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Annotating student talk in text-based classroom discussions",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Lugini",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Godley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Olshefski",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "110--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luca Lugini, Diane Litman, Amanda Godley, and Christopher Olshefski. 2018. Annotating stu- dent talk in text-based classroom discussions. In Proceedings of the Thirteenth Workshop on In- novative Use of NLP for Building Educational Applications, pages 110-116.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Identifying highlevel organizational elements in argumentative discourse",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "20--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Michael Heilman, Joel Tetreault, and Martin Chodorow. 2012. Identifying high- level organizational elements in argumentative discourse. In Proceedings of the 2012 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 20-28, Montr\u00e9al, Canada. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Argumentation mining",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2011,
"venue": "Artificial Intelligence and Law",
"volume": "19",
"issue": "1",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raquel Mochales and Marie-Francine Moens. 2011. Argumentation mining. Artificial Intelligence and Law, 19(1):1-22.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Automatic detection of arguments in legal texts",
"authors": [
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Boiy",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Mochales Palau",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Reed",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 11th international conference on Artificial intelligence and law",
"volume": "",
"issue": "",
"pages": "225--230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Francine Moens, Erik Boiy, Raquel Mochales Palau, and Chris Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th interna- tional conference on Artificial intelligence and law, pages 225-230.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Contextaware argumentative relation mining",
"authors": [
{
"first": "Huy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1127--1137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huy Nguyen and Diane Litman. 2016. Context- aware argumentative relation mining. In Pro- ceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 1: Long Papers), pages 1127-1137.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Argument mining for improving the automated scoring of persuasive essays",
"authors": [
{
"first": "Huy",
"middle": [
"V"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Diane",
"middle": [
"J"
],
"last": "Litman",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huy V Nguyen and Diane J Litman. 2018. Argu- ment mining for improving the automated scor- ing of persuasive essays. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Argument mining with structured SVMs and RNNs",
"authors": [
{
"first": "Vlad",
"middle": [],
"last": "Niculae",
"suffix": ""
},
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "985--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vlad Niculae, Joonsuk Park, and Claire Cardie. 2017. Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985- 995, Vancouver, Canada. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Argumentation mining: The detection, classification and structure of arguments in text",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Mochales Palau",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09",
"volume": "",
"issue": "",
"pages": "98--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raquel Mochales Palau and Marie-Francine Moens. 2009. Argumentation mining: The detection, classification and structure of arguments in text. In Proceedings of the 12th International Confer- ence on Artificial Intelligence and Law, ICAIL '09, page 98-107, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Identifying appropriate support for propositions in online user comments",
"authors": [
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joonsuk Park and Claire Cardie. 2014. Identify- ing appropriate support for propositions in on- line user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29- 38, Baltimore, Maryland. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "From argument diagrams to argumentation mining in texts: A survey",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Cognitive Informatics and Natural Intelligence (IJCINI)",
"volume": "7",
"issue": "1",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Peldszus and Manfred Stede. 2013. From argument diagrams to argumentation mining in texts: A survey. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 7(1):1-31.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Joint prediction in MST-style discourse parsing for argumentation mining",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "938--948",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Peldszus and Manfred Stede. 2015. Joint prediction in MST-style discourse parsing for argumentation mining. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 938-948, Lis- bon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Modeling prompt adherence in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1534--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceed- ings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pages 1534-1543.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Modeling argument strength in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "543--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Pro- ceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "End-to-end argumentation mining in student essays",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Persing",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1384--1394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Pro- ceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, pages 1384-1394.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Segmentation of argumentative texts with contextualised word representations",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Petasis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios Petasis. 2019. Segmentation of argumen- tative texts with contextualised word represen- tations. In Proceedings of the 6th Workshop on Argument Mining, pages 1-10, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Here's my point: Joint pointer architecture for argument mining",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Potash",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1364--1373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. Here's my point: Joint pointer architecture for argument mining. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1364-1373.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Applying kernel methods to argumentation mining",
"authors": [
{
"first": "Niall",
"middle": [],
"last": "Rooney",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fiona",
"middle": [],
"last": "Browne",
"suffix": ""
}
],
"year": 2012,
"venue": "FLAIRS Conference",
"volume": "172",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niall Rooney, Hui Wang, and Fiona Browne. 2012. Applying kernel methods to argumentation min- ing. In FLAIRS Conference, volume 172.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Multi-task learning for argumentation mining in low-resource settings",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Ste\u21b5en",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Kahse",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "35--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Schulz, Ste\u21b5en Eger, Johannes Daxen- berger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-task learning for argumentation mining in low-resource settings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 35-41, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Challenges in the automatic analysis of students' diagnostic reasoning",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6974--6981",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Schulz, Christian M Meyer, and Iryna Gurevych. 2019. Challenges in the automatic analysis of students' diagnostic reasoning. In Proceedings of the AAAI Conference on Artifi- cial Intelligence, volume 33, pages 6974-6981.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Evaluating argumentative and narrative essays using graphs",
"authors": [
{
"first": "Swapna",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Riordan",
"suffix": ""
},
{
"first": "Binod",
"middle": [],
"last": "Gyawali",
"suffix": ""
},
{
"first": "Su-Youn",
"middle": [],
"last": "Yoon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1568--1578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Swapna Somasundaran, Brian Riordan, Binod Gyawali, and Su-Youn Yoon. 2016. Evalu- ating argumentative and narrative essays us- ing graphs. In Proceedings of COLING 2016, the 26th International Conference on Compu- tational Linguistics: Technical Papers, pages 1568-1578.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Applying argumentation schemes for essay scoring",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Beata",
"middle": [
"Beigman"
],
"last": "Klebanov",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Deane",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Workshop on Argumentation Mining",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Song, Michael Heilman, Beata Beigman Kle- banov, and Paul Deane. 2014. Applying argu- mentation schemes for essay scoring. In Proceed- ings of the First Workshop on Argumentation Mining, pages 69-78.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Identifying argumentative discourse structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "46--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2014. Identi- fying argumentative discourse structures in per- suasive essays. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 46-56.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Parsing argumentation structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "3",
"pages": "619--659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2017. Pars- ing argumentation structures in persuasive es- says. Computational Linguistics, 43(3):619-659.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Argumentation mining",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
},
{
"first": "Jodi",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2018,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "11",
"issue": "2",
"pages": "1--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Stede and Jodi Schneider. 2018. Argu- mentation mining. Synthesis Lectures on Hu- man Language Technologies, 11(2):1-191.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Annotating multiparty discourse: Challenges for agreement metrics",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Aakhus",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LAW VIII-The 8th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Wacholder, Smaranda Muresan, Debanjan Ghosh, and Mark Aakhus. 2014. Annotat- ing multiparty discourse: Challenges for agree- ment metrics. In Proceedings of LAW VIII-The 8th Linguistic Annotation Workshop, pages 120- 128.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Using argument mining to assess the argumentation quality of essays",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1680--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Khalid Al-Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguis- tics: Technical Papers, pages 1680-1691, Osaka, Japan. The COLING 2016 Organizing Commit- tee.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Work performed during internship at ETSShould Artificial Sweeteners be Banned in America?",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Artificial sweeteners have a huge impact on current day society. Legends: B-Claim I-Claim B-Premise I-Premise O-Arg Figure 1: Excerpt from an annotated essay with Claim Premise segments in BIO notation",
"num": null,
"uris": null
},
"TABREF1": {
"text": "Token counts of each category in the training, dev, and test sets of ARG2020",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF6": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>: F1 scores for Argument Token Detection</td></tr><tr><td>on the test set. Underlined: highest Accuracy/F1</td></tr><tr><td>in group. Bold: highest Accuracy/F1 overall.</td></tr></table>"
},
"TABREF8": {
"text": "Accuracy and F1 scores for Claim and Premise Token Detection on the test set for each group of the discrete features in the CRF model. experimentation with BERT embeddings. Only the LexSyn features were tested individually with the embeddings.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}