| { |
| "paper_id": "S12-1006", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:24:11.549657Z" |
| }, |
| "title": "The Use of Granularity in Rhetorical Relation Prediction", |
| "authors": [ |
| { |
| "first": "Blake", |
| "middle": [ |
| "Stephen" |
| ], |
| "last": "Howald", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ultralingua, Inc", |
| "location": { |
| "addrLine": "1313 SE Fifth Street, Suite 108 Minneapolis", |
| "postCode": "55414", |
| "region": "MN" |
| } |
| }, |
| "email": "howald@ultralingua.com" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Abramson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ultralingua, Inc", |
| "location": { |
| "addrLine": "1313 SE Fifth Street, Suite 108 Minneapolis", |
| "postCode": "55414", |
| "region": "MN" |
| } |
| }, |
| "email": "abramson@ultralingua.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present the results of several machine learning tasks designed to predict rhetorical relations that hold between clauses in discourse. We demonstrate that organizing rhetorical relations into different granularity categories (based on relative degree of detail) increases average prediction accuracy from 58% to 70%. Accuracy further increases to 80% with the inclusion of clause types. These results, which are competitive with existing systems, hold across several modes of written discourse and suggest that features of information structure are an important consideration in the machine learnability of discourse.", |
| "pdf_parse": { |
| "paper_id": "S12-1006", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present the results of several machine learning tasks designed to predict rhetorical relations that hold between clauses in discourse. We demonstrate that organizing rhetorical relations into different granularity categories (based on relative degree of detail) increases average prediction accuracy from 58% to 70%. Accuracy further increases to 80% with the inclusion of clause types. These results, which are competitive with existing systems, hold across several modes of written discourse and suggest that features of information structure are an important consideration in the machine learnability of discourse.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "The rhetorical relations that hold between clauses in discourse index temporal and event information and contribute to a discourse's pragmatic coherence (Hobbs, 1985). For example, in (1) the NARRATION relation holds between (1a) and (1b) as (1b) temporally follows (1a) at event time.",
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 166, |
| "text": "(Hobbs, 1985)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) a. Pascale closed the toy chest.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "b. She walked to the gate. c. The gate was locked securely. d. So she couldn't get into the kitchen.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The ELABORATION relation, describing the surrounding state of affairs, holds between (1b) and (1c). (1c) is temporally inclusive (subordinated) with (1b) and there is no temporal progression at event time. The RESULT relation holds between (1bc) and (1d). (1d) follows (1b) and its subordinated ELABORATION relation (1c) at event time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Additional pragmatic information is encoded in these relations in terms of granularity. Granularity refers to the relative increases or decreases in the level of described detail. For example, moving from (1b) to (1c), we learn more information about the gate via the ELABORATION relation. Also, moving from (1b-c) to (1d) there is a consolidation of information associated with the RESULT relation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Through several supervised machine learning tasks, we investigate the degree to which granularity (as well as additional elements of discourse structure (e.g. tense, aspect, event)) serves as a viable organization and predictor of rhetorical relations in a range of written discourses. This paper is organized as follows. Section 2 reviews prior research on rhetorical relations, discourse structure, granularity and prediction. Section 3 discusses the analyzed data, the selection and annotation of features, and the construction of several machine learning tasks. Section 4 provides the results which are then discussed in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Rhetorical relation prediction has received considerable attention and has been shown to be useful for text summarization (Marcu, 1998). Prediction tasks rely on a number of features (discourse connectives, part of speech, etc.) (Marcu and Echihabi, 2002; Lapata and Lascarides, 2004). A wide range of accuracies is also reported: from 33.96% (Marcu and Echihabi, 2002) to 70.70% (Lapata and Lascarides, 2004) for all rhetorical relations and, for individual relations, CONTRAST (43.64%) and CONTINUATION (83.35%) (Sporleder and Lascarides, 2005).",
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 135, |
| "text": "(Marcu, 1998)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 230, |
| "end": 256, |
| "text": "(Marcu and Echihabi, 2002;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 257, |
| "end": 285, |
| "text": "Lapata and Lascarides, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 341, |
| "end": 367, |
| "text": "(Marcu and Echihabi, 2002)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 378, |
| "end": 407, |
| "text": "(Lapata and Lascarides, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 513, |
| "end": 545, |
| "text": "(Sporleder and Lascarides, 2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
"text": "We seek to predict the inventory of rhetorical relations defined in Segmented Discourse Representation Theory (\"SDRT\") (Asher and Lascarides, 2003). In addition to the relations illustrated in (1), we consider: BACKGROUND: It was Christmas. Pascale got a new toy.; EXPLANATION: The aardvark was dirty. It fell into a puddle.; CONSEQUENCE: If the aardvark fell in the puddle, then it got dirty.; ALTERNATION: Pascale got an aardvark or a stuffed bunny.; and CONTINUATION: Pascale got an aardvark. Grimsby got a rawhide.",
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 147, |
| "text": "(Asher and Lascarides, 2003)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
"text": "Discourses were selected based on Smith (2003), who defines five primary discourse modes by: (1) the situations (events and states) they describe; (2) the overarching temporality (tense, aspect); and (3) the type of text progression (temporal: text and event time progression are similar; atemporal: text and event time progression are not similar). These contrastive elements inform the features selected for the machine learning tasks discussed in Section 3.2. The five modes, narrative, report (news articles), description (recipes), information (scientific essays), and argument (editorials), were selected to ensure a balanced range of theoretically supported discourse types.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Granularity in discourse refers to the relative degree of detail. The higher the level of detail, the more informative the discourse is. We assume that there will be some pragmatic constraints on the informativeness of a discourse (e.g., consistent with Grice's (1975) Maxim of Quantity). For our purposes, we rely specifically on granularity as defined in Mulkar-Mehta et al. (2011) (\"MM\") who characterize granularity in terms of entities and events.", |
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 268, |
| "text": "Grice's (1975)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Granularity of Information", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "To illustrate, consider (2) where the rhetorical structure indicates that (2b) is an ELABORATION of (2a), the NARRATION relation holds between (2b) and (2c) and (2c) and (2d), and the RESULT relation between (2d) and (2e).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Granularity of Information", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(2) a. The Pittsburgh Steelers needed to win.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Granularity of Information", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "b. Batch took the first snap. c. Then he threw the ball into the endzone. d. Ward caught the ball. e. A touchdown was scored.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Granularity of Information", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Entities and events can stand in part-whole and causality relationships with entities and events in subsequent clauses. A positive granularity shift indicates movement from whole to part (more detail); e.g., Batch (2b) is a part of the whole Pittsburgh Steelers (2a). A negative granularity shift indicates movement from part to whole (less detail), or occurs when one event causes a subsequent event (an event caused by a subsequent event is a positive shift); e.g., Ward's catching of the ball (2d) caused the scoring of the touchdown (2e). Maintained granularities (not considered by MM) are illustrated in (2b-c) and (2c-d). Clauses (2b) through (2d) are temporally linked events, but there is no part-whole shift in, nor a causal relationship between, the entities or events; the granularity remains the same. We maintain that there is a close relationship between rhetorical relations and granularity. Consequently, rhetorical relations can be organized as follows: positive: BACKGROUND, ELABORATION, EXPLANATION; negative: CONSEQUENCE, RESULT; and maintained: ALTERNATION, CONTINUATION, NARRATION. The machine learning tasks discussed in the remainder of the paper consider this information in the prediction of rhetorical relations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Granularity of Information", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Five written discourses of similar sentence length were selected from each mode, for 25 total discourses. The discourses were segmented into independent or dependent (subordinate) clauses, noting whether a clause contained a discourse marker (but, however) and whether it was embedded in the sentence as provided in the original written discourse (e.g., John, who is the director of NASA, gave a speech on Friday). The total number of clauses is 1090, averaging 43.6 clauses per discourse (\u03c3=7.2).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Methods", |
| "sec_num": "3" |
| }, |
| { |
"text": "For prediction, we use a feature set distilled from Smith's classification of discourses: TENSE and ASPECT; EVENT (from the TimeML annotation scheme (Pustejovsky et al., 2005): Aspectual, Occurrence, States, etc.); SEQUENCE information, as the clause position normalized to the unit interval; and discourse MODE. We also include CLAUSE type: independent (IC) or dependent (DC) clauses, with (M) or without a discourse marker (Asher and Lascarides, 2003), summarized in Table 1. All 25 discourses were annotated by one of the authors using only a reference sheet. The other author independently coded 80% of the data (20 discourses, four from each mode). Average agreement and Cohen's Kappa (Cohen, 1960) were calculated for each feature. These results are consistent with previously reported annotations for rhetorical relations (Sporleder and Lascarides, 2005; Howald and Katz, 2011), event verbs and durations, and tense and aspect (Puscasu and Mititelu, 2008; Wiebe et al., 1997). Positive, negative, and maintained granularities were not annotated, but MM report a Kappa between .8500 and 1. The distribution of these granularities, based on the organization of the annotated rhetorical relations, is presented in Table 1.",
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 176, |
"text": "(Pustejovsky et al., 2005)",
| "ref_id": null |
| }, |
| { |
| "start": 438, |
| "end": 466, |
| "text": "(Asher and Lascarides, 2003)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 705, |
| "end": 718, |
| "text": "(Cohen, 1960)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 810, |
| "end": 841, |
| "text": "(Sporleder and Lascarides, 2005", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 912, |
| "end": 940, |
| "text": "(Puscasu and Mititelu, 2008;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 941, |
| "end": 960, |
| "text": "Wiebe et al., 1997)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 483, |
| "end": 490, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature Annotation", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Three supervised machine learning tasks were constructed to predict SDRT relations. The first task (Uncollapsed) created an 8-way classifier to predict the SDRT relations based on the feature set, omitting the GRANULARITY feature. The second task (Collapsed) created a 3-way classifier to predict the GRANULARITY categories (the SDRT feature was omitted). The third task (Combined) included the GRANULARITY feature back in the Uncollapsed 8-way classifier. We utilized the WEKA toolkit (Witten and Frank, 2005) and treated each clause as a vector of information (SDRT, EVENT, TENSE, ASPECT, SEQUENCE, CLAUSE, MODE, GRANULARITY), illustrated in (3): (3) a. The Pittsburgh Steelers needed to win. [START, State, Pa., N, .200, IC, NA, start] b. Batch took the first snap. [ELAB., Occ., Pa., N, .400, IC, NA, pos.] c. Then he threw the ball into the endzone. [NAR., Asp., Pa., N, .600, NA, main.] d. Ward caught the ball. [NAR., Occ., Pa., N, .800, IC, NA, main.] e. A touchdown was scored. [RESULT, Occ., Pa., Perf., 1.00, IC, NA, neg.]",
| "cite_spans": [ |
| { |
| "start": 489, |
| "end": 513, |
| "text": "(Witten and Frank, 2005)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 700, |
| "end": 706, |
| "text": "START,", |
| "ref_id": null |
| }, |
| { |
| "start": 707, |
| "end": 713, |
| "text": "State,", |
| "ref_id": null |
| }, |
| { |
| "start": 714, |
| "end": 718, |
| "text": "Pa.,", |
| "ref_id": null |
| }, |
| { |
| "start": 719, |
| "end": 721, |
| "text": "N,", |
| "ref_id": null |
| }, |
| { |
| "start": 722, |
| "end": 727, |
| "text": ".200,", |
| "ref_id": null |
| }, |
| { |
| "start": 728, |
| "end": 731, |
| "text": "IC,", |
| "ref_id": null |
| }, |
| { |
| "start": 732, |
| "end": 735, |
| "text": "NA,", |
| "ref_id": null |
| }, |
| { |
| "start": 736, |
| "end": 741, |
| "text": "start", |
| "ref_id": null |
| }, |
| { |
| "start": 779, |
| "end": 784, |
| "text": "Occ.,", |
| "ref_id": null |
| }, |
| { |
| "start": 785, |
| "end": 789, |
| "text": "Pa.,", |
| "ref_id": null |
| }, |
| { |
| "start": 790, |
| "end": 792, |
| "text": "N,", |
| "ref_id": null |
| }, |
| { |
| "start": 793, |
| "end": 798, |
| "text": ".400,", |
| "ref_id": null |
| }, |
| { |
| "start": 799, |
| "end": 802, |
| "text": "IC,", |
| "ref_id": null |
| }, |
| { |
| "start": 803, |
| "end": 806, |
| "text": "NA,", |
| "ref_id": null |
| }, |
| { |
| "start": 807, |
| "end": 811, |
| "text": "pos.", |
| "ref_id": null |
| }, |
| { |
| "start": 862, |
| "end": 867, |
| "text": "Asp.,", |
| "ref_id": null |
| }, |
| { |
| "start": 868, |
| "end": 872, |
| "text": "Pa.,", |
| "ref_id": null |
| }, |
| { |
| "start": 873, |
| "end": 875, |
| "text": "N,", |
| "ref_id": null |
| }, |
| { |
| "start": 876, |
| "end": 881, |
| "text": ".600,", |
| "ref_id": null |
| }, |
| { |
| "start": 882, |
| "end": 885, |
| "text": "NA,", |
| "ref_id": null |
| }, |
| { |
| "start": 886, |
| "end": 891, |
| "text": "main.", |
| "ref_id": null |
| }, |
| { |
| "start": 923, |
| "end": 928, |
| "text": "Occ.,", |
| "ref_id": null |
| }, |
| { |
| "start": 929, |
| "end": 933, |
| "text": "Pa.,", |
| "ref_id": null |
| }, |
| { |
| "start": 934, |
| "end": 936, |
| "text": "N,", |
| "ref_id": null |
| }, |
| { |
| "start": 937, |
| "end": 942, |
| "text": ".800,", |
| "ref_id": null |
| }, |
| { |
| "start": 943, |
| "end": 946, |
| "text": "IC,", |
| "ref_id": null |
| }, |
| { |
| "start": 947, |
| "end": 950, |
| "text": "NA,", |
| "ref_id": null |
| }, |
| { |
| "start": 951, |
| "end": 956, |
| "text": "main.", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Machine Learning", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "We report results from the Na\u00efve Bayes (NB), J48 (C4.5 decision tree (Quinlan, 1993)) and K* (Cleary and Trigg, 1995) classifiers, run with 10-fold cross-validation. Table 2 indicates that the best average accuracy for the Uncollapsed task is 58.99 (J48). The accuracy increases to 70.18 (K*) for the Collapsed task, and further to 80.00 (NB) for the Combined task. All accuracies are statistically significant over majority class baselines (\"MCB\"): Uncollapsed (MCB = 35), \u03c7\u00b2 = 15.11, d.f. = 0, p \u2264 .001; Collapsed (MCB = 41), \u03c7\u00b2 = 20.51, d.f. = 0, p \u2264 .001; and Combined (treating the best Collapsed accuracy as the new baseline, MCB = 70), \u03c7\u00b2 = 1.43, d.f. = 0, p \u2264 .001.",
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 84, |
| "text": "(Quinlan, 1993)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 94, |
| "end": 118, |
| "text": "(Cleary and Trigg, 1995)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 171, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Machine Learning", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "As shown in Table 3, based on the NB 8-way Combined classifier, the prediction accuracies of the individual modes are no more than 12 percentage points off the average (80.00). Accuracies range from 68% for A(rgument) (\u03c3=-12) to 92% for D(escription) (\u03c3=+12), with N(arrative), R(eport), and I(nformation) closest to the average (\u03c3=-6 to -8). For individual relation predictions, NARRATION, RESULT, and BACKGROUND have the highest total accuracies, followed by ELABORATION and CONTRAST. Performing less well are CONSEQUENCE, ALTERNATION, and CONTINUATION, with EXPLANATION performing the worst. All accuracies are statistically significant above baseline (\u03c7\u00b2 = 341.89, d.f. = 7, p \u2264 .001).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "Using the Collapsed performance as a baseline for the Combined classifier, we discuss the features contributing to the 10 percentage point increase, as well as the optimal (minimal) set of features for prediction. The best accuracies for the Combined experiment require only CLAUSE and GRANULARITY information, achieving 79.08% (NB; 44 points above MCB, f-score = .750). Both CLAUSE and GRANULARITY are necessary. Relying only on CLAUSE achieves a 48.25% accuracy (J48), and relying only on GRANULARITY achieves 70.36% for all classifiers, but this higher accuracy is an artifact of the organization, as evidenced by the f-score (.585).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
"text": "The relationship between CLAUSE and the rhetorical relations is straightforward. For example, the CONSEQUENCE relation is often an \"intersentential\" relation (if the aardvark fell in the puddle, then it got dirty); each of the 16 CONSEQUENCE relations is embedded. Similarly, 93% of all ELABORATION relations, which are temporally subordinating, are embedded. Clause types appear to be a viable source of co-varying information in rhetorical relation prediction in the tasks under discussion.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
"text": "The aspects of syntactic-semantic form and pragmatic function in the relationship between granularity and rhetorical relations are of central interest in this investigation. Asher and Lascarides represent discourses hierarchically through coordination and subordination of information, which correspond to changes in granularity. However, while the notion of granularity enters into the motivation and formulation of the SDRT inventory, it is not developed further. These results potentially allow us to say something deeper about the structural organization of discourse as it relates to granularity.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
"text": "In particular, while there is some probabilistic leverage in collapsing categories, it is not the case that arbitrary categorizations will perform similarly. This observation holds true even for theoretically informed categorizations. For example, organizing the SDRT inventory into coordinated and subordinated relations yields lower performance on relation prediction. The coordinated and subordinated categories can be predicted with 80% accuracy, but the prediction of the individual relations given the category performs only at 70%. Since the granularity-based organization presented here performs better, we suggest that the pragmatic function of the relation is more systematic than the syntactic-semantic form of the relation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
"text": "Future research will focus on more data, different machine learning techniques (e.g., unsupervised learning), and automation. While clause, tense, aspect, and event are readily automatable, rhetorical relations and granularity are less so. Automatically extracting such information from an annotated corpus such as the Penn Discourse Treebank is certainly feasible. However, the distribution of genres in this corpus is somewhat limited (i.e., predominantly news text (Webber, 2009)) and calls into question the generalizability of results to other modes of discourse. Overall, we have demonstrated that including a granularity-based organization in the machine learning prediction of rhetorical relations increases performance by 37%, roughly 14% above previously reported results, for a broader range of discourses and relations.",
| "cite_spans": [ |
| { |
| "start": 470, |
| "end": 484, |
| "text": "(Webber, 2009)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Note that what is being predicted is the rhetorical relation, or associated granularity, with the second clause in a clause pair. Tasks were performed where clause information was paired, but this did not translate into improved accuracies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Thank you to Jeff Ondich and Ultralingua for facilitating this research and to four anonymous *SEM reviewers for insightful and constructive comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Logics of Conversation", |
| "authors": [ |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Asher", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press, Cambridge, UK.",
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "K*: An Instance-based Learner Using an Entropic Distance Measure", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "Leonard", |
| "middle": [ |
| "E" |
| ], |
| "last": "Cleary", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Trigg", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 12 International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "108--113", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "John G. Cleary and Leonard E. Trigg. 1995. K*: An Instance-based Learner Using an Entropic Distance Measure. In Proceedings of the 12th International Conference on Machine Learning, 108-113.",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 1960, |
| "venue": "", |
| "volume": "20", |
| "issue": "", |
| "pages": "37--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37-46.",
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Logic and Conversation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Grice", |
| "suffix": "" |
| } |
| ], |
| "year": 1975, |
| "venue": "Syntax and Semantics", |
| "volume": "3", |
| "issue": "", |
| "pages": "43--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Paul Grice. 1975. Logic and Conversation. In Syntax and Semantics, Vol. 3, Speech Acts, 43-85. Academic Press, New York.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "On The Coherence and Structure of Discourse", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Jerry", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hobbs", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "CSLI Technical Report", |
| "volume": "", |
| "issue": "", |
| "pages": "85--122", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jerry R. Hobbs. 1985. On The Coherence and Structure of Discourse. CSLI Technical Report, CSLI-85-37.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The Exploitation of Spatial Information in Narrative Discourse", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Blake", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Howald", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Katz", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Ninth International Workshop on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "175--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Blake Stephen Howald and Graham Katz. 2011. The Exploitation of Spatial Information in Narrative Discourse. In Proceedings of the Ninth International Workshop on Computational Semantics, 175-184.",
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Inferring Sentence Internal Temporal Relations", |
| "authors": [ |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the North American Association of Computational Linguistics (NAACL-04", |
| "volume": "", |
| "issue": "", |
| "pages": "153--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Mirella Lapata and Alex Lascarides. 2004. Inferring Sentence Internal Temporal Relations. In Proceedings of the North American Association of Computational Linguistics (NAACL-04), 153-160.",
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Improving Summarization Through Rhetorical Parsing Tuning", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of The 6th Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "206--215", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Marcu. 1998. Improving Summarization Through Rhetorical Parsing Tuning. In Proceedings of The 6th Workshop on Very Large Corpora, 206-215.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An Unsupervised Approach to Recognizing Discourse Relations", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdessamad", |
| "middle": [], |
| "last": "Echihabi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Association of Computational Linguistics (ACL-02", |
| "volume": "", |
| "issue": "", |
| "pages": "368--375", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of the Association of Computational Linguistics (ACL-02), 368-375.",
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
"title": "Granularity in Natural Language Discourse",
| "authors": [ |
| { |
| "first": "Rutu", |
| "middle": [], |
| "last": "Mulkar-Mehta", |
| "suffix": "" |
| }, |
| { |
| "first": "Jerry", |
| "middle": [ |
| "R" |
| ], |
| "last": "Hobbs", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)", |
| "volume": "", |
| "issue": "", |
| "pages": "195--204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Rutu Mulkar-Mehta, Jerry R. Hobbs and Eduard Hovy. 2011. Granularity in Natural Language Discourse. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011), 195-204.",
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Annotation of WordNet Verbs with TimeML Event Classes", |
| "authors": [ |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Puscasu", |
| "suffix": "" |
| }, |
| { |
| "first": "Verginica", |
| "middle": [], |
| "last": "Mititelu", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Georgiana Puscasu and Verginica Mititelu. 2008. Anno- tation of WordNet Verbs with TimeML Event Classes. Proceedings of the Sixth International Language Re- sources and Evaluation (LREC08)", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "TimeML: Robust Specification of Event and Temporal Expressions in Text", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Pustejovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [], |
| "last": "Casta\u00f1o", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Ingria", |
| "suffix": "" |
| }, |
| { |
| "first": "Roser", |
| "middle": [], |
| "last": "Saur", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Gaizauskas", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Setzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Katz", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Fith International Conference on Computational Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Pustejovsky, Jos\u00e9 Casta\u00f1o, Robert Ingria, Roser Saur, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2005. TimeML: Robust Specification of Event and Temporal Expressions in Text. In Proceedings of the Fith International Conference on Computational Semantics (IWCS 2005)", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "C4.5: Programs for Machine Learning", |
| "authors": [ |
| { |
| "first": "Ross", |
| "middle": [], |
| "last": "Quinlan", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ross Quinlan. 1993 C4.5: Programs for Machine Learn- ing. Morgan Kaufmann, San Francisco, CA.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Modes of Discourse: The Local Structure of Texts", |
| "authors": [ |
| { |
| "first": "Carlota", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlota Smith. 2003. Modes of Discourse: The Local Structure of Texts. Cambridge University Press, Cam- bridge, UK.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Exploiting Linguistic Cues to Classify Rhetorical Relations", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Sporleder", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "532--539", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Sporleder and Alex Lascarides. 2005. Exploit- ing Linguistic Cues to Classify Rhetorical Relations. In Proceedings of Recent Advances in Natural Lan- guage Processing (RANLP-05), 532-539.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Using Automatically Labelled Examples to Classify Rhetorical Relations: An Assessment", |
| "authors": [ |
| { |
| "first": "Caroline", |
| "middle": [], |
| "last": "Sporleder", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Lascarides", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Natural Language Engineering", |
| "volume": "14", |
| "issue": "", |
| "pages": "369--416", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Caroline Sporleder and Alex Lascarides. 2008. Using Automatically Labelled Examples to Classify Rhetori- cal Relations: An Assessment. Natural Language En- gineering, 14:369-416.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "An Empirical Approach to Temporal Reference Resolution", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "O'Hara", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorsten", |
| "middle": [], |
| "last": "\u00d6hrstr\u00f6m-Sandgren", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "McKeever", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "174--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe, Thomas O'Hara, Thorsten\u00d6hrstr\u00f6m- Sandgren and Kenneth McKeever. 1997. An Em- pirical Approach to Temporal Reference Resolution. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing (EMNLP- 97), 174-186.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Data Mining: Practical Machine Learning Techniques with Java Implementation", |
| "authors": [ |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Witten", |
| "suffix": "" |
| }, |
| { |
| "first": "Eibe", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ian Witten and Eibe Frank. 2005. Data Mining: Prac- tical Machine Learning Techniques with Java Imple- mentation (2nd Ed.) Morgan Kaufmann, San Fran- cisco, CA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Genre Distictions for Discourse in the Penn TreeBank", |
| "authors": [ |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th ACL Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "674--682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bonnie Webber 2009. Genre Distictions for Discourse in the Penn TreeBank. In Proceedings of the 47th ACL Conference, 674-682.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "statistics were computed and are within acceptable ranges: TENSE (99.65 / .9945), ASPECT (99.30 / .9937), SDRT (77.42 / .6850), and EVENT (75.88 / .6362).", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": "Distribution of Relations by Granularity Type.", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Relation</td><td>Number (Avg.)</td></tr><tr><td>Positive</td><td>515 (47%)</td></tr><tr><td>BACKGROUND</td><td>315 (61%)</td></tr><tr><td>ELABORATION</td><td>161 (31%)</td></tr><tr><td>EXPLANATION</td><td>39 (7%)</td></tr><tr><td>Negative</td><td>59 (5%)</td></tr><tr><td>CONSEQUENCE</td><td>16 (26%)</td></tr><tr><td>RESULT</td><td>43 (71%)</td></tr><tr><td>Maintenance</td><td>490 (44%)</td></tr><tr><td>ALTERNATION</td><td>76 (14%)</td></tr><tr><td>CONTINUATION</td><td>30 (6%)</td></tr><tr><td>NARRATION</td><td>384 (78%)</td></tr><tr><td colspan=\"2\">embedded (EM) or not -and GRANULARITY shift</td></tr><tr><td colspan=\"2\">categories which are an organization of the SDRT</td></tr><tr><td>rhetorical relations</td><td/></tr></table>" |
| }, |
| "TABREF1": { |
| "text": "Relation Prediction -Combined Modes. Collapsed 69.90 70.18 69.81 41 Combined 78.62 71.92 80.00 35", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Feature</td><td>J48</td><td>K*</td><td>NB</td><td>MCB</td></tr><tr><td colspan=\"4\">Uncollapsed 58.99 55.41 56.69</td><td>35</td></tr></table>" |
| }, |
| "TABREF2": { |
| "text": "Individual Relation Prediction Accuracies (%).", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"2\">Relation A</td><td>I</td><td>D</td><td>N</td><td>R</td><td>T</td></tr><tr><td>NAR.</td><td colspan=\"6\">73 55 100 100 94 96</td></tr><tr><td>RES.</td><td colspan=\"6\">75 88 85 100 100 93</td></tr><tr><td>BACK.</td><td colspan=\"3\">93 92 96</td><td>87</td><td colspan=\"2\">94 92</td></tr><tr><td>ELAB.</td><td colspan=\"3\">57 41 69</td><td>21</td><td colspan=\"2\">48 69</td></tr><tr><td colspan=\"3\">CONSEQ. 20 0</td><td>0</td><td>0</td><td>0</td><td>37</td></tr><tr><td>ALTER.</td><td colspan=\"2\">50 42</td><td>0</td><td>0</td><td colspan=\"2\">43 27</td></tr><tr><td>CONTIN.</td><td>8</td><td>0</td><td>0</td><td>0</td><td>0</td><td>23</td></tr><tr><td>EXPLAN.</td><td colspan=\"2\">0 20</td><td>0</td><td>9</td><td>0</td><td>2</td></tr><tr><td>Total</td><td colspan=\"3\">68 72 92</td><td>74</td><td colspan=\"2\">74 80</td></tr></table>" |
| } |
| } |
| } |
| } |