{
"paper_id": "P13-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:37:10.659234Z"
},
"title": "Combining Intra- and Multi-sentential Rhetorical Parsing for Document-level Discourse Analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": "",
"affiliation": {},
"email": "sjoty@qf.org.qa"
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": "",
"affiliation": {},
"email": "carenini@cs.ubc.ca"
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": "",
"affiliation": {},
"email": "mehdad@cs.ubc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the state-of-the-art, often by a wide margin.",
"pdf_parse": {
"paper_id": "P13-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the state-of-the-art, often by a wide margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Discourse of any kind is not formed by independent and isolated textual units, but by related and structured units. Discourse analysis seeks to uncover such structures underneath the surface of the text, and has been shown to be beneficial for text summarization (Louis et al., 2010; Marcu, 2000b) , sentence compression (Sporleder and Lapata, 2005 ), text generation (Prasad et al., 2005) , sentiment analysis (Somasundaran, 2010) and question answering (Verberne et al., 2007) .",
"cite_spans": [
{
"start": 263,
"end": 283,
"text": "(Louis et al., 2010;",
"ref_id": "BIBREF8"
},
{
"start": 284,
"end": 297,
"text": "Marcu, 2000b)",
"ref_id": "BIBREF12"
},
{
"start": 321,
"end": 348,
"text": "(Sporleder and Lapata, 2005",
"ref_id": "BIBREF17"
},
{
"start": 368,
"end": 389,
"text": "(Prasad et al., 2005)",
"ref_id": "BIBREF14"
},
{
"start": 411,
"end": 431,
"text": "(Somasundaran, 2010)",
"ref_id": "BIBREF15"
},
{
"start": 455,
"end": 478,
"text": "(Verberne et al., 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , one of the most influential theories of discourse, represents texts by labeled hierarchical structures, called Discourse Trees (DTs), as exemplified by a sample DT in Figure 1 . The leaves of a DT correspond to contiguous Elementary Discourse Units (EDUs) (six in the example). Adjacent EDUs are connected by rhetorical relations (e.g., Elaboration, Contrast), forming larger discourse units (represented by internal nodes), which in turn are also subject to this relation linking. Discourse units linked by a rhetorical relation are further distinguished based on their relative importance in the text: the nucleus being the central part and the satellite being the peripheral one. Discourse analysis in RST involves two subtasks: discourse segmentation is the task of identifying the EDUs, and discourse parsing is the task of linking the discourse units into a labeled tree.",
"cite_spans": [
{
"start": 34,
"end": 59,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While recent advances in automatic discourse segmentation and sentence-level discourse parsing have attained accuracies close to human performance (Fisher and Roark, 2007; Joty et al., 2012) , discourse parsing at the document-level still poses significant challenges (Feng and Hirst, 2012) and the performance of the existing document-level parsers (Hernault et al., 2010; Subba and Di-Eugenio, 2009) is still considerably inferior to the human gold standard. This paper aims to reduce this performance gap and take discourse parsing one step further. To this end, we address three key limitations of existing parsers as follows.",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Fisher and Roark, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 172,
"end": 190,
"text": "Joty et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 268,
"end": 290,
"text": "(Feng and Hirst, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 350,
"end": 373,
"text": "(Hernault et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 374,
"end": 401,
"text": "Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First, existing discourse parsers typically model the structure and the labels of a DT separately in a pipeline fashion, and also do not consider the sequential dependencies between the DT constituents, which has been recently shown to be critical (Feng and Hirst, 2012) . To address this limitation, as the first contribution, we propose a novel document-level discourse parser based on probabilistic discriminative parsing models, represented as Conditional Random Fields (CRFs) (Sutton et al., 2007) , to infer the probability of all possible DT constituents. The CRF models effectively represent the structure and the label of a DT constituent jointly, and whenever possible, capture the sequential dependencies between the constituents.",
"cite_spans": [
{
"start": 248,
"end": 270,
"text": "(Feng and Hirst, 2012)",
"ref_id": "BIBREF2"
},
{
"start": 481,
"end": 502,
"text": "(Sutton et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, existing parsers apply greedy and suboptimal parsing algorithms to build the DT for a document. To cope with this limitation, our CRF models support a probabilistic bottom-up parsing algorithm which is non-greedy and optimal. [Figure 1: Discourse tree for two sentences in RST-DT. Each of the sentences contains three EDUs. The second sentence has a well-formed discourse tree, but the first sentence does not have one.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Third, existing discourse parsers do not discriminate between intra-sentential parsing (i.e., building the DTs for the individual sentences) and multi-sentential parsing (i.e., building the DT for the document). However, we argue that distinguishing between these two conditions can result in more effective parsing. Two separate parsing models could exploit the fact that rhetorical relations are distributed differently intra-sententially vs. multi-sententially. Also, they could independently choose their own informative features. As another key contribution of our work, we devise two different parsing components: one for intra-sentential parsing, the other for multi-sentential parsing. This provides for scalable, modular and flexible solutions that can exploit the strong correlation observed between the text structure (sentence boundaries) and the structure of the DT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to develop a complete and robust discourse parser, we combine our intra-sentential and multi-sentential parsers in two different ways. Since most sentences have a well-formed discourse sub-tree in the full document-level DT (for example, the second sentence in Figure 1 ), our first approach constructs a DT for every sentence using our intra-sentential parser, and then runs the multi-sentential parser on the resulting sentence-level DTs. However, this approach would disregard those cases where rhetorical structures violate sentence boundaries. For example, consider the first sentence in Figure 1 . It does not have a well-formed sub-tree because the unit containing EDUs 2 and 3 merges with the next sentence and only then is the resulting unit merged with EDU 1. Our second approach, in an attempt to deal with these cases, builds sentence-level sub-trees by applying the intra-sentential parser on a sliding window covering two adjacent sentences and by then consolidating the results produced by overlapping windows. After that, the multi-sentential parser takes all these sentence-level sub-trees and builds a full rhetorical parse for the document.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 1",
"ref_id": null
},
{
"start": 601,
"end": 609,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While previous approaches have been tested on only one corpus, we evaluate our approach on texts from two very different genres: news articles and instructional how-to-do manuals. The results demonstrate that our contributions provide consistent and statistically significant improvements over previous approaches. Our final result compares very favorably to the result of state-of-the-art models in document-level discourse parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of the paper, after discussing related work in Section 2, we present our discourse parsing framework in Section 3. In Section 4, we describe the intra- and multi-sentential parsing components. Section 5 presents the two approaches to combine the two stages of parsing. The experiments and error analysis, followed by future directions, are discussed in Section 6. Finally, we summarize our contributions in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of staging document-level discourse parsing on top of sentence-level discourse parsing was investigated in (Marcu, 2000a; LeThanh et al., 2004) . These approaches mainly rely on discourse markers (or cues), and use hand-coded rules to build DTs for sentences first, then for paragraphs, and so on. However, rhetorical relations are often not explicitly signaled by discourse markers (Marcu and Echihabi, 2002) , and discourse structures do not always correspond to paragraph structures (Sporleder and Lascarides, 2004) . Therefore, rather than relying on hand-coded rules based on discourse markers, recent approaches employ supervised machine learning techniques with a large set of informative features. Hernault et al., (2010) present the publicly available HILDA parser. Given the EDUs in a document, HILDA iteratively employs two Support Vector Machine (SVM) classifiers in a pipeline to build the DT. In each iteration, a binary classifier first decides which of the adjacent units to merge, then a multi-class classifier connects the selected units with an appropriate relation label. They evaluate their approach on the RST-DT corpus (Carlson et al., 2002) of news articles. On a different genre of instructional texts, Subba and Di-Eugenio (2009) propose a shift-reduce parser that relies on a classifier for relation labeling. Their classifier uses Inductive Logic Programming (ILP) to learn first-order logic rules from a set of features including compositional semantics. In this work, we address the limitations of these models (described in Section 1) by introducing our novel discourse parser.",
"cite_spans": [
{
"start": 116,
"end": 130,
"text": "(Marcu, 2000a;",
"ref_id": "BIBREF11"
},
{
"start": 131,
"end": 152,
"text": "LeThanh et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 392,
"end": 418,
"text": "(Marcu and Echihabi, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 495,
"end": 527,
"text": "(Sporleder and Lascarides, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 715,
"end": 738,
"text": "Hernault et al., (2010)",
"ref_id": "BIBREF5"
},
{
"start": 1152,
"end": 1174,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF1"
},
{
"start": 1238,
"end": 1265,
"text": "Subba and Di-Eugenio (2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Given a document with sentences already segmented into EDUs, the discourse parsing problem is determining which discourse units (EDUs or larger units) to relate (i.e., the structure), and how to relate them (i.e., the labels or the discourse relations) in the resulting DT. Since we already have an accurate sentence-level discourse parser (Joty et al., 2012) , a straightforward approach to document-level parsing could be to simply apply this parser to the whole document. However, this strategy would be problematic because of scalability and modeling issues. Note that the number of valid trees grows exponentially with the number of EDUs in a document. 1 Therefore, an exhaustive search over the valid trees is often infeasible, even for relatively small documents. For modeling, the problem is two-fold. On the one hand, it appears that rhetorical relations are distributed differently intra-sententially vs. multi-sententially. For example, Figure 2 shows a comparison between the two distributions of the six most frequent relations. Notice that the relations Attribution and Same-Unit are more frequent than Joint in the intra-sentential case, whereas Joint is more frequent than the other two in the multi-sentential case. On the other hand, different kinds of features are applicable and informative for intra-sentential vs. multi-sentential parsing. For example, syntactic features like dominance sets (Soricut and Marcu, 2003) are extremely useful for sentence-level parsing, but are not even applicable in the multi-sentential case. Likewise, lexical chain features (Sporleder and Lascarides, 2004) , which are useful for multi-sentential parsing, are not applicable at the sentence level. Based on these observations, our discourse parsing framework comprises two separate modules: an intra-sentential parser and a multi-sentential parser (Figure 3) . First, the intra-sentential parser produces one or more discourse sub-trees for each sentence. Then, the multi-sentential parser generates a full DT for the document from these sub-trees. Both of our parsers have the same two components: a parsing model assigns a probability to every possible DT, and a parsing algorithm identifies the most probable DT among the candidate DTs in that scenario. While the two models are rather different, the same parsing algorithm is shared by the two modules. Staging multi-sentential parsing on top of intra-sentential parsing in this way allows us to exploit the strong correlation between the text structure and the DT structure, as explained in detail in Section 5. Before describing our parsing models and the parsing algorithm, we introduce some terminology that we will use throughout the paper.",
"cite_spans": [
{
"start": 340,
"end": 359,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 1376,
"end": 1401,
"text": "(Soricut and Marcu, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 1538,
"end": 1570,
"text": "(Sporleder and Lascarides, 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 946,
"end": 954,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1810,
"end": 1820,
"text": "(Figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Our Discourse Parsing Framework",
"sec_num": "3"
},
{
"text": "Following (Joty et al., 2012) , a DT can be formally represented as a set of constituents of the form R[i, m, j], referring to a rhetorical relation R between the discourse unit containing EDUs i through m and the unit containing EDUs m+1 through j. For example, the DT for the second sentence in Figure 1 can be represented as {Elaboration-NS[4,4,5], Same-Unit-NN[4,5,6]}. Notice that a relation R also specifies the nuclearity statuses of the discourse units involved, which can be one of Nucleus-Satellite (NS), Satellite-Nucleus (SN) and Nucleus-Nucleus (NN).",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Discourse Parsing Framework",
"sec_num": "3"
},
{
"text": "The job of our intra-sentential and multi-sentential parsing models is to assign a probability to each of the constituents of all possible DTs at the sentence level and at the document level, respectively. Formally, given the model parameters \u0398, for each possible constituent R[i, m, j] in a candidate DT at the sentence or document level, the parsing model estimates P (R[i, m, j]|\u0398), which specifies a joint distribution over the label R and the structure [i, m, j] of the constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Models and Parsing Algorithm",
"sec_num": "4"
},
{
"text": "Recently, we proposed a novel parsing model for sentence-level discourse parsing (Joty et al., 2012) , that outperforms previous approaches by effectively modeling sequential dependencies along with structure and labels jointly. Below we briefly describe the parsing model, and show how it is applied to obtain the probabilities of all possible DT constituents at the sentence level. Figure 4 shows the intra-sentential parsing model expressed as a Dynamic Conditional Random Field (DCRF) (Sutton et al., 2007) . The observed nodes U_j in a sequence represent the discourse units (EDUs or larger units). The first layer of hidden nodes comprises the structure nodes, where S_j \u2208 {0, 1} denotes whether two adjacent discourse units U_{j\u22121} and U_j should be connected or not. The second layer of hidden nodes comprises the relation nodes, with R_j \u2208 {1, ..., M} denoting the relation between two adjacent units U_{j\u22121} and U_j, where M is the total number of relations in the relation set. The connections between adjacent nodes in a hidden layer encode sequential dependencies between the respective hidden nodes, and can enforce constraints such as the fact that an S_j = 1 must not follow an S_{j\u22121} = 1. The connections between the two hidden layers model the structure and the relation of a DT (sentence-level) constituent jointly.",
"cite_spans": [
{
"start": 81,
"end": 100,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 489,
"end": 510,
"text": "(Sutton et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intra-Sentential Parsing Model",
"sec_num": "4.1"
},
{
"text": "To obtain the probability of the constituents of all candidate DTs for a sentence, we apply the parsing model recursively at different levels of the DT and compute the posterior marginals over the relation-structure pairs. (Figure 4: A chain-structured DCRF as our intra-sentential parsing model.) To illustrate the process, let us assume that the sentence contains four EDUs. At the first (bottom) level, when all the units are the EDUs, there is only one possible unit sequence, to which we apply our DCRF model (Figure 5(a)). We compute the posterior marginals P(R_2, S_2=1 | e_1, e_2, e_3, e_4, \u0398), P(R_3, S_3=1 | e_1, e_2, e_3, e_4, \u0398) and P(R_4, S_4=1 | e_1, e_2, e_3, e_4, \u0398) to obtain the probability of the constituents R[1, 1, 2], R[2, 2, 3] and R[3, 3, 4], respectively. At the second level, there are three possible unit sequences (e_{1:2}, e_3, e_4), (e_1, e_{2:3}, e_4) and (e_1, e_2, e_{3:4}). Figure 5(b) shows their corresponding DCRFs. The posterior marginals P(R_3, S_3=1 | e_{1:2}, e_3, e_4, \u0398), P(R_{2:3}, S_{2:3}=1 | e_1, e_{2:3}, e_4, \u0398), P(R_4, S_4=1 | e_1, e_{2:3}, e_4, \u0398) and P(R_{3:4}, S_{3:4}=1 | e_1, e_2, e_{3:4}, \u0398) computed from the three sequences correspond to the probability of the constituents R[1, 2, 3], R[1, 1, 3], R[2, 3, 4] and R[2, 2, 4], respectively. At this point what is left to be explained is how we generate all possible sequences for a given number of EDUs in a sentence. Algorithm 1 demonstrates how we do that. More specifically, to compute the probability of each DT constituent R[i, k, j], we need to generate sequences like (e_1, ..., e_{i\u22121}, e_{i:k}, e_{k+1:j}, e_{j+1}, ..., e_n) for 1 \u2264 i \u2264 k < j \u2264 n. In doing so, we may generate some duplicate sequences. Clearly, the sequence (e_1, ..., e_{i\u22121}, e_{i:i}, e_{i+1:j}, e_{j+1}, ..., e_n) for 1 \u2264 i \u2264 k < j < n is already considered for computing the probability of R[i + 1, j, j + 1]. Therefore, it is a duplicate sequence that we exclude from our list of all possible sequences.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 249,
"text": "Figure 4",
"ref_id": null
},
{
"start": 513,
"end": 524,
"text": "Figure 5(a)",
"ref_id": "FIGREF5"
},
{
"start": 923,
"end": 931,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Intra-Sentential Parsing Model",
"sec_num": "4.1"
},
{
"text": "Algorithm 1: Generating all possible sequences for a sentence with n EDUs.\nInput: sequence of EDUs (e_1, e_2, ..., e_n)\nOutput: list of sequences L\nfor i = 1 to n-1 do\n  for j = i+1 to n do\n    if j == n then\n      for k = i to j-1 do\n        L.append((e_1, ..., e_{i-1}, e_{i:k}, e_{k+1:j}, e_{j+1}, ..., e_n))\n      end\n    else\n      for k = i+1 to j-1 do\n        L.append((e_1, ..., e_{i-1}, e_{i:k}, e_{k+1:j}, e_{j+1}, ..., e_n))\n      end\n    end\n  end\nend",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-Sentential Parsing Model",
"sec_num": "4.1"
},
{
"text": "Once we obtain the probability of all possible DT constituents, the discourse sub-trees for the sentences are built by applying an optimal probabilistic parsing algorithm (Section 4.4) using one of the methods described in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-Sentential Parsing Model",
"sec_num": "4.1"
},
{
"text": "Given the discourse units (sub-trees) for all the sentences of a document, a simple approach to build the rhetorical tree of the document would be to apply a new DCRF model, similar to the one in Figure 4 (with different parameters), to all the possible sequences generated from these units to infer the probability of all possible higher-order constituents. However, the number of possible sequences and their length increase with the number of sentences in a document. For example, assuming that each sentence has a well-formed DT, for a document with n sentences, Algorithm 1 generates O(n 3 ) sequences, where the sequence at the bottom level has n units, each of the sequences at the second level has n-1 units, and so on. Since the model in Figure 4 has a \"fat\" chain structure,",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 4",
"ref_id": null
},
{
"start": 747,
"end": 755,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "Figure 6 : A CRF as a multi-sentential parsing model.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "we could use the forwards-backwards algorithm for exact inference in this model (Sutton and McCallum, 2012) . However, forwards-backwards on a sequence containing T units costs",
"cite_spans": [
{
"start": 80,
"end": 107,
"text": "(Sutton and McCallum, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "O(TM^2) time,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "where M is the number of relations in our relation set. This makes the chain-structured DCRF model impractical for multi-sentential parsing of long documents, since learning requires running inference on every training sequence, with an overall time complexity of O(TM^2 n^3) per document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "Our model for multi-sentential parsing is shown in Figure 6 . The two observed nodes U_{t\u22121} and U_t are two adjacent discourse units. The (hidden) structure node S \u2208 {0, 1} denotes whether the two units should be connected or not. The hidden node R \u2208 {1, ..., M} represents the relation between the two units. Notice that, like the previous model, this is also an undirected graphical model. It becomes a CRF if we directly model the hidden (output) variables by conditioning its clique potential (or factor) \u03c6 on the observed (input) variables:",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 59,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "P(R_t, S_t | x, \u0398) = (1 / Z(x, \u0398)) \u03c6(R_t, S_t | x, \u0398) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "where x represents input features extracted from the observed variables U_{t\u22121} and U_t, and Z(x, \u0398) is the partition function. We use a log-linear representation of the factor:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6(R_t, S_t | x, \u0398) = exp(\u0398^T f(R_t, S_t, x))",
"eq_num": "(2)"
}
],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "where f(R_t, S_t, x) is a feature vector derived from the input features x and the labels R_t and S_t, and \u0398 is the corresponding weight vector. Although this model is similar in spirit to the model in Figure 4 , we now break the chain structure, which makes the inference much faster (i.e., a complexity of O(M^2)). Breaking the chain structure also allows us to balance the data for training (an equal number of instances with S=1 and S=0), which dramatically reduces the learning time of the model. We apply our model to all possible adjacent units at all levels for the multi-sentential case, and compute the posterior marginals of the relation-structure pairs P(R_t, S_t=1 | U_{t\u22121}, U_t, \u0398) to obtain the probability of all possible DT constituents. Table 1 summarizes the features used in our parsing models, which are extracted from two adjacent units U_{t\u22121} and U_t. Since most of these features are adopted from previous studies (Joty et al., 2012; Hernault et al., 2010) , we briefly describe them.",
"cite_spans": [
{
"start": 937,
"end": 956,
"text": "(Joty et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 957,
"end": 979,
"text": "Hernault et al., 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 207,
"end": 215,
"text": "Figure 4",
"ref_id": null
},
{
"start": 754,
"end": 761,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Multi-Sentential Parsing Model",
"sec_num": "4.2"
},
{
"text": "Organizational features include the length of the units as the number of EDUs and tokens. It also includes the distances of the units from the beginning and end of the sentence (or text in the multi-sentential case). Text structural features indirectly capture the correlation between text structure and rhetorical structure by counting the number of sentence and paragraph boundaries in the units. Discourse markers (e.g., because, although) carry informative clues for rhetorical relations (Marcu, 2000a) . Rather than using a fixed list of discourse markers, we use an empirically learned lexical N-gram dictionary following (Joty et al., 2012) . This approach has been shown to be more robust and flexible across domains (Biran and Rambow, 2011; Hernault et al., 2010) . We also include part-of-speech (POS) tags for the beginning and end N tokens in a unit.",
"cite_spans": [
{
"start": 492,
"end": 506,
"text": "(Marcu, 2000a)",
"ref_id": "BIBREF11"
},
{
"start": 628,
"end": 647,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
},
{
"start": 750,
"end": 772,
"text": "Hernault et al., 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Used in our Parsing Models",
"sec_num": "4.3"
},
{
"text": "Intra & Multi-Sentential Number of EDUs in unit 1 (or unit 2). Number of tokens in unit 1 (or unit 2). Distance of unit 1 in EDUs to the beginning (or to the end). Distance of unit 2 in EDUs to the beginning (or to the end).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organizational features",
"sec_num": "8"
},
{
"text": "Multi-Sentential Number of sentences in unit 1 (or unit 2). Number of paragraphs in unit 1 (or unit 2). N-gram features (N \u2208 {1, 2, 3}) Intra & Multi-Sentential Beginning (or end) lexical N-grams in unit 1. Beginning (or end) lexical N-grams in unit 2. Beginning (or end) POS N-grams in unit 1. Beginning (or end) POS N-grams in unit 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text structural features",
"sec_num": "4"
},
{
"text": "Intra-Sentential Syntactic labels of the head node and the attachment node. Lexical heads of the head node and the attachment node. Dominance relationship between the two units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dominance set features",
"sec_num": "5"
},
{
"text": "Multi-Sentential Number of chains that start in unit 1 and end in unit 2. Number of chains that start (or end) in unit 1 (or in unit 2). Number of chains skipping both unit 1 and unit 2. Number of chains skipping unit 1 (or unit 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical chain features",
"sec_num": "8"
},
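As a rough sketch of how the multi-sentential lexical chain features above could be counted, assume each chain is reduced to a (first, last) sentence-index extent and each unit to an inclusive sentence range. The representation and names are our assumptions, not the paper's, and only a subset of the listed features is shown.

```python
# Hypothetical counting of lexical chain features; a chain is a
# (first_sentence, last_sentence) extent, a unit an inclusive
# (start_sentence, end_sentence) range.

def chain_features(chains, u1, u2):
    def starts_in(c, u):
        return u[0] <= c[0] <= u[1]

    def ends_in(c, u):
        return u[0] <= c[1] <= u[1]

    def skips(c, u):
        # The chain strictly covers the unit without starting/ending in it.
        return c[0] < u[0] and c[1] > u[1]

    return {
        "start_u1_end_u2": sum(starts_in(c, u1) and ends_in(c, u2) for c in chains),
        "skip_both": sum(skips(c, u1) and skips(c, u2) for c in chains),
        "skip_u1": sum(skips(c, u1) for c in chains),
        "skip_u2": sum(skips(c, u2) for c in chains),
    }
```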
{
"text": "Intra & Multi-Sentential Previous and next feature vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual features",
"sec_num": "2"
},
{
"text": "Intra & Multi-Sentential Root nodes of the left and right rhetorical sub-trees. Dominance set features (Soricut and Marcu, 2003) are very effective for intra-sentential parsing. We include syntactic labels and lexical heads of the head and attachment nodes, along with their dominance relationship, as features. Lexical chains (Morris and Hirst, 1991) are sequences of semantically related words that can indicate topic shifts. Features extracted from lexical chains have been shown to be useful for finding paragraph-level discourse structure (Sporleder and Lascarides, 2004). We compute lexical chains for a document following the approach proposed in (Galley and McKeown, 2003), which extracts lexical chains after performing word sense disambiguation. Following (Joty et al., 2012), we also encode contextual and rhetorical sub-structure features in our models. The rhetorical sub-structure features incorporate hierarchical dependencies between DT constituents.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Soricut and Marcu, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 298,
"end": 322,
"text": "(Morris and Hirst, 1991)",
"ref_id": "BIBREF13"
},
{
"start": 515,
"end": 547,
"text": "(Sporleder and Lascarides, 2004)",
"ref_id": "BIBREF18"
},
{
"start": 626,
"end": 652,
"text": "(Galley and McKeown, 2003)",
"ref_id": "BIBREF4"
},
{
"start": 738,
"end": 757,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Substructure features",
"sec_num": "2"
},
{
"text": "Given the probabilities of all possible DT constituents in the intra-sentential and multi-sentential scenarios, the job of the parsing algorithm is to find the most probable DT for that scenario. Following (Joty et al., 2012), we implement a probabilistic CKY-like bottom-up algorithm that computes the most likely parse using dynamic programming. Specifically, with n discourse units, we use the upper-triangular portion of the n\u00d7n dynamic programming table D. Given that U_x(0) and U_x(1) are the start and end EDU IDs of unit U_x:",
"cite_spans": [
{
"start": 204,
"end": 223,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D[i, j] = P(R[U_i(0), U_k(1), U_j(1)])",
"eq_num": "(3)"
}
],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
{
"text": "where k = argmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
{
"text": "{i \u2264 p \u2264 j} P(R[U_i(0), U_p(1), U_j(1)]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
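The probabilistic CKY-like bottom-up algorithm of Eq. 3 can be sketched in a few lines, under the simplifying assumption that a single callable `rel_prob(i, k, j)` returns the probability of the best relation joining units i..k with k+1..j (in the paper this comes from the CRF models); the function and its interface are our assumptions.

```python
# Sketch of the probabilistic CKY-style dynamic program (Eq. 3), not
# the authors' code. `rel_prob(i, k, j)` is a hypothetical stand-in
# for the CRF-derived probability of the best relation joining
# span [i..k] with span [k+1..j].

def build_most_likely_dt(n, rel_prob):
    """Fill the upper-triangular DP table D for n discourse units;
    return D and split-point backpointers for recovering the tree."""
    D = [[0.0] * n for _ in range(n)]
    back = [[None] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = 1.0  # a single unit is a trivial sub-tree
    for span in range(2, n + 1):              # span length, bottom-up
        for i in range(n - span + 1):
            j = i + span - 1
            best_p, best_k = float("-inf"), None
            for k in range(i, j):             # argmax over split points
                p = D[i][k] * D[k + 1][j] * rel_prob(i, k, j)
                if p > best_p:
                    best_p, best_k = p, k
            D[i][j], back[i][j] = best_p, best_k
    return D, back
```

Because every split point is scored and the best one kept at each cell, the returned tree is globally optimal, in contrast to the greedy algorithms discussed above.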
{
"text": "Note that, in contrast to previous studies on document-level parsing (Hernault et al., 2010; Subba and Di-Eugenio, 2009; Marcu, 2000b) , which use a greedy algorithm, our approach finds a discourse tree that is globally optimal.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "(Hernault et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 93,
"end": 120,
"text": "Subba and Di-Eugenio, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 121,
"end": 134,
"text": "Marcu, 2000b)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
{
"text": "5 Document-level Parsing Approaches Now that we have presented our intra-sentential and multi-sentential parsers, we are ready to describe how they can be effectively combined to perform document-level discourse analysis. Recall that a key motivation for two-stage parsing is that it allows us to capture the correlation between text structure and discourse structure in a scalable, modular and flexible way. Below we describe two different approaches to model this correlation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Algorithm",
"sec_num": "4.4"
},
{
"text": "A key finding from several previous studies on sentence-level discourse analysis is that most sentences have a well-formed discourse sub-tree in the full document-level DT (Joty et al., 2012; Fisher and Roark, 2007) . For example, Figure 7 (a) shows 10 EDUs in 3 sentences (see boxes), where the DTs for the sentences obey their respective sentence boundaries. The 1S-1S approach aims to maximally exploit this finding. It first constructs a DT for every sentence using our intra-sentential parser, and then it provides our multi-sentential parser with the sentence-level DTs to build the rhetorical parse for the whole document. ",
"cite_spans": [
{
"start": 172,
"end": 191,
"text": "(Joty et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 192,
"end": 215,
"text": "Fisher and Roark, 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 231,
"end": 239,
"text": "Figure 7",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "1S-1S (1 Sentence-1 Sub-tree)",
"sec_num": "5.1"
},
{
"text": "While the assumption made by 1S-1S clearly simplifies the parsing process, it totally ignores the cases where discourse structures violate sentence boundaries. For example, in the DT shown in Figure 7(b), sentence S2 does not have a well-formed sub-tree because some of its units attach to the left (4-5, 6) and some to the right (7). Vliet and Redeker (2011) refer to these cases as 'leaky' boundaries. Even though less than 5% of the sentences have leaky boundaries in RST-DT, in other corpora this can be true for a larger portion of the sentences. For example, we observe over 12% of sentences with leaky boundaries in the Instructional corpus of (Subba and Di-Eugenio, 2009). However, we notice that in most cases where a discourse structure violates sentence boundaries, the sentence's units are merged with units of its adjacent sentences, as in Figure 7(b). For example, this is true for 75% of the cases in our development set containing 20 news articles from RST-DT, and for 79% of the cases in our development set containing 20 how-to-do manuals from the Instructional corpus. Based on this observation, we propose a sliding window approach.",
"cite_spans": [
{
"start": 646,
"end": 674,
"text": "(Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 192,
"end": 203,
"text": "Figure 7(b)",
"ref_id": "FIGREF7"
},
{
"start": 840,
"end": 851,
"text": "Figure 7(b)",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Sliding Window",
"sec_num": "5.2"
},
{
"text": "In this approach, our intra-sentential parser works with a window of two consecutive sentences and builds a DT for the two sentences. For example, given the three sentences in Figure 7, our intra-sentential parser constructs a DT for S1-S2 and a DT for S2-S3. In this process, each sentence in a document except the first and the last will be associated with two DTs: one with the previous sentence (say DT_p) and one with the next (say DT_n). In other words, for each non-boundary sentence, we will have two decisions: one from DT_p and one from DT_n. Our parser consolidates the two decisions and generates one or more sub-trees for each sentence by checking the following three mutually exclusive conditions one after another:",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Figure 7",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Sliding Window",
"sec_num": "5.2"
},
{
"text": "\u2022 Same in both: If the sentence has the same (in terms of both structure and labels) well-formed sub-tree in both DT_p and DT_n, we take this sub-tree for the sentence. For example, in Figure 8(a), S2 has the same sub-tree in the two DTs, i.e., a DT for S1-S2 and a DT for S2-S3. The two decisions agree on the DT for the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 196,
"text": "Figure 8(a)",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Sliding Window",
"sec_num": "5.2"
},
{
"text": "\u2022 Different but no cross: If the sentence has a well-formed sub-tree in both DT_p and DT_n, but the two sub-trees vary either in structure or in labels, we pick the most probable one. For example, consider the DT for S1-S2 in Figure 8(a) and the DT for S2-S3 in Figure 8(b). In both cases S2 has a well-formed sub-tree, but the two differ in structure. We pick the sub-tree that has the higher probability in the two dynamic programming tables. \u2022 Cross: If either or both of DT_p and DT_n segment the sentence into multiple sub-trees, we pick the one with more sub-trees. For example, consider the two DTs in Figure 8(c). In the DT for S1-S2, S2 has three sub-trees (4-5, 6, 7), whereas in the DT for S2-S3, it has two (4-6, 7). So, we extract the three sub-trees for S2 from the first DT. If the sentence has the same number of sub-trees in both DT_p and DT_n, we pick the one with the higher probability in the dynamic programming tables.",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 8",
"ref_id": "FIGREF8"
},
{
"start": 270,
"end": 281,
"text": "Figure 8(b)",
"ref_id": "FIGREF8"
},
{
"start": 617,
"end": 628,
"text": "Figure 8(c)",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Sliding Window",
"sec_num": "5.2"
},
{
"text": "At the end, the multi-sentential parser takes all these sentence-level sub-trees for a document, and builds a full rhetorical parse for the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sliding Window",
"sec_num": "5.2"
},
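The three consolidation conditions can be sketched as a single decision function. Here `prev` and `nxt` are hypothetical lists of (sub-tree, probability) pairs that a sentence receives from DT_p and DT_n; the representation is an illustration of the decision rules above, not the authors' implementation.

```python
# Hypothetical consolidation of the two sliding-window decisions for
# one sentence; `prev`/`nxt` are lists of (subtree, prob) pairs.

def consolidate(prev, nxt):
    """Apply the three mutually exclusive conditions in order and
    return the chosen list of sub-trees for the sentence."""
    prev_trees = [t for t, _ in prev]
    nxt_trees = [t for t, _ in nxt]
    prob = lambda dec: max(p for _, p in dec)  # proxy for the DP-table score

    # Same in both: one identical well-formed sub-tree on each side.
    if len(prev) == 1 and len(nxt) == 1 and prev_trees == nxt_trees:
        return prev_trees
    # Different but no cross: both well-formed; pick the more probable.
    if len(prev) == 1 and len(nxt) == 1:
        return prev_trees if prob(prev) >= prob(nxt) else nxt_trees
    # Cross: prefer the decision with more sub-trees; break ties on
    # probability.
    if len(prev) != len(nxt):
        return prev_trees if len(prev) > len(nxt) else nxt_trees
    return prev_trees if prob(prev) >= prob(nxt) else nxt_trees
```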
{
"text": "While previous studies on document-level parsing only report their results on a particular corpus, to show the generality of our method, we experiment with texts from two very different genres. Our first corpus is the standard RST-DT (Carlson et al., 2002), which consists of 385 Wall Street Journal articles and is partitioned into a training set of 347 documents and a test set of 38 documents. 53 documents, selected from both sets, were annotated by two annotators, based on which we measure human agreement. In RST-DT, the original 25 rhetorical relations defined by (Mann and Thompson, 1988) are further divided into a set of 18 coarser relation classes with 78 finer-grained relations. Our second corpus is the Instructional corpus prepared by (Subba and Di-Eugenio, 2009), which contains 176 how-to-do manuals on home repair. The corpus was annotated with 26 informational relations (e.g., Preparation-Act, Act-Goal).",
"cite_spans": [
{
"start": 234,
"end": 256,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF1"
},
{
"start": 573,
"end": 598,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF9"
},
{
"start": 752,
"end": 780,
"text": "(Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "6.1"
},
{
"text": "We experiment with our discourse parser on the two datasets using our two different parsing approaches, namely 1S-1S and the sliding window. We compare our approach with HILDA (Hernault et al., 2010) on RST-DT, and with the ILP-based approach of (Subba and Di-Eugenio, 2009) on the Instructional corpus, since they are the state-of-the-art on the respective genres. On RST-DT, the standard split was used for training and testing purposes. The results for HILDA were obtained by running the system with default settings on the same inputs we provided to our system. Since we could not run the ILP-based system of (Subba and Di-Eugenio, 2009) (not publicly available) on the Instructional corpus, we report the performances presented in their paper. They used 151 documents for training and 25 documents for testing. Since we did not have access to their particular split, we took 5 random samples of 151 documents for training and 25 documents for testing, and report the average performance over the 5 test sets.",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "(Hernault et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 246,
"end": 274,
"text": "(Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
},
{
"start": 612,
"end": 639,
"text": "(Subba and Di-Eugenio, 2009",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.2"
},
{
"text": "To evaluate the parsing performance, we use the standard unlabeled (i.e., hierarchical spans) and labeled (i.e., nuclearity and relation) precision, recall and F-score, as described in (Marcu, 2000b). To compare with previous studies, our experiments on RST-DT use the 18 coarser relations. After attaching the nuclearity statuses (NS, SN, NN) to these relations, we get 41 distinct relations. Following (Subba and Di-Eugenio, 2009) on the Instructional corpus, we use 26 relations, and treat the reversals of non-commutative relations as separate relations. That is, Goal-Act and Act-Goal are considered as two different relations. Attaching the nuclearity statuses to these relations gives 76 distinct relations. Analogous to previous studies, we map the n-ary relations (e.g., Joint) into nested right-branching binary relations. Table 2 presents F-score parsing results for our parsers and the existing systems on the two corpora. 2 On both corpora, our parsers, namely 1S-1S (TSP 1-1) and sliding window (TSP SW), outperform existing systems by a wide margin (p<7.1e-05). 3 On RST-DT, our parsers achieve absolute F-score improvements of 8%, 9.4% and 11.4% in span, nuclearity and relation, respectively, over HILDA. This represents relative error reductions of 32%, 23% and 21% in span, nuclearity and relation, respectively. Our results are also close to the upper bound, i.e., human agreement on this corpus.",
"cite_spans": [
{
"start": 184,
"end": 198,
"text": "(Marcu, 2000b)",
"ref_id": "BIBREF12"
},
{
"start": 404,
"end": 432,
"text": "(Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 833,
"end": 840,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.2"
},
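The unlabeled span metric mentioned above can be illustrated by representing each tree as a set of (start EDU, end EDU) spans. This is a sketch of the standard PARSEVAL-style computation at the span level only (not nuclearity or relations), not the evaluation code used in the paper.

```python
# Illustrative span-level precision/recall/F-score; a tree is reduced
# to the set of EDU spans covered by its internal nodes.

def span_prf(gold_spans, pred_spans):
    matched = len(gold_spans & pred_spans)
    p = matched / len(pred_spans)
    r = matched / len(gold_spans)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

With manual segmentation, both trees contain the same number of spans, so precision, recall and F-score coincide, as the footnote on evaluation notes.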
{
"text": "On the Instructional genre, our parsers deliver absolute F-score improvements of 10.5%, 13.6% and 8.14% in span, nuclearity and relations, respectively, over the ILP-based approach. Our parsers, therefore, reduce errors by 36%, 27% and 13% in span, nuclearity and relations, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "6.3"
},
{
"text": "If we compare the performance of our parsers on the two corpora, we observe higher results on RST-DT. This can be explained in at least two ways. First, the Instructional corpus has a smaller amount of data with a larger set of relations (76 when nuclearity attached). Second, some frequent relations are (semantically) very similar (e.g., Preparation-Act, Step1-Step2), which makes it difficult even for the human annotators to distinguish them (Subba and Di-Eugenio, 2009) .",
"cite_spans": [
{
"start": 446,
"end": 474,
"text": "(Subba and Di-Eugenio, 2009)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "6.3"
},
{
"text": "Comparison between our two models reveals that TSP SW significantly outperforms TSP 1-1 only in finding the right structure on both corpora (p<0.01). Not surprisingly, the improvement is higher on the Instructional corpus. A likely explanation is that the Instructional corpus contains more leaky boundaries (12%), allowing the sliding window approach to be more effective in finding those, without inducing much noise for the labels. This clearly demonstrates the potential of TSP SW for datasets with even more leaky boundaries, e.g., the Dutch (Vliet and Redeker, 2011) and the German Potsdam (Stede, 2004) corpora. Error analysis reveals that although TSP SW finds more correct structures, a corresponding improvement in labeling relations is not present, because in a few cases it tends to induce noise from the neighboring sentences for the labels. For example, when parsing is performed on the first sentence in Figure 1 in isolation using 1S-1S, our parser rightly identifies the Contrast relation between EDUs 2 and 3. But when it is considered with its neighboring sentences by the sliding window, the parser labels it as Elaboration. A promising strategy to deal with this and similar problems, which we plan to explore in future work, is to apply both approaches to each sentence and combine them by consolidating three probabilistic decisions, i.e., the one from 1S-1S and the two from the sliding window.",
"cite_spans": [
{
"start": 546,
"end": 571,
"text": "(Vliet and Redeker, 2011)",
"ref_id": "BIBREF24"
},
{
"start": 595,
"end": 608,
"text": "(Stede, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 919,
"end": 927,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "6.3"
},
{
"text": "To further analyze the errors made by our parser on the hardest task, relation labeling, Figure 9 presents the confusion matrix for TSP 1-1 on the RST-DT test set. The relation labels are ordered according to their frequency in the RST-DT training set. In general, the errors are produced by two different causes acting together: (i) the imbalanced distribution of the relations, and (ii) semantic similarity between the relations. The most frequent relation, Elaboration, tends to mislead others, especially the ones that are semantically similar (e.g., Explanation, Background) and less frequent (e.g., Summary, Evaluation). Relations that are semantically similar also mislead each other (e.g., Temporal:Background, Cause:Explanation).",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 9",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "6.3"
},
{
"text": "These observations suggest two ways to improve our parser: employing a more robust method (e.g., ensemble methods with bagging) to deal with the imbalanced distribution of relations, and exploiting richer semantic knowledge (e.g., compositional semantics) to cope with the errors caused by semantic similarity between the rhetorical relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "6.3"
},
{
"text": "In this paper, we have presented a novel discourse parser that applies an optimal parsing algorithm to probabilities inferred from two CRF models: one for intra-sentential parsing and the other for multi-sentential parsing. The two models exploit their own informative feature sets and the distributional variations of the relations in the two parsing conditions. We have also presented two novel approaches to combine them effectively. Empirical evaluations on two different genres demonstrate that our approach yields substantial improvement over existing methods in discourse parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For n + 1 EDUs, the number of valid discourse trees is actually the Catalan number C_n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Precision, Recall and F-score are the same when manual segmentation is used (see Marcu (2000b), page 143). 3 Since we did not have access to the output or to the system of (Subba and Di-Eugenio, 2009), we were not able to perform a significance test on the Instructional corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to Frank Tompa and the anonymous reviewers for their comments, and the NSERC BIN and CGS-D for financial support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Identifying Justifications in Written Dialogs by Classifying Text as Argumentative",
"authors": [
{
"first": "O",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2011,
"venue": "International Journal of Semantic Computing",
"volume": "5",
"issue": "4",
"pages": "363--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Biran and O. Rambow. 2011. Identifying Justi- fications in Written Dialogs by Classifying Text as Argumentative. International Journal of Semantic Computing, 5(4):363-381.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "RST Discourse Treebank (RST-DT) LDC2002T07. Linguistic Data Consortium",
"authors": [
{
"first": "L",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Carlson, D. Marcu, and M. Okurowski. 2002. RST Discourse Treebank (RST-DT) LDC2002T07. Lin- guistic Data Consortium, Philadelphia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Text-level Discourse Parsing with Rich Linguistic Features",
"authors": [
{
"first": "V",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, ACL '12",
"volume": "",
"issue": "",
"pages": "60--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Feng and G. Hirst. 2012. Text-level Discourse Pars- ing with Rich Linguistic Features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, ACL '12, pages 60-68, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Utility of Parsederived Features for Automatic Discourse Segmentation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, ACL '07",
"volume": "",
"issue": "",
"pages": "488--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Fisher and B. Roark. 2007. The Utility of Parse- derived Features for Automatic Discourse Segmen- tation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, ACL '07, pages 488-495, Prague, Czech Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving Word Sense Disambiguation in Lexical Chaining",
"authors": [
{
"first": "M",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI '07",
"volume": "",
"issue": "",
"pages": "1486--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Galley and K. McKeown. 2003. Improving Word Sense Disambiguation in Lexical Chaining. In Pro- ceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI '07, pages 1486- 1488, Acapulco, Mexico.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HILDA: A Discourse Parser Using Support Vector Machine Classification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue and Discourse",
"volume": "1",
"issue": "3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hernault, H. Prendinger, D. duVerle, and M. Ishizuka. 2010. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dia- logue and Discourse, 1(3):1-33.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Novel Discriminative Framework for Sentence-Level Discourse Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "R",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "904--915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Joty, G. Carenini, and R. T. Ng. 2012. A Novel Discriminative Framework for Sentence-Level Dis- course Analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, EMNLP-CoNLL '12, pages 904- 915, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generating Discourse Structures for Written Texts",
"authors": [
{
"first": "H",
"middle": [],
"last": "Lethanh",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Abeysinghe",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Huyck",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. LeThanh, G. Abeysinghe, and C. Huyck. 2004. Generating Discourse Structures for Written Texts. In Proceedings of the 20th international confer- ence on Computational Linguistics, COLING '04, Geneva, Switzerland. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discourse Indicators for Content Selection in Summarization",
"authors": [
{
"first": "A",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10",
"volume": "",
"issue": "",
"pages": "147--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Louis, A. Joshi, and A. Nenkova. 2010. Discourse Indicators for Content Selection in Summarization. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10, pages 147-156, Tokyo, Japan. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Rhetorical Structure Theory: Toward a Functional Theory of Text Organization",
"authors": [
{
"first": "W",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Mann and S. Thompson. 1988. Rhetorical Struc- ture Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Unsupervised Approach to Recognizing Discourse Relations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Echihabi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "368--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and A. Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of the 40th Annual Meeting on Associa- tion for Computational Linguistics, ACL '02, pages 368-375. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Rhetorical Parsing of Unrestricted Texts: A Surface-based Approach",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "395--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu. 2000a. The Rhetorical Parsing of Unre- stricted Texts: A Surface-based Approach. Compu- tational Linguistics, 26:395-448.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu. 2000b. The Theory and Practice of Dis- course Parsing and Summarization. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexical Cohesion Computed by Thesaural Relations as an Indicator of Structure of Text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 1991,
"venue": "Computational Linguistics",
"volume": "17",
"issue": "1",
"pages": "21--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Morris and G. Hirst. 1991. Lexical Cohesion Computed by Thesaural Relations as an Indicator of Structure of Text. Computational Linguistics, 17(1):21-48.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Penn Discourse Tree-Bank as a Resource for Natural Language Generation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Corpus Linguistics Workshop on Using Corpora for Natural Language Generation",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Prasad, A. Joshi, N. Dinesh, A. Lee, E. Miltsakaki, and B. Webber. 2005. The Penn Discourse Tree- Bank as a Resource for Natural Language Gener- ation. In Proceedings of the Corpus Linguistics Workshop on Using Corpora for Natural Language Generation, pages 25-32, Birmingham, U.K.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discourse-Level Relations for Opinion Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Somasundaran",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Somasundaran, 2010. Discourse-Level Relations for Opinion Analysis. PhD thesis, University of Pitts- burgh.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentence Level Discourse Parsing Using Syntactic and Lexical Information",
"authors": [
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL-HLT '03",
"volume": "",
"issue": "",
"pages": "149--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Soricut and D. Marcu. 2003. Sentence Level Discourse Parsing Using Syntactic and Lexical In- formation. In Proceedings of the 2003 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics on Human Lan- guage Technology, NAACL-HLT '03, pages 149- 156, Edmonton, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Discourse Chunking and its Application to Sentence Compression",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "257--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sporleder and M. Lapata. 2005. Discourse Chunk- ing and its Application to Sentence Compression. In Proceedings of the conference on Human Lan- guage Technology and Empirical Methods in Nat- ural Language Processing, pages 257-264, Van- couver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Combining Hierarchical Clustering and Machine Learning to Predict High-Level Discourse Structure",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sporleder and A. Lascarides. 2004. Combining Hi- erarchical Clustering and Machine Learning to Pre- dict High-Level Discourse Structure. In Proceed- ings of the 20th international conference on Compu- tational Linguistics, COLING '04, Geneva, Switzer- land. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Potsdam Commentary Corpus",
"authors": [
{
"first": "M",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL-04 Workshop on Discourse Annotation, Barcelona. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Stede. 2004. The Potsdam Commentary Corpus. In Proceedings of the ACL-04 Workshop on Dis- course Annotation, Barcelona. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An Effective Discourse Parser that Uses Rich Linguistic Information",
"authors": [
{
"first": "R",
"middle": [],
"last": "Subba",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Di-Eugenio",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL '09",
"volume": "",
"issue": "",
"pages": "566--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Subba and B. Di-Eugenio. 2009. An Effective Dis- course Parser that Uses Rich Linguistic Information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, HLT-NAACL '09, pages 566-574, Boul- der, Colorado. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An Introduction to Conditional Random Fields. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "4",
"issue": "",
"pages": "267--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sutton and A. McCallum. 2012. An Introduction to Conditional Random Fields. Foundations and Trends in Machine Learning, 4(4):267-373.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Rohanimanesh",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Machine Learning Research",
"volume": "8",
"issue": "",
"pages": "693--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sutton, A. McCallum, and K. Rohanimanesh. 2007. Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. Journal of Machine Learning Re- search (JMLR), 8:693-723.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluating Discourse-based Answer Extraction for Why-question Answering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Verberne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Boves",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Oostdijk",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Coppen",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "735--736",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Verberne, L. Boves, N. Oostdijk, and P. Coppen. 2007. Evaluating Discourse-based Answer Extrac- tion for Why-question Answering. In Proceedings of the 30th annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 735-736, Amsterdam, The Nether- lands. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Complex Sentences as Leaky Units in Discourse Parsing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Vliet",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Redeker",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Constraints in Discourse, Agay-Saint Raphael",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Vliet and G. Redeker. 2011. Complex Sentences as Leaky Units in Discourse Parsing. In Proceedings of Constraints in Discourse, Agay-Saint Raphael, September.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Distributions of six most frequent relations in intra-sentential and multi-sentential parsing scenarios."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Discourse parsing framework. frequent relations on a development set containing 20 randomly selected documents from RST-DT."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": ", 4] and R[2, 2, 4], respectively. Similarly, we attain the probability of the constituents R[1, 1, 4], R[1, 2, 4] and R[1, 3, 4] by computing their respective posterior marginals from the three possible sequences at the third (top) level."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Our parsing model applied to the sequences at different levels of a sentence-level DT. (a) Only possible sequence at the first level, (b) Three possible sequences at the second level, (c) Three possible sequences at the third level."
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Two possible DTs for three sentences."
},
"FIGREF8": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Extracting sub-trees for S 2 ."
},
"FIGREF9": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "T-O T-CM M-M CMP EV SU CND EN CA TE EX BA CO JO S-Confusion matrix for relation labels on the RST-DT test set. Y-axis represents true and X-axis represents predicted relations. The relations are Topic-Change (T-C), Topic-Comment (T-CM), Textual Organization (T-O), Manner-Means (M-M), Comparison (CMP), Evaluation (EV), Summary (SU), Condition (CND), Enablement (EN), Cause (CA), Temporal (TE), Explanation (EX), Background (BA), Contrast (CO), Joint (JO), Same-Unit (S-U), Attribution (AT) and Elaboration (EL)."
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "Features used in our parsing models.Lexico-syntactic features dominance sets",
"type_str": "table",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"text": "Parsing results of different models using manual (gold) segmentation. Performances significantly superior to HILDA (with p<7.1e-05) are denoted by *. Significant differences between TSP 1-1 and TSP SW (with p<0.01) are denoted by \u2020.",
"type_str": "table",
"html": null
}
}
}
}