{
"paper_id": "P14-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:05:48.829553Z"
},
"title": "A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing",
"authors": [
{
"first": "Vanessa",
"middle": [
"Wei"
],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto Toronto",
"location": {
"region": "ON",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto Toronto",
"location": {
"region": "ON",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.",
"pdf_parse": {
"paper_id": "P14-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units. While research in discourse parsing can be partitioned into several directions according to different theories and frameworks, Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is probably the most ambitious one, because it aims to identify not only the discourse relations in a small local context, but also the hierarchical tree structure for the full text: from the relations relating the smallest discourse units (called elementary discourse units, EDUs), to the ones connecting paragraphs.",
"cite_spans": [
{
"start": 284,
"end": 309,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For example, Figure 1 shows a text fragment consisting of two sentences with four EDUs in total (e 1 -e 4 ). Its discourse tree representation is shown below the text, following the notation convention of RST: the two EDUs e 1 and e 2 are related by a mononuclear relation CONSEQUENCE, where e 2 is the more salient span (called nucleus, and e 1 is called satellite); e 3 and e 4 are related by another mononuclear relation CIRCUMSTANCE, with e 4 as the nucleus; the two spans e 1:2 and e 3:4 are further related by a multi-nuclear relation SE-QUENCE, with both spans as the nucleus.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conventionally, there are two major sub-tasks related to text-level discourse parsing: (1) EDU segmentation: to segment the raw text into EDUs, and (2) tree-building: to build a discourse tree from EDUs, representing the discourse relations in the text. Since the first sub-task is considered relatively easy, with the state-of-art accuracy at above 90% (Joty et al., 2012) , the recent research focus is on the second sub-task, and often uses manual EDU segmentation.",
"cite_spans": [
{
"start": 354,
"end": 373,
"text": "(Joty et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current state-of-the-art overall accuracy of the tree-building sub-task, evaluated on the RST Discourse Treebank (RST-DT, to be introduced in Section 8), is 55.73% by Joty et al. (2013) . However, as an optimal discourse parser, Joty et al.'s model is highly inefficient in practice, with respect to both their DCRF-based local classifiers, and their CKY-like bottom-up parsing algorithm. DCRF (Dynamic Conditional Random Fields) is a generalization of linear-chain CRFs, in which each time slice contains a set of state variables and edges (Sutton et al., 2007) . CKY parsing is a bottom-up parsing algorithm which searches all possible parsing paths by dynamic programming. Therefore, despite its superior performance, their model is infeasible in most realistic situations.",
"cite_spans": [
{
"start": 171,
"end": 189,
"text": "Joty et al. (2013)",
"ref_id": "BIBREF7"
},
{
"start": 545,
"end": 566,
"text": "(Sutton et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main objective of this work is to develop a more efficient discourse parser, with similar or even better performance with respect to Joty et al.'s optimal parser, but able to produce parsing results in real time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is three-fold. First, with a greedy bottom-up strategy, we develop a discourse parser with a time complexity linear in the total number of sentences in the document. As a result of successfully avoiding the expensive nongreedy parsing algorithms, our discourse parser is very efficient in practice. Second, by using two linear-chain CRFs to label a sequence of discourse constituents, we can incorporate contextual information in a more natural way, compared to using traditional discriminative classifiers, such as SVMs. Specifically, in the Viterbi decoding of the first CRF, we include additional constraints elicited from common sense, to make more effective local decisions. Third, after a discourse (sub)tree is fully built from bottom up, we perform a novel post-editing process by considering information from the constituents on upper levels. We show that this post-editing can further improve the overall parsing performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The HILDA discourse parser by Hernault et al. (2010) is the first attempt at RST-style text-level discourse parsing. It adopts a pipeline framework, and greedily builds the discourse tree from the bottom up. In particular, starting from EDUs, at each step of the tree-building, a binary SVM classifier is first applied to determine which pair of adjacent discourse constituents should be merged to form a larger span, and another multi-class SVM classifier is then applied to assign the type of discourse relation that holds between the chosen pair.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "Hernault et al. (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HILDA discourse parser",
"sec_num": "2.1"
},
{
"text": "The strength of HILDA's greedy tree-building strategy is its efficiency in practice. Also, the employment of SVM classifiers allows the incorporation of rich features for better data representation (Feng and Hirst, 2012) . However, HILDA's approach also has obvious weakness: the greedy algorithm may lead to poor performance due to local optima, and more importantly, the SVM classifiers are not well-suited for solving structural problems due to the difficulty of taking context into account.",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "(Feng and Hirst, 2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HILDA discourse parser",
"sec_num": "2.1"
},
{
"text": "Joty et al. (2013) approach the problem of textlevel discourse parsing using a model trained by Conditional Random Fields (CRF). Their model has two distinct features. First, they decomposed the problem of textlevel discourse parsing into two stages: intrasentential parsing to produce a discourse tree for each sentence, followed by multi-sentential parsing to combine the sentence-level discourse trees and produce the text-level discourse tree. Specifically, they employed two separate models for intra-and multi-sentential parsing. Their choice of two-stage parsing is well motivated for two reasons: (1) it has been shown that sentence boundaries correlate very well with discourse boundaries, and (2) the scalability issue of their CRFbased models can be overcome by this decomposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joty et al.'s joint model",
"sec_num": "2.2"
},
{
"text": "Second, they jointly modeled the structure and the relation for a given pair of discourse units. For example, Figure 2 shows their intra-sentential model, in which they use the bottom layer to represent discourse units; the middle layer of binary nodes to predict the connection of adjacent discourse units; and the top layer of multi-class nodes to predict the type of the relation between two units. Their model assigns a probability to each possible constituent, and a CKY-like parsing algorithm finds the globally optimal discourse tree, given the computed probabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Joty et al.'s joint model",
"sec_num": "2.2"
},
{
"text": "The strength of Joty et al.'s model is their joint modeling of the structure and the relation, such that information from each aspect can interact with the other. However, their model has a major defect in its inefficiency, or even infeasibility, for application in practice. The inefficiency lies in both their DCRF-based joint model, on which inference is usually slow, and their CKY-like parsing algorithm, whose issue is more prominent. Due to the O(n 3 ) time complexity, where n is the number",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joty et al.'s joint model",
"sec_num": "2.2"
},
{
"text": "Unit sequence at level i of input discourse units, for large documents, the parsing simply takes too long 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "3 Overall work flow Figure 3 demonstrates the overall work flow of our discourse parser. The general idea is that, similar to Joty et al. 2013, we perform a sentence-level parsing for each sentence first, followed by a textlevel parsing to generate a full discourse tree for the whole document. However, in addition to efficiency (to be shown in Section 6), our discourse parser has a distinct feature, which is the postediting component (to be introduced in Section 5), as outlined in dashes.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "Our discourse parser works as follows. A document D is first segmented into a list of sentences. Each sentence S i , after being segmented into EDUs (not shown in the figure), goes through an intra-sentential bottom-up tree-building model M intra , to form a sentence-level discourse tree T S i , with the EDUs as leaf nodes. After that, we apply the intra-sentential post-editing model P intra to modify the generated tree T S i to T p S i , by considering upper-level information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "We then combine all sentence-level discourse tree T p S i 's using our multi-sentential bottom-up tree-building model M multi to generate the textlevel discourse tree T D . Similar to sentence-level parsing, we also post-edit T D using P multi to produce the final discourse tree T p D .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "1 The largest document in the RST-DT contains over 180 sentences, i.e., n > 180 for their multi-sentential CKY parsing. Intuitively, suppose the average time to compute the probability of each constituent is 0.01 second, then in total, the CKY-like parsing takes over 16 hours. It is possible to optimize Joty et al.'s CKY-like parsing by replacing their CRFbased computation for upper-level constituents with some local computation based on the probabilities of lower-level constituents. However, such optimization is beyond the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "4 Bottom-up tree-building For both intra-and multi-sentential parsing, our bottom-up tree-building process adopts a similar greedy pipeline framework like the HILDA discourse parser (discussed in Section 2.1), to guarantee efficiency for large documents. In particular, starting from the constituents on the bottom level (EDUs for intra-sentential parsing and sentence-level discourse trees for multi-sentential parsing), at each step of the tree-building, we greedily merge a pair of adjacent discourse constituents such that the merged constituent has the highest probability as predicted by our structure model. The relation model is then applied to assign the relation to the new constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "Now we describe the local models we use to make decisions for a given pair of adjacent discourse constituents in the bottom-up tree-building. There are two dimensions for our local models: (1) scope of the model: intra-or multi-sentential, and (2) purpose of the model: for determining structures or relations. So we have four local models, M struct intra , M rel intra , M struct multi , and M rel multi . While our bottom-up tree-building shares the greedy framework with HILDA, unlike HILDA, our local models are implemented using CRFs. In this way, we are able to take into account the sequential information from contextual discourse constituents, which cannot be naturally represented in HILDA with SVMs as local classifiers. Therefore, our model incorporates the strengths of both HILDA and Joty et al.'s model, i.e., the efficiency of a greedy parsing algorithm, and the ability to incorporate sequential information with CRFs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear-chain CRFs as Local models",
"sec_num": "4.1"
},
{
"text": "As shown by Feng and Hirst (2012), for a pair of discourse constituents of interest, the sequential information from contextual constituents is crucial for determining structures. Therefore, it is well motivated to use Conditional Random Fields (CRFs) (Lafferty et al., 2001) , which is a discriminative probabilistic graphical model, to make predictions for a sequence of constituents surrounding the pair of interest.",
"cite_spans": [
{
"start": 252,
"end": 275,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear-chain CRFs as Local models",
"sec_num": "4.1"
},
{
"text": "In this sense, our local models appear similar to Joty et al.'s non-greedy parsing models. However, the major distinction between our models and theirs is that we do not jointly model the structure and the relation; rather, we use two linear- Figure 3 : The work flow of our proposed discourse parser. In the figure, M intra and M multi stand for the intra-and multi-sentential bottom-up tree-building models, and P intra and P multi stand for the intra-and multi-sentential post-editing models. chain CRFs to model the structure and the relation separately. Although joint modeling has shown to be effective in various NLP and computer vision applications (Sutton et al., 2007; Yang et al., 2009; Wojek and Schiele, 2008) , our choice of using two separate models is for the following reasons:",
"cite_spans": [
{
"start": 657,
"end": 678,
"text": "(Sutton et al., 2007;",
"ref_id": "BIBREF19"
},
{
"start": 679,
"end": 697,
"text": "Yang et al., 2009;",
"ref_id": "BIBREF21"
},
{
"start": 698,
"end": 722,
"text": "Wojek and Schiele, 2008)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linear-chain CRFs as Local models",
"sec_num": "4.1"
},
{
"text": "First, it is not entirely appropriate to model the structure and the relation at the same time. For example, with respect to Figure 2 , it is unclear how the relation node R j is represented for a training instance whose structure node S j = 0, i.e., the units U j\u22121 and U j are disjoint. Assume a special relation NO-REL is assigned for R j . Then, in the tree-building process, we will have to deal with the situations where the joint model yields conflicting predictions: it is possible that the model predicts S j = 1 and R j = NO-REL, or vice versa, and we will have to decide which node to trust (and thus in some sense, the structure and the relation is no longer jointly modeled).",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Linear-chain CRFs as Local models",
"sec_num": "4.1"
},
{
"text": "Secondly, as a joint model, it is mandatory to use a dynamic CRF, for which exact inference is usually intractable or slow. In contrast, for linearchain CRFs, efficient algorithms and implementations for exact inference exist. Figure 4a shows our intra-sentential structure model M struct intra in the form of a linear-chain CRF. Similar to Joty et al.'s intra-sentential model, the first layer of the chain is composed of discourse constituents U j 's, and the second layer is composed of binary nodes S j 's to indicate the probability of merging adjacent discourse constituents.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 236,
"text": "Figure 4a",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Linear-chain CRFs as Local models",
"sec_num": "4.1"
},
{
"text": "All units in sentence at level i (a) Intra-sentential structure model M struct intra .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "S j-1 U j-1 U j-2 S j U j Structure sequence Adjacent units at level i U j+1 S j+1 S j-1 U j-3 S j+2 U j+2 C 1 C 2 C 3 (b) Multi-sentential structure model M struct",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "multi . C 1 , C 2 , and C 3 denote the three chains for predicting U j and U j+1 . At each step in the bottom-up tree-building process, we generate a single sequence E, consisting of U 1 ,U 2 , . . . ,U j , . . . ,U t , which are all the current discourse constituents in the sentence that need to be processed. For instance, initially, we have the sequence E 1 = {e 1 , e 2 , . . . , e m }, which are the EDUs of the sentence; after merging e 1 and e 2 on the second level, we have E 2 = {e 1:2 , e 3 , . . . , e m }; after merging e 4 and e 5 on the third level, we have E 3 = {e 1:2 , e 3 , e 4:5 , . . . , e m }, and so on. Because the structure model is the first component in our pipeline of local models, its accuracy is crucial. Therefore, to improve its accuracy, we enforce additional commonsense constraints in its Viterbi decoding. In particular, we disallow 1-1 transitions between adjacent labels (a discourse unit can be merged with at most one adjacent unit), and we disallow all-zero sequences (at least one pair must be merged).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "Since the computation of E i does not depend on a particular pair of constituents, we can use the same sequence E i to compute structural probabilities for all adjacent constituents. In contrast, Joty et al.'s computation of intra-sentential sequences depends on the particular pair of constituents: the sequence is composed of the pair in question, with other EDUs in the sentence, even if those EDUs have already been merged. Thus, different CRF chains have to be formed for different pairs of constituents. In addition to efficiency, our use of a single CRF chain for all constituents can better capture the sequential dependencies among context, by taking into account the information from partially built discourse constituents, rather than bottom-level EDUs only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structure sequence",
"sec_num": null
},
{
"text": "For multi-sentential parsing, where the smallest discourse units are single sentences, as argued by Joty et al. 2013, it is not feasible to use a long chain to represent all constituents, due to the fact that it takes O(T M 2 ) time to perform the forwardbackward exact inference on a chain with T units and an output vocabulary size of M, thus the overall complexity for all possible sequences in their model is O(M 2 n 3 ) 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "Instead, we choose to take a sliding-window approach to form CRF chains for a particular pair of constituents, as shown in Figure 4b . For example, suppose we wish to compute the structural probability for the pair U j\u22121 and U j , we form three chains, each of which contains two contextual constituents:",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 132,
"text": "Figure 4b",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "C 1 = {U j\u22123 ,U j\u22122 ,U j\u22121 ,U j }, C 2 = {U j\u22122 ,U j\u22121 ,U j ,U j+1 },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "C 3 = {U j\u22121 ,U j ,U j+1 ,U j+2 }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "We then find the chain C t , 1 \u2264 t \u2264 3, with the highest joint probability over the entire sequence, and assign its marginal probability P(S t j = 1) to P(S j = 1). Similar to M struct intra , for M struct multi , we also include additional constraints in the Viterbi decoding, by disallowing transitions between two ones, and disallowing the sequence to be all zeros if it contains all the remaining constituents in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential structure model",
"sec_num": "4.2.2"
},
{
"text": "The intra-sentential relation model M rel intra , shown in Figure 5a , works in a similar way to M struct intra , as described in Section 4.2.1. The linear-chain CRF contains a first layer of all discourse constituents U j 's in the sentence on level i, and a second layer of relation nodes R j 's to represent the relation between a pair of discourse constituents. However, unlike the structure model, adjacent relation nodes do not share discourse constituents on the first layer. Rather, each relation node R j attempts to model the relation of one single constituent U j , by taking U j 's left and right subtrees U j,L and U j,R as its first-layer nodes; if U j is a single EDU, then the first-layer node of R j is simply U j , and R j is a special relation symbol LEAF 3 . Since we know, a priori, that the constituents in the chains are either leaf nodes or the ones that have been merged by our structure model, we never need to worry about the NO-REL issue as outlined in Section 4.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 68,
"text": "Figure 5a",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Intra-sentential relation model",
"sec_num": "4.3.1"
},
{
"text": "In the bottom-up tree-building process, after merging a pair of adjacent constituents using M struct intra into a new constituent, say U j , we form a chain consisting of all current constituents in the sentence to decide the relation label for U j , i.e., the R j node in the chain. In fact, by performing inference on this chain, we produce predictions not only for R j , but also for all other R nodes in the chain, which correspond to all other constituents in the sentence. Since those non-leaf constituents are already labeled in previous steps in the tree-building, we can now re-assign their relations if the model predicts differently in this step. Therefore, this re-labeling procedure can compensate for the loss of accuracy caused by our greedy bottom-up strategy to some extent. Figure 5b shows our multi-sentential relation model. Like M rel intra , the first layer consists of adjacent discourse units, and the relation nodes on the second layer model the relation of each constituent separately.",
"cite_spans": [],
"ref_spans": [
{
"start": 792,
"end": 801,
"text": "Figure 5b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Intra-sentential relation model",
"sec_num": "4.3.1"
},
{
"text": "Similar to M struct multi introduced in Section 4.2.2, M rel multi also takes a sliding-window approach to predict labels for constituents in a local context. For a constituent U j to be predicted, we form three chains, and use the chain with the highest joint probability to assign or re-assign relations to constituents in that chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential relation model",
"sec_num": "4.3.2"
},
{
"text": "at level i R1 U1,R U1,L R2 U2 Rj Uj,R Uj,L Rt Ut,R Ut,L (a) Intra-sentential relation model M rel intra . Relation sequence Adjacent units at level i R1 Uj-2,R Uj-2,L Rj-1 Uj-1 Rj Uj,R Uj,L Rj+1 Uj+1,R Uj+1,L Rj+2 Uj+2 C 1 C 2 C 3 (b) Multi-sentential relation model M rel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation sequence",
"sec_num": null
},
{
"text": "multi . C 1 , C 2 , and C 3 denote the three sliding windows for predicting U j,L and U j,R . ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation sequence",
"sec_num": null
},
{
"text": "After an intra-or multi-sentential discourse tree is fully built, we perform a post-editing to consider possible modifications to the current tree, by considering useful information from the discourse constituents on upper levels, which is unavailable in the bottom-up tree-building process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "The motivation for post-editing is that, some particular discourse relations, such as TEXTUAL-ORGANIZATION, tend to occur on the top levels of the discourse tree; thus, information such as the depth of the discourse constituent can be quite indicative. However, the exact depth of a discourse constituent is usually unknown in the bottom-up tree-building process; therefore, it might be beneficial to modify the tree by including top-down information after the tree is fully built.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "The process of post-editing is shown in Algorithm 1. For each input discourse tree T , which is already fully built by bottom-up tree-building models, we do the following: Lines 3 -9: Identify the lowest level of T on which the constituents can be modified according to the post-editing structure component, P struct . To do so, we maintain a list L to store the discourse constituents that need to be examined. Initially, L consists of all the bottom-level constituents in T . At each step of the loop, we consider merging the pair of adjacent units in L with the highest probability predicted by P struct . If the predicted pair is not merged in the original tree T , then a possible modification is located; otherwise, we merge the pair, and proceed to the next iteration. Lines 10 -12: If modifications have been proposed in the previous step, we build a new tree Algorithm 1 Post-editing algorithm. Input: A fully built discourse tree T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "if |T | = 1 then 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "return T Do nothing if it is a single EDU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "3: L \u2190 [U 1 ,U 2 , . . . ,U t ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "The bottom-level constituents in T . 4: while |L| > 2 do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "5: i \u2190 PREDICTMERGING(L, P struct ) 6: p \u2190 PARENT(L[i], L[i + 1], T ) 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "if p = NULL then ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "L \u2190 [U 1 ,U 2 , . . . ,U t ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
{
"text": "12: T p \u2190 BUILDTREE(L, P struct , P rel , T ) Output: T p T p using P struct as the structure model, and P rel as the relation model, from the constituents on which modifications are proposed. Otherwise, T p is built from the bottom-level constituents of T . The upper-level information, such as the depth of a discourse constituent, is derived from the initial tree T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-editing",
"sec_num": "5"
},
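The control flow of Algorithm 1 can be sketched in Python. Everything below is our own illustration, not the authors' code: a discourse tree is encoded as nested 2-tuples with string EDUs as leaves, and `predict_merging` / `build_right_branching` are hypothetical stand-ins for the paper's P^struct-based merge predictor and BUILDTREE.

```python
# Sketch of Algorithm 1 (post-editing): walk up from the EDUs, merging
# pairs in the order the structure model prefers, until the model's
# preferred merge disagrees with the original tree.

def leaves(tree):
    """Bottom-level constituents (EDUs) of a nested-tuple tree."""
    if isinstance(tree, str):
        return [tree]
    left, right = tree
    return leaves(left) + leaves(right)

def parent(u1, u2, tree):
    """Return the node of `tree` that merges exactly (u1, u2), else None."""
    if isinstance(tree, str):
        return None
    left, right = tree
    if left == u1 and right == u2:
        return tree
    return parent(u1, u2, left) or parent(u1, u2, right)

def post_edit(tree, predict_merging, build_tree):
    units = leaves(tree)
    if len(units) == 1:
        return tree                   # lines 1-2: a single EDU, nothing to edit
    while len(units) > 2:             # lines 4-9
        i = predict_merging(units)    # most probable adjacent pair
        p = parent(units[i], units[i + 1], tree)
        if p is None:                 # pair absent from the original tree:
            break                     # lowest modification level located
        units[i:i + 2] = [p]          # pair agrees: merge and move up a level
    return build_tree(units, tree)    # lines 10-12: rebuild from this level

def build_right_branching(units, tree):
    """Hypothetical stand-in for BUILDTREE: merge right-branching."""
    node = units[-1]
    for u in reversed(units[:-1]):
        node = (u, node)
    return node
```

With a predictor that agrees with the tree, the original structure is reproduced; with one that prefers a pair the tree never merged, rebuilding starts from the level where the disagreement was found.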
{
"text": "The local models, P {struct|rel} {intra|multi} , for post-editing is almost identical to their counterparts of the bottom-up tree-building, except that the linearchain CRFs in post-editing includes additional features to represent information from constituents on higher levels (to be introduced in Section 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local models",
"sec_num": "5.1"
},
{
"text": "Here we analyze the time complexity of each component in our discourse parser, to quantitatively demonstrate the time efficiency of our model. The following analysis is focused on the bottom-up tree-building process, but a similar analysis can be carried out for the post-editing process. Since the number of operations in the post-editing process is roughly the same (1.5 times in the worst case) as in the bottom-up tree-building, post-editing shares the same complexity as the tree-building.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear time complexity",
"sec_num": "6"
},
{
"text": "Suppose the input document is segmented into n sentences, and each sentence S k contains m k EDUs. For each sentence S k with m k EDUs, the overall time complexity to perform intra-sentential parsing is O(m 2 k ). The reason is the following. On level i of the bottom-up tree-building, we generate a single chain to represent the structure or relation for all the m k \u2212 i constituents that are currently in the sentence. The time complexity for performing forward-backward inference on the single chain is O((m k \u2212 i) \u00d7 M 2 ) = O(m k \u2212 i), where the constant M is the size of the output vocabulary. Starting from the EDUs on the bottom level, we need to perform inference for one chain on each level during the bottom-up tree-building, and thus the total time complexity is \u03a3 m k i=1 O(m k \u2212 i) = O(m 2 k ). The total time to generate sentence-level discourse trees for n sentences is \u03a3 n k=1 O(m 2 k ). It is fairly safe to assume that each m k is a constant, in the sense that m k is independent of the total number of sentences in the document. Therefore, the total time complexity \u03a3 n k=1 O(m 2 k ) \u2264 n \u00d7 O(max 1\u2264 j\u2264n (m 2 j )) = n \u00d7 O(1) = O(n), i.e., linear in the total number of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intra-sentential parsing",
"sec_num": "6.1"
},
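The per-sentence summation can be checked numerically. The sketch below is our own; the cost model follows the text (level i of a sentence with m EDUs costs (m − i)·M²), and M = 41 is only an illustrative output-vocabulary size.

```python
M = 41  # illustrative output-vocabulary size, not a value from the paper

def intra_sentence_ops(m):
    """Chain-inference cost for one sentence with m EDUs:
    level i costs (m - i) * M^2, summed over i = 1 .. m."""
    return sum((m - i) * M * M for i in range(1, m + 1))

def document_ops(edu_counts):
    """Total intra-sentential cost for a document,
    given the number of EDUs in each sentence."""
    return sum(intra_sentence_ops(m) for m in edu_counts)

# Per sentence the cost is quadratic in m (Sigma (m - i) = m(m - 1) / 2),
# but with m bounded by a constant, the document cost grows linearly in n.
```

Doubling the number of sentences (at fixed EDUs per sentence) doubles the total cost, which is exactly the O(n) claim.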
{
"text": "For multi-sentential models, M struct multi and M rel multi , as shown in Figures 4b and 5b , for a pair of constituents of interest, we generate multiple chains to predict the structure or the relation.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 91,
"text": "Figures 4b and 5b",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Multi-sentential parsing",
"sec_num": "6.2"
},
{
"text": "By including a constant number k of discourse units in each chain, and considering a constant number l of such chains for computing each adjacent pair of discourse constituents (k = 4 for M struct multi and k = 3 for M rel multi ; l = 3), we have an overall time complexity of O(n). The reason is that it takes l \u00d7 O(kM 2 ) = O(1) time, where l, k, M are all constants, to perform exact inference for a given pair of adjacent constituents, and we need to perform such computation for all n \u2212 1 pairs of adjacent sentences on the first level of the treebuilding. Adopting a greedy approach, on an arbitrary level during the tree-building, once we decide to merge a certain pair of constituents, say U j and U j+1 , we only need to recompute a small number of chains, i.e., the chains which originally include U j or U j+1 , and inference on each chain takes O(1). Therefore, the total time complexity is (n \u2212 1) \u00d7 O(1) + (n \u2212 1) \u00d7 O(1) = O(n), where the first term in the summation is the complexity of computing all chains on the bottom level, and the second term is the complexity of computing the constant number of chains on higher levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential parsing",
"sec_num": "6.2"
},
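The O(n) bound for the multi-sentential stage can likewise be verified by counting chain computations in the greedy loop. The counter below is our own sketch: all adjacent pairs are scored once on the bottom level, and each merge triggers recomputation of only the pairs touching the merged node, with l = 3 chains per pair as stated in the text.

```python
L_CHAINS = 3  # chains per adjacent pair (l = 3 in the text)

def greedy_chain_computations(n):
    """Count chain inferences for a greedy bottom-up pass over n units:
    score all n - 1 adjacent pairs once, then after each merge recompute
    only the (at most 2) pairs that involve the newly merged node."""
    count = (n - 1) * L_CHAINS        # bottom level: all adjacent pairs
    units = n
    while units > 1:
        units -= 1                    # merge the best pair
        count += min(2, units - 1) * L_CHAINS
    return count
```

The count is bounded by a constant multiple of n, i.e., linear in the number of sentences.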
{
"text": "We have thus showed that the time complexity is linear in n, which is the number of sentences in the document. In fact, under the assumption that the number of EDUs in each sentence is independent of n, it can be shown that the time complexity is also linear in the total number of EDUs 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-sentential parsing",
"sec_num": "6.2"
},
{
"text": "In our local models, to encode two adjacent units, U j and U j+1 , within a CRF chain, we use the following 10 sets of features, some of which are modified from Joty et al.'s model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7"
},
{
"text": "Organization features: Whether U j (or U j+1 ) is the first (or last) constituent in the sentence (for intra-sentential models) or in the document (for multi-sentential models); whether U j (or U j+1 ) is a bottom-level constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7"
},
{
"text": "Textual structure features: Whether U j contains more sentences (or paragraphs) than U j+1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "7"
},
{
"text": "The beginning (or end) lexical n-grams in each unit; the beginning (or end) POS n-grams in each unit, where n \u2208 {1, 2, 3}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram features:",
"sec_num": null
},
{
"text": "Dominance features: The PoS tags of the head node and the attachment node; the lexical heads of the head node and the attachment node; the dominance relationship between the two units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram features:",
"sec_num": null
},
{
"text": "The feature vector of the previous and the next constituent in the chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual features:",
"sec_num": null
},
{
"text": "The root node of the left and right discourse subtrees of each unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substructure features:",
"sec_num": null
},
{
"text": "Syntactic features: whether each unit corresponds to a single syntactic subtree, and if so, the top PoS tag of the subtree; the distance of each unit to their lowest common ancestor in the syntax tree (intra-sentential only).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Substructure features:",
"sec_num": null
},
{
"text": "The type and the number of entity transitions across the two units. We adopt Barzilay and Lapata (2008) 's entitybased local coherence model to represent a document by an entity grid, and extract local transitions among entities in continuous discourse constituents. We use bigram and trigram transitions with syntactic roles attached to each entity.",
"cite_spans": [
{
"start": 77,
"end": 103,
"text": "Barzilay and Lapata (2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity transition features:",
"sec_num": null
},
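The entity-transition features can be illustrated with a toy entity grid in the style of Barzilay and Lapata's model. The grid, the role labels (S, O, X, '-'), and the counting function below are our own example, not the paper's implementation.

```python
from collections import Counter

def transition_ngrams(grid, n=2):
    """Count role-transition n-grams per entity across adjacent
    discourse constituents; grid maps entity -> one role per unit."""
    counts = Counter()
    for roles in grid.values():
        for j in range(len(roles) - n + 1):
            counts[tuple(roles[j:j + n])] += 1
    return counts

# Toy grid over three consecutive discourse constituents:
grid = {
    "parser":  ["S", "S", "-"],  # subject in units 1-2, absent in unit 3
    "feature": ["-", "O", "X"],  # object in unit 2, other role in unit 3
}
```

The bigram counts here capture, e.g., that one entity stays in subject position across adjacent units, a transition the coherence model treats as evidence of local continuity.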
{
"text": "Cue phrase features: Whether a cue phrase occurs in the first or last EDU of each unit. The cue phrase list is based on the connectives collected by Knott and Dale (1994) Post-editing features: The depth of each unit in the initial tree.",
"cite_spans": [
{
"start": 149,
"end": 170,
"text": "Knott and Dale (1994)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity transition features:",
"sec_num": null
},
{
"text": "For pre-processing, we use the Stanford CoreNLP (Klein and Manning, 2003; de Marneffe et al., 2006; Recasens et al., 2013) to syntactically parse the texts and extract coreference relations, and we use Penn2Malt 5 to lexicalize syntactic trees to extract dominance features. For local models, our structure models are trained using MALLET (McCallum, 2002) to include constraints over transitions between adjacent labels, and our relation models are trained using CRFSuite (Okazaki, 2007) , which is a fast implementation of linear-chain CRFs.",
"cite_spans": [
{
"start": 48,
"end": 73,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF8"
},
{
"start": 74,
"end": 99,
"text": "de Marneffe et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 100,
"end": 122,
"text": "Recasens et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 332,
"end": 355,
"text": "MALLET (McCallum, 2002)",
"ref_id": null
},
{
"start": 472,
"end": 487,
"text": "(Okazaki, 2007)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "The data that we use to develop and evaluate our discourse parser is the RST Discourse Treebank (RST-DT) (Carlson et al., 2001) , which is a large corpus annotated in the framework of RST. The RST-DT consists of 385 documents (347 for training and 38 for testing) from the Wall Street Journal. Following previous work on the RST-DT (Hernault et al., 2010; Feng and Hirst, 2012; Joty et al., 2012; Joty et al., 2013) , we use 18 coarsegrained relation classes, and with nuclearity attached, we have a total set of 41 distinct relations. Non-binary relations are converted into a cascade of right-branching binary relations.",
"cite_spans": [
{
"start": 105,
"end": 127,
"text": "(Carlson et al., 2001)",
"ref_id": "BIBREF2"
},
{
"start": 332,
"end": 355,
"text": "(Hernault et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 356,
"end": 377,
"text": "Feng and Hirst, 2012;",
"ref_id": "BIBREF4"
},
{
"start": 378,
"end": 396,
"text": "Joty et al., 2012;",
"ref_id": "BIBREF6"
},
{
"start": 397,
"end": 415,
"text": "Joty et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "We compare four different models using manual EDU segmentation. In Table 1 , the jCRF model in the first row is the optimal CRF model proposed by Joty et al. (2013) . gSVM FH in the second row is our implementation of HILDA's greedy parsing algorithm using Feng and Hirst (2012)'s enhanced feature set. The third model, gCRF, represents our greedy CRF-based discourse parser, and the last row, gCRF PE , represents our parser with the postediting component included.",
"cite_spans": [
{
"start": 146,
"end": 164,
"text": "Joty et al. (2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing accuracy",
"sec_num": "9.1"
},
{
"text": "In order to conduct a direct comparison with Joty et al.'s model, we use the same set of eval- 77.7 65.8 N/A * : significantly better than gSVM FH (p < .01) \u2020: significantly better than gCRF (p < .01) Table 1 : Performance of different models using gold-standard EDU segmentation, evaluated using the constituent accuracy (%) for span, nuclearity, and relation. For relation, we also report the macro-averaged F1-score (MAFS) for correctly retrieved constituents (before the slash) and for all constituents (after the slash). Statistical significance is verified using Wilcoxon's signed-rank test.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing accuracy",
"sec_num": "9.1"
},
{
"text": "uation metrics, i.e., the unlabeled and labeled precision, recall, and F-score 6 as defined by Marcu (2000) . For evaluating relations, since there is a skewed distribution of different relation types in the corpus, we also include the macro-averaged F1-score (MAFS) 7 as another metric, to emphasize the performance of infrequent relation types. We report the MAFS separately for the correctly retrieved constituents (i.e., the span boundary is correct) and all constituents in the reference tree.",
"cite_spans": [
{
"start": 95,
"end": 107,
"text": "Marcu (2000)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing accuracy",
"sec_num": "9.1"
},
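MAFS weights every relation class equally rather than by frequency, which is why it highlights infrequent relation types. A minimal sketch of the metric (the label sets are toy examples, not RST-DT output):

```python
def macro_f1(gold, pred, classes):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores,
    so rare classes count as much as frequent ones."""
    f1s = []
    for c in classes:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

A single misclassification in a rare class moves the macro-average as much as many errors in a frequent one, unlike micro-averaged accuracy.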
{
"text": "As demonstrated by Table 1 , our greedy CRF models perform significantly better than the other two models. Since we do not have the actual output of Joty et al.'s model, we are unable to conduct significance testing between our models and theirs. But in terms of overall accuracy, our gCRF model outperforms their model by 1.5%. Moreover, with post-editing enabled, gCRF PE significantly (p < .01) outperforms our initial model gCRF by another 1% in relation assignment, and this overall accuracy of 58.2% is close to 90% of human performance. With respect to the macroaveraged F1-scores, adding the post-editing component also obtains about 1% improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing accuracy",
"sec_num": "9.1"
},
{
"text": "However, the overall MAFS is still at the lower end of 30% for all constituents. Our error analysis shows that, for two relation classes, TOPIC-CHANGE and TEXTUAL-ORGANIZATION, our model fails to retrieve any instance, and for TOPIC-COMMENT and EVALUATION, our model scores a class-wise F 1 score lower than 5%. These four relation classes, apart from their infrequency in the corpus, are more abstractly defined, and thus are particularly challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing accuracy",
"sec_num": "9.1"
},
{
"text": "We further illustrate the efficiency of our parser by demonstrating the time consumption of different models. First, as shown in Table 2 , the average number of sentences in a document is 26.11, which is already too large for optimal parsing models, e.g., the CKY-like parsing algorithm in jCRF, let alone the fact that the largest document contains several hundred of EDUs and sentences. Therefore, it should be seen that non-optimal models are required in most cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Parsing efficiency",
"sec_num": "9.2"
},
{
"text": "In Table 3 , we report the parsing time 8 for the last three models, since we do not know the time of jCRF. Note that the parsing time excludes the time cost for any necessary pre-processing. As can be seen, our gCRF model is considerably faster than gSVM FH , because, on one hand, feature computation is expensive in gSVM FH , since gSVM FH utilizes a rich set of features; on the other hand, in gCRF, we are able to accelerate decoding by multi-threading MALLET (we use four threads). Even for the largest document with 187 sentences, gCRF is able to produce the final tree after about 40 seconds, while jCRF would take over 16 hours assuming each DCRF decoding takes only 0.01 second. Although enabling post-editing doubles the time consumption, the overall time is still acceptable in practice, and the loss of efficiency can be compensated by the improvement in accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing efficiency",
"sec_num": "9.2"
},
{
"text": "Parsing Time (seconds) 10.71 0.12 84.72 Table 3 : The parsing time (in seconds) for the 38 documents in the test set of RST-DT. Time cost of any pre-processing is excluded from the analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In this paper, we presented an efficient text-level discourse parser with time complexity linear in the total number of sentences in the document. Our approach was to adopt a greedy bottomup tree-building, with two linear-chain CRFs as local probabilistic models, and enforce reasonable constraints in the first CRF's Viterbi decoding. While significantly outperforming the stateof-the-art model by Joty et al. (2013) , our parser is much faster in practice. In addition, we propose a novel idea of post-editing, which modifies a fully-built discourse tree by considering information from upper-level constituents. We show that, although doubling the time consumption, postediting can further boost the parsing performance to close to 90% of human performance. In future work, we wish to further explore the idea of post-editing, since currently we use only the depth of the subtrees as upper-level information. Moreover, we wish to study whether we can incorporate constraints into the relation models, as we do to the structure models. For example, it might be helpful to train the relation models using additional criteria, such as Generalized Expectation (Mann and McCallum, 2008) , to better take into account some prior knowledge about the relations. Last but not least, as reflected by the low MAFS in our experiments, some particularly difficult relation types might need specifically designed features for better recognition.",
"cite_spans": [
{
"start": 399,
"end": 417,
"text": "Joty et al. (2013)",
"ref_id": "BIBREF7"
},
{
"start": 1159,
"end": 1184,
"text": "(Mann and McCallum, 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "10"
},
{
"text": "The time complexity will be reduced to O(M 2 n 2 ), if we use the same chain for all constituents as in our M struct intra .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These leaf constituents are represented using a special feature vector is leaf = True; thus the CRF never labels them with relations other than LEAF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We implicitly made an assumption that the parsing time is dominated by the time to perform inference on CRF chains. However, for complex features, the time required for feature computation might be dominant. Nevertheless, a careful caching strategy can accelerate feature computation, since a large number of multi-sentential chains overlap with each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For manual segmentation, precision, recall, and F-score are the same.7 MAFS is the F1-score averaged among all relation classes by equally weighting each class. Therefore, we cannot conduct significance test between different MAFS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Tested on a Linux system with four duo-core 3.0GHz processors and 16G memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Professor Gerald Penn and the reviewers for their valuable advice and comments. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada and by the University of Toronto.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Probabilistic head-driven parsing for discourse structure",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Alex Lascarides. 2005. Proba- bilistic head-driven parsing for discourse structure. In Proceedings of the Ninth Conference on Compu- tational Natural Language Learning (CoNLL-2005), pages 96-103, Ann Arbor, Michigan, June. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling local coherence: an entity-based approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "1",
"pages": "1--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: an entity-based approach. Compu- tational Linguistics, 34(1):1-34.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Second SIGDial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged cor- pus in the framework of Rhetorical Structure The- ory. In Proceedings of Second SIGDial Workshop on Discourse and Dialogue (SIGDial 2001), pages 1-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Text-level discourse parsing with rich linguistic features",
"authors": [
{
"first": "Vanessa",
"middle": [],
"last": "Wei Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2012)",
"volume": "",
"issue": "",
"pages": "60--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (ACL 2012), pages 60-68, Jeju, Korea.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HILDA: A discourse parser using support vector machine classification",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue and Discourse",
"volume": "1",
"issue": "3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue and Discourse, 1(3):1-33.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A novel discriminative framework for sentence-level discourse analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "904--915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2012. A novel discriminative framework for sentence-level discourse analysis. In Proceed- ings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, EMNLP- CoNLL 2012, pages 904-915.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Combining intra-and multisentential rhetorical parsing for document-level discourse analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013)",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra-and multi- sentential rhetorical parsing for document-level dis- course analysis. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics (ACL 2013), pages 486-496, Sofia, Bul- garia, August.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (ACL 2003), ACL 2003",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Ac- curate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics (ACL 2003), ACL 2003, pages 423-430, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using linguistic phenomena to motivate a set of coherence relations",
"authors": [
{
"first": "Alistair",
"middle": [],
"last": "Knott",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 1994,
"venue": "Discourse Processes",
"volume": "18",
"issue": "",
"pages": "35--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair Knott and Robert Dale. 1994. Using linguistic phenomena to motivate a set of coherence relations. Discourse Processes, 18(1):35-64.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML 2001, pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generalized Expectation Criteria for semi-supervised learning of Conditional Random Fields",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2008)",
"volume": "",
"issue": "",
"pages": "870--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and Andrew McCallum. 2008. Gen- eralized Expectation Criteria for semi-supervised learning of Conditional Random Fields. In Proceed- ings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2008), pages 870-878, Colum- bus, Ohio, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "William",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Mann and Sandra Thompson. 1988. Rhetor- ical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 2000. The Theory and Practice of Dis- course Parsing and Summarization. The MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "MAL-LET: A machine learning for language toolkit",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum. 2002. MAL- LET: A machine learning for language toolkit. http://mallet.cs.umass.edu.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Constrained decoding for text-level discourse parsing",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2012,
"venue": "The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1883--1900",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Muller, Stergos Afantenos, Pascal Denis, and Nicholas Asher. 2012. Constrained decoding for text-level discourse parsing. In Proceedings of COLING 2012, pages 1883-1900, Mumbai, India, December. The COLING 2012 Organizing Commit- tee.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CRFsuite: a fast implementation of conditional random fields (CRFs)",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki. 2007. CRFsuite: a fast im- plementation of conditional random fields (CRFs). http://www.chokkan.org/software/crfsuite/.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The life and death of discourse entities: Identifying singleton mentions",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "627--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of discourse entities: Identifying singleton mentions. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 627-633, Atlanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An effective discourse parser that uses rich linguistic information",
"authors": [
{
"first": "Rajen",
"middle": [],
"last": "Subba",
"suffix": ""
},
{
"first": "Barbara",
"middle": [
"Di"
],
"last": "Eugenio",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "566--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajen Subba and Barbara Di Eugenio. 2009. An effective discourse parser that uses rich linguistic information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566-574, Boulder, Colorado, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Khashayar",
"middle": [],
"last": "Rohanimanesh",
"suffix": ""
}
],
"year": 2007,
"venue": "The Journal of Machine Learning Research",
"volume": "8",
"issue": "",
"pages": "693--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. The Journal of Machine Learning Research, 8:693-723, May.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A dynamic conditional random field model for joint labeling of object and scene classes",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Wojek",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2008,
"venue": "European Conference on Computer Vision (ECCV 2008)",
"volume": "",
"issue": "",
"pages": "733--747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Wojek and Bernt Schiele. 2008. A dynamic conditional random field model for joint labeling of object and scene classes. In European Conference on Computer Vision (ECCV 2008), pages 733-747, Marseille, France.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Combining a two-step conditional random field model and a joint source channel model for machine transliteration",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Yi-Cheng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Tasuku",
"middle": [],
"last": "Oonishi",
"suffix": ""
},
{
"first": "Masanobu",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Sadaoki",
"middle": [],
"last": "Furui",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration",
"volume": "",
"issue": "",
"pages": "72--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Yang, Paul Dixon, Yi-Cheng Pan, Tasuku Oonishi, Masanobu Nakamura, and Sadaoki Furui. 2009. Combining a two-step conditional random field model and a joint source channel model for machine transliteration. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009), pages 72-75, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example text fragment composed of two sentences and four EDUs, with its RST discourse tree representation shown below.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Joty et al. (2013)'s intra-sentential Conditional Random Fields.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Structure models 4.2.1 Intra-sentential structure model",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Local structure models.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Local relation models.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "i] and L[i + 1] with p 10: if |L| = 2 then 11:",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td/><td/><td>wsj 1146</td></tr><tr><td/><td/><td>e 1:4</td><td/></tr><tr><td/><td/><td>sequence</td><td/></tr><tr><td>e 1:2</td><td/><td>e 3:4</td><td/></tr><tr><td colspan=\"2\">consequence</td><td colspan=\"2\">circumstance</td></tr><tr><td>e 1</td><td>e 2</td><td>e 3</td><td>e 4</td></tr></table>",
"num": null,
"text": "[On Aug. 1, the state tore up its controls,]e 1 [and food prices leaped]e 2 [Without buffer stocks,]e 3 [inflation exploded.]e 4"
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">Model Span</td><td>Nuc</td><td/><td>Relation</td></tr><tr><td/><td/><td/><td>Acc</td><td>MAFS</td></tr><tr><td>jCRF</td><td>82.5</td><td>68.4</td><td>55.7</td><td>N/A</td></tr><tr><td colspan=\"2\">gSVM FH 82.8</td><td>67.1</td><td>52.0</td><td>27.4/23.3</td></tr><tr><td colspan=\"4\">gCRF 57.2 Human 88.7 84.9 * 69.9 *</td><td/></tr></table>",
"num": null,
"text": "http://stp.lingfil.uu.se/~nivre/research/Penn2Malt.html. 35.3/31.3 gCRF PE 85.7 * \u2020 71.0 * \u2020 58.2 * \u2020 36.2/32.3"
},
"TABREF3": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Characteristics of the 38 documents in the test data."
}
}
}
}