{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:42:36.331974Z"
},
"title": "Chinese Discourse Parsing: Model and Evaluation",
"authors": [
{
"first": "Chuan-An",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "calin@nlg.csie.ntu.edu.tw"
},
{
"first": "Shyh-Shiun",
"middle": [],
"last": "Hung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "shhung@nlg.csie.ntu.edu.tw"
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chengchi University",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "hhhuang@nccu.edu.tw"
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "hhchen@ntu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Chinese discourse parsing, which aims to identify the hierarchical relationships of Chinese elementary discourse units, has not yet a consistent evaluation metric. Although Parseval is commonly used, variations of evaluation differ from three aspects: micro vs. macro F1 scores, binary vs. multiway ground truth, and left-heavy vs. right-heavy binarization. In this paper, we first propose a neural network model that unifies a pre-trained transformer and CKY-like algorithm, and then compare it with the previous models with different evaluation scenarios. The experimental results show that our model outperforms the previous systems. We conclude that (1) the pre-trained context embedding provides effective solutions to deal with implicit semantics in Chinese texts, and (2) using multiway ground truth is helpful since different binarization approaches lead to significant differences in performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Chinese discourse parsing, which aims to identify the hierarchical relationships of Chinese elementary discourse units, has not yet a consistent evaluation metric. Although Parseval is commonly used, variations of evaluation differ from three aspects: micro vs. macro F1 scores, binary vs. multiway ground truth, and left-heavy vs. right-heavy binarization. In this paper, we first propose a neural network model that unifies a pre-trained transformer and CKY-like algorithm, and then compare it with the previous models with different evaluation scenarios. The experimental results show that our model outperforms the previous systems. We conclude that (1) the pre-trained context embedding provides effective solutions to deal with implicit semantics in Chinese texts, and (2) using multiway ground truth is helpful since different binarization approaches lead to significant differences in performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As mentioned by Rhetorical Structure Theory (Mann and Thompson, 1988) , a discourse is composed of elementary discourse units (EDUs), which can be formed into a hierarchical structure by relating each other with discourse relations. This kind of discourse parsing tree provides a deep understanding of an article and benefits downstream NLP tasks. The majority of Chinese discourse relation is \"implicit\", lacking obvious lexical cues to discriminate the relation type (Li et al., 2014b ). It makes the task of Chinese discourse parsing more challenging as the parser has to catch the implicit meaning from the text instead of relying on lexical information. According to the Connective-Driven Dependency Tree (CDT) scheme (Li et al., 2014b) , there are four subtasks in Chinese discourse parsing, including EDU segmentation, tree structure construction, relation sense labeling, and relation center labeling. Figure 1 illustrates an example of Chinese discourse parsing tree. Note that a coordination relation may have more than two arguments while the other kinds of relations are always binary. Details of Chinese discourse parsing can be found in (Li et al., 2014b) . The CDT annotates the hierarchical discourse structure of a given article, which is different from the PDTB-style scheme proposed by (Zhou and Xue, 2012) . Although the annotation of the Chinese Discourse Tree-Bank (CDTB) is well-defined, the evaluation is divergent. Generally, PARSEVAL (Black et al., 1991) is used to evaluate the quality of a predicted parsing tree. For a golden standard discourse tree, we have a set of non-leaf nodes N = {n 1 , n 2 , ..., n k }. We also have a set of text spans T = {t 1 , t 2 , ..., t k } where n i dominates t i for all i (e.g., the node B in Figure 1 dominates the text spanning from EDU 1 to EDU 3, so if we use n j to represent node B, then t j should represent the text span from EDU 1 to 3). 
Similarly, for a predicted discourse parsing tree, we have non-leaf nodes N = {n 1 , n 2 , ..., n h } and text spans 1. \u4ec5\u53bb\u5e74\u4e2d\u56fd\u94f6\u884c\u5c31\u7d2f\u8ba1\u5411\u5916\u5546\u6295\u8d44\u4f01\u4e1a\u63d0\u4f9b\u4e86\u516d\u767e\u4e5d\u5341\u591a \u4ebf\u5143\u7684\u4eba\u6c11\u5e01\u8d37\u6b3e\uff0c Last year alone, the Bank of China provided more than 6.9 billion yuan in RMB loans to foreign-invested enterprises.",
"cite_spans": [
{
"start": 44,
"end": 69,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF11"
},
{
"start": 469,
"end": 486,
"text": "(Li et al., 2014b",
"ref_id": "BIBREF8"
},
{
"start": 723,
"end": 741,
"text": "(Li et al., 2014b)",
"ref_id": "BIBREF8"
},
{
"start": 1151,
"end": 1169,
"text": "(Li et al., 2014b)",
"ref_id": "BIBREF8"
},
{
"start": 1305,
"end": 1325,
"text": "(Zhou and Xue, 2012)",
"ref_id": "BIBREF16"
},
{
"start": 1460,
"end": 1480,
"text": "(Black et al., 1991)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 910,
"end": 918,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1757,
"end": 1765,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In addition, it also issued foreign exchange loans of more than US$4 billion to foreign-invested enterprises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u53e6\u5916\u8fd8\u5411\u5916\u5546\u6295\u8d44\u4f01\u4e1a\u53d1\u653e\u5916\u6c47\u73b0\u6c47\u8d37\u6b3e\u56db\u5341\u591a\u4ebf\u7f8e\u5143\uff0c",
"sec_num": "2."
},
{
"text": "These loans focus on supporting basic raw materials, chemicals, machinery and other industries. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u8fd9\u4e9b\u8d37\u6b3e\u91cd\u70b9\u652f\u6301\u57fa\u7840\u539f\u6750\u6599\u3001\u5316\u5de5\u3001\u673a\u68b0\u7b49\u884c\u4e1a\u3002",
"sec_num": "3."
},
{
"text": "T = {t 1 , t 2 , ..., t h }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u8fd9\u4e9b\u8d37\u6b3e\u91cd\u70b9\u652f\u6301\u57fa\u7840\u539f\u6750\u6599\u3001\u5316\u5de5\u3001\u673a\u68b0\u7b49\u884c\u4e1a\u3002",
"sec_num": "3."
},
{
"text": "Assuming that we do not consider the relation label of each node, the recall, precision and F1 score can thus be calculated following the PARSEVAL criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u8fd9\u4e9b\u8d37\u6b3e\u91cd\u70b9\u652f\u6301\u57fa\u7840\u539f\u6750\u6599\u3001\u5316\u5de5\u3001\u673a\u68b0\u7b49\u884c\u4e1a\u3002",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Recall = |{t |t \u2208 T,t \u2208 T }| |T | (1) P recision = |{t |t \u2208 T,t \u2208 T }| |T | (2) F 1 = 2 \u2022 Recall \u2022 P recision Recall + P recision",
"eq_num": "(3)"
}
],
"section": "\u8fd9\u4e9b\u8d37\u6b3e\u91cd\u70b9\u652f\u6301\u57fa\u7840\u539f\u6750\u6599\u3001\u5316\u5de5\u3001\u673a\u68b0\u7b49\u884c\u4e1a\u3002",
"sec_num": "3."
},
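The unlabeled PARSEVAL computation in Equations (1)-(3) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; a tree is represented only by the set of (start, end) EDU spans its non-leaf nodes dominate, and all names are chosen for this example:

```python
def parseval_scores(gold_spans, pred_spans):
    """PARSEVAL-style unlabeled scores: a predicted non-leaf node is
    correct when the text span it dominates also appears in the gold tree."""
    gold, pred = set(gold_spans), set(pred_spans)
    matched = len(pred & gold)
    recall = matched / len(gold) if gold else 0.0
    precision = matched / len(pred) if pred else 0.0
    f1 = (2 * recall * precision / (recall + precision)
          if recall + precision else 0.0)
    return recall, precision, f1

# Spans are (start EDU, end EDU) pairs dominated by non-leaf nodes.
gold = [(1, 3), (1, 2)]  # gold tree: root over EDUs 1-3, child over EDUs 1-2
pred = [(1, 3), (2, 3)]  # predicted tree groups EDUs 2-3 instead
r, p, f1 = parseval_scores(gold, pred)
```

A micro F1 over a whole test set would pool the matched/total span counts across all trees before dividing, while a macro F1 simply averages the per-tree F1 values returned here.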
{
"text": "Based on this metric, there are the following evaluation scenarios: Micro vs. macro F1 scores: a micro F1 is computed by taking each node across all test examples into account at once, while a macro F1 is simply the averaged F1 of all predicted parsing trees. Binary vs. multiway ground truth: Since both transitionbased and chart-based models often construct binary discourse parsing trees, we have to consider whether to use the original multiway golden standard tree or a binarized version for comparison. While only Coordination relation may have more than two arguments with equal weight, multinucleus relations may exist, e.g., about 9% in the experimental corpus of this paper. The choice of ground truth type may thus significantly affect the fairness of evaluation. right-heavy vs. left-heavy binarization: We need to choose the way of binarization even early in the preprocessing stage due to the same limitation for the models to learn and predict binary structures. When evaluating, we either binarize the multiway golden standard tree for comparison or try to reverse the predicted tree to a multiway tree. Therefore, the choice of binarization affects not only evaluation but also model training. Figure 2 illustrates the two ways of binarization. In the left-heavy version, the children of a multiway node M are re-merged from left to right. The left two children are merged to form a new binary node recursively until M also becomes binary. Right-heavy binarization is the reversed version of the left-heavy one where the children are re-merged from right to left. In this work, we develop a neural network model with pretrained context embedding to learn implicit semantics in Chinese discourse. Further, we directly compare our model to prior works with divergent evaluation scenarios. Our experiments lead to two main contributions: 1. Our model successfully utilizes the pretrained transformer to reach state-of-the-art performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 1211,
"end": 1219,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "\u8fd9\u4e9b\u8d37\u6b3e\u91cd\u70b9\u652f\u6301\u57fa\u7840\u539f\u6750\u6599\u3001\u5316\u5de5\u3001\u673a\u68b0\u7b49\u884c\u4e1a\u3002",
"sec_num": "3."
},
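The two binarization policies described above (and shown in Figure 2) can be sketched as follows. The (label, children) tuple encoding of a tree is an assumption made purely for illustration:

```python
def binarize(tree, right_heavy=False):
    """Recursively re-merge the children of multiway nodes into binary
    nodes: left-to-right for left-heavy, right-to-left for right-heavy.
    A tree is a (label, [children]) pair; a leaf is a plain string (an EDU)."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    children = [binarize(c, right_heavy) for c in children]
    if right_heavy:
        while len(children) > 2:
            # merge the rightmost two children into a new binary node
            children[-2:] = [(label, children[-2:])]
    else:
        while len(children) > 2:
            # merge the leftmost two children into a new binary node
            children[:2] = [(label, children[:2])]
    return (label, children)

m = ("Coordination", ["EDU1", "EDU2", "EDU3"])
left = binarize(m)                     # left-heavy: (EDU1+EDU2) first
right = binarize(m, right_heavy=True)  # right-heavy: (EDU2+EDU3) first
```

Because the merged subtrees differ between the two policies, a span-based PARSEVAL score against a binarized reference depends on which policy produced that reference, which is the fairness issue raised above.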
{
"text": "2. We give suggestions for future researches based on our analysis of different scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "left heavy right heavy",
"sec_num": null
},
{
"text": "Several benchmark datasets have been used to develop discourse parsers for English and Chinese respectively. For English, the most commonly used one is the Rhetorical Structure Theory Discourse Treebank (RST-DT) (Carlson et al., 2001 ). RST-DT follows the Rhetorical Structure Theory (RST) and is annotated from 385 Wall Street Journal articles. For Chinese, the Chinese discourse treebank (CDTB) (Li et al., 2014b ) is a hierarchically annotated corpus. We will use this corpus to conduct our experiments. CDTB follows the CDT scheme, where 500 Xinhua newswire documents selected from Chinese Treebank are annotated. Although many works have been done on RST-DT (Yu et al., 2018) 2018propose a unified framework based on recursive neural network to jointly parse the EDUs and the discourse structure with a probabilistic CKYlike algorithm. All the three proposed models construct binary parsing trees and thus require either a de-binarization step or binarizing ground truth for comparison. Morey et al. (2017) note that there is a discrepancy in evaluation among different works on RST-DT even though these works are also based on PARSEVAL. They thus reproduce these methods to make direct comparisons. For CDTB, there are two main branches of evaluation scenarios. Kong and Zhou (2017) BERT has been proved to perform prominently well on many NLP tasks such as language understanding, question answering, and commonsense inference (Devlin et al., 2018) . The pre-trained model is also suitable for tasks that have to understand the in-depth meaning of language but with training data of small scale like RST-DT or CDTB.",
"cite_spans": [
{
"start": 212,
"end": 233,
"text": "(Carlson et al., 2001",
"ref_id": "BIBREF18"
},
{
"start": 397,
"end": 414,
"text": "(Li et al., 2014b",
"ref_id": "BIBREF8"
},
{
"start": 663,
"end": 680,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 992,
"end": 1011,
"text": "Morey et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 1268,
"end": 1288,
"text": "Kong and Zhou (2017)",
"ref_id": "BIBREF5"
},
{
"start": 1434,
"end": 1455,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2."
},
{
"text": "This work is improved from our previous work (Lin et al., 2018) , and the resulting framework is shown in Figure 3 . We modify the original RvNN-based CKY-like construction process to be a new CKY phase (the cycle in the middle of Figure 3 ). Our core motivation is to utilize the pretrained neural transformers in the discourse tree construction process while keeping the new model still compatible with the original training procedure. Besides, we test different versions of binarization and de-binarization for comparisons. We discuss these two parts of our model in the following subsections.",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Lin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 231,
"end": 239,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3."
},
{
"text": "After segmenting by punctuation from a given paragraph, we have a series of text segments S = {s 1 , s 2 , ..., s n }. Let sp i,j denote the text span ranging from s i to s j . In each iteration of the CKY phase, given (i, j, k), 1 \u2264 i \u2264 j < k \u2264 n, we apply the transformer to calculate the representation of text span sp i,j and sp j+1,k :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r i,j = T ransf ormer(sp i,j ) r j+1,k = T ransf ormer(sp j+1,k )",
"eq_num": "(4)"
}
],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "Let T L denote the candidate tree derived from the text span sp i,j , T R denote the candidate tree derived from sp j+1,k , and T M denote the tree derived from merging T L and T R . In other words, we relate the root nodes of T L and T R with a discourse relation to form a new tree T M . T M is thus a candidate tree of sp i,k , and T L and T R are the left child and the right child of the root of T M , respectively. We can calculate the probability of this candidate tree based on conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (T M ) = P (T M |T L , T R ) \u2022 P (T L , T R ) = P (T M |T L , T R ) \u2022 P (T L ) \u2022 P (T R )",
"eq_num": "(5)"
}
],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "where we assume T L and T R are independent since they correspond to disjoint text spans. T L , T R , P (T L ), and P (T R ) are stored in the CKY table for dynamic programming, so we just need to calculate the probability of merging T L and T R with our merge scorer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "P (T M |T L , T R ) = M ergeScorer(r i,j , r j+1,k ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CKY Phase",
"sec_num": "3.1."
},
{
"text": "In this way, our model can perform CKY-like dynamic programming on (i, j, k) to find the candidate tree with the highest probability. The sense classifier and the center classifier are used to label discourse relations as well as EDU as in (Lin et al., 2018) . We use BERT (Devlin et al., 2018) as the transformer, which is suitable for fine-tuning with rather uncomplicated neural networks. Note that the CKY phase is designed to simplify the original RvNN framework that calculates discourse representations recursively according to the tree structure. Therefore, BERT, along with its pretrained linguistic knowledge, can learn the underlying discourse structure itself with raw text segments as inputs. We apply the implementation of (Wolf et al., 2019) , which contains a Chinese version of pre-trained BERT.",
"cite_spans": [
{
"start": 240,
"end": 258,
"text": "(Lin et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 273,
"end": 294,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 737,
"end": 756,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CKY Phase",
"sec_num": "3.1."
},
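A minimal sketch of the CKY phase follows. The dynamic program realizes the decomposition P(T_M) = P(T_M | T_L, T_R) · P(T_L) · P(T_R) from Equations (5) and (6); the toy scorer merely stands in for the BERT-based merge scorer, and all names here are illustrative:

```python
def cky_parse(segments, merge_scorer):
    """Return (probability, tree) for the highest-probability binary tree
    over the segments, using P(T_M) = P(merge | T_L, T_R) * P(T_L) * P(T_R)
    in a CKY-style table."""
    n = len(segments)
    # table[(i, k)] stores the best (probability, tree) for the span s_i..s_k.
    table = {(i, i): (1.0, segments[i]) for i in range(n)}
    for width in range(1, n):
        for i in range(n - width):
            k = i + width
            best = None
            for j in range(i, k):  # try every split point
                p_left, t_left = table[(i, j)]
                p_right, t_right = table[(j + 1, k)]
                p = merge_scorer(t_left, t_right) * p_left * p_right
                if best is None or p > best[0]:
                    best = (p, (t_left, t_right))
            table[(i, k)] = best
    return table[(0, n - 1)]

def toy_scorer(left, right):
    # Pretend the scorer strongly prefers merging s2 with s3 first.
    return 0.9 if (left, right) == ("s2", "s3") else 0.5

prob, tree = cky_parse(["s1", "s2", "s3"], toy_scorer)
```

Storing the best subtree probability for every span keeps the search over all binary trees polynomial, exactly as in the classic CKY algorithm.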
{
"text": "We adopt different binarization and de-binarization approaches for preprocessing before training and evaluation. We first need to perform binarization before generating training instances. After the CKY phase, we need either to de-binarize the output binary parsing tree or to binarize the multiway golden standard tree for comparison. Similar to binarization, de-binarization can be left-heavy or right-heavy. For left-heavy de-binarization, we traverse the binary tree from the root node. If we find a node N labeled with discourse relation of coordination and with its arguments equally weighted, we check its left child L to see whether the discourse relation of L is labeled in the same way. If so, we merge N and L to be a new multiway node. We do it recursively until we cannot find such L. The rightheavy approach is the reverse version of the left-heavy one. The choice of left-heavy/right-heavy should be consistent between training and evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binarization and De-binarization",
"sec_num": "3.2."
},
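Left-heavy de-binarization as described above can be sketched like this. The (label, children) tuple encoding and the use of a plain relation label in place of the full coordination/equal-weight check are simplifying assumptions for illustration:

```python
def debinarize_left(tree, mergeable=("Coordination",)):
    """Left-heavy de-binarization: walking down from the root, whenever a
    node N and its left child L carry the same mergeable relation label,
    splice L's children into N; repeat until no such L remains.
    A tree is a (label, [children]) pair; a leaf is a plain string (an EDU)."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    children = list(children)
    while (label in mergeable
           and not isinstance(children[0], str)
           and children[0][0] == label):
        children[:1] = children[0][1]  # absorb the left child's children
    return (label, [debinarize_left(c, mergeable) for c in children])

b = ("Coordination", [("Coordination", ["EDU1", "EDU2"]), "EDU3"])
flat = debinarize_left(b)
```

Applied to a left-heavy-binarized Coordination node, this recovers the original multiway node, so binarization followed by matching de-binarization is lossless for such structures.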
{
"text": "We conduct our experiments in CDTB and split the data into a training set and a test set following the policy of (Lin et al., 2018) . We use cross-entropy loss function with 0.01 weighted L2 regularization for training. The learning rate is set to be 10 \u22126 , and the batch size is set to be 10.",
"cite_spans": [
{
"start": 113,
"end": 131,
"text": "(Lin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1."
},
{
"text": "The performances of our model compared to the previous works on two main evaluation scenarios are shown in Table 1 and Table 2 , respectively. While Table 1 stands for the evaluation scenario of micro F1 score, left-heavy binarization, and multiway ground truth, Table 2 corresponds to macro F1 score, right-heavy binarization, and binary ground truth. We denote the two variations of our models as Ours-L and Ours-R to represent the choice of left-heavy or right-heavy binarization while training. From both results, we can see that our model outperforms the previous works significantly. For comparing further the scores with gold EDUs or under an end-to-end parsing scenario, we can know that the improvements mainly come from better structure prediction of discourse parsing trees. These results show the effectiveness of the pretrained context and its ability to learn underlying discourse structures without explicit cues.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 126,
"text": "Table 1 and Table 2",
"ref_id": "TABREF3"
},
{
"start": 149,
"end": 156,
"text": "Table 1",
"ref_id": null
},
{
"start": 263,
"end": 270,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.2."
},
{
"text": "To fairly compare the binarization policy, we evaluate the performances of Our-L and Our-R under gold multiway ground truth. We can see from Table 3 that Ours-R outperforms Ours-L on almost all parts with micro or macro F1 scores. The gaps in macro scores are especially more significant than those on micro scores. It is known that under micro F1 evaluation, different paragraphs are weighted in proportion to the number of their nodes in the discourse trees, while each paragraph is equally weighted under macro evaluation. Therefore, we can infer that Our-R takes advantage of predicting local structures. This strength leads to higher performances in some paragraphs with small discourse trees. This explanation is supported by a better EDU score 93.5 of Ours-R compared to the 92.4 of Ours-L since EDUs, which are constructed from merging proper segments in the CKY-like process, can also be seen as local structures. Both Ours-L and Ours-R grasp the general structure of discourses, so the performance gap under macro evaluation is rather smaller. We further analyze the distribution of relation senses predicted by both models in Figure 4 and Figure 5 . We find that Ours-R is less biases to the majority of relation sense, which is Coordination. This tendency occurs even before the de-binarization process. We can see from Figure 6 that Ours-R's judgments lead to entirely higher performances on all relation senses. Overall, our experiments show that the right-heavy binarization policy makes the model learn to parse more effectively. Since different binarization choices fundamentally affect how models learn the knowledge about discourse structures, we suggest that multiway ground truth is more suitable for evaluation in order to allow future researches to explore different polices of binarization as well as debinarization. ",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1137,
"end": 1145,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1150,
"end": 1158,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1332,
"end": 1340,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "4.2."
},
{
"text": "In this work, we unify a pretrained neural context embedding and CKY-like construction algorithm to reach state-ofthe-art performance on Chinese discourse parsing. Further, we point out that current evaluation scenarios on CDTB are still divergent. By experimenting across different scenarios, we find that the choice of binarization is critical to the learning process. We thus suggest that multiway ground truth is more suitable for evaluation. For future work, we will continue exploring how underlying mechanisms of Chinese discourse structure interact with different parsing policies, from left-heavy/rightheavy binarization choice to more fundamental transitionbased/chart-based parsing approaches. We aim to conduct more extensive experiments to gain more insights into Chinese discourse parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A procedure for quantitatively comparing the syntactic coverage of English grammars",
"authors": [
{
"first": "E",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickenger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gdaniec",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ingria",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klavans",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1991,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grish- man, R., Harrison, P., Hindle, D., Ingria, R., Jelinek, F., Klavans, J., Liberman, M., Marcus, M., Roukos, S., San- torini, B., and Strzalkowski, T. (1991). A procedure for quantitatively comparing the syntactic coverage of En- glish grammars. In Speech and Natural Language: Pro- ceedings of a Workshop Held at Pacific Grove, Califor- nia, February 19-22, 1991.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fast rhetorical structure theory discourse parsing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heilman, M. and Sagae, K. (2015). Fast rhetorical struc- ture theory discourse parsing. CoRR, abs/1505.02425.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Representation learning for text-level discourse parsing",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, Y. and Eisenstein, J. (2014). Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 13- 24, Baltimore, Maryland, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining intra-and multi-sentential rhetorical parsing for document-level discourse analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joty, S., Carenini, G., Ng, R., and Mehdad, Y. (2013). Combining intra-and multi-sentential rhetorical parsing for document-level discourse analysis. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 486-496, Sofia, Bulgaria, August. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A cdt-styled end-to-end chinese discourse parser",
"authors": [
{
"first": "F",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Trans. Asian Low-Resour",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kong, F. and Zhou, G. (2017). A cdt-styled end-to-end chi- nese discourse parser. ACM Trans. Asian Low-Resour.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recursive deep models for discourse parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2061--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, J., Li, R., and Hovy, E. (2014a). Recursive deep models for discourse parsing. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 2061-2069, Doha, Qatar, Oc- tober. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Building chinese discourse corpus with connectivedriven dependency tree structure",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP'14)",
"volume": "",
"issue": "",
"pages": "2105--2114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Y., Feng, W., Sun, J., Kong, F., and Zhou, G. (2014b). Building chinese discourse corpus with connective- driven dependency tree structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP'14), pages 2105-2114, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Discourse parsing with attention-based hierarchical neural networks",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "362--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Q., Li, T., and Chang, B. (2016). Discourse pars- ing with attention-based hierarchical neural networks. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 362- 371, Austin, Texas, November. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A unified RvNN framework for end-to-end Chinese discourse parsing",
"authors": [
{
"first": "C.-A",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H.-H",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Z.-Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "H.-H",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "73--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C.-A., Huang, H.-H., Chen, Z.-Y., and Chen, H.-H. (2018). A unified RvNN framework for end-to-end Chi- nese discourse parsing. In Proceedings of the 27th Inter- national Conference on Computational Linguistics: Sys- tem Demonstrations, pages 73-77, Santa Fe, New Mex- ico, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rhetorical structure theory: Toward a functional theory of text organization",
"authors": [
{
"first": "W",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text-Interdisciplinary Journal for the Study of Discourse",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mann, W. C. and Thompson, S. A. (1988). Rhetorical structure theory: Toward a functional theory of text or- ganization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "How much progress have we made on rst discourse parsing? a replication study of recent results on the rst-dt",
"authors": [
{
"first": "M",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morey, M., Muller, P., and Asher, N. (2017). How much progress have we made on rst discourse parsing? a repli- cation study of recent results on the rst-dt. In EMNLP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A transition-based framework for chinese discourse structure parsing",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Chinese Imformation Processing",
"volume": "32",
"issue": "12",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sun, C. and Kong, F. (2018). A transition-based frame- work for chinese discourse structure parsing. Journal of Chinese Imformation Processing, 32(12):48.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtow- icz, M., and Brew, J. (2019). Huggingface's transform- ers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transition-based neural RST parsing with implicit syntax features",
"authors": [
{
"first": "N",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "559--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, N., Zhang, M., and Fu, G. (2018). Transition-based neural RST parsing with implicit syntax features. In Pro- ceedings of the 27th International Conference on Com- putational Linguistics, pages 559-570, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "PDTB-style discourse annotation of Chinese text",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "69--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, Y. and Xue, N. (2012). PDTB-style discourse anno- tation of Chinese text. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 69-77, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "L",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Okurovsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlson, L., Marcu, D., and Okurovsky, M. E. (2001). Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Building chinese discourse corpus with connectivedriven dependency tree structure",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP'14)",
"volume": "",
"issue": "",
"pages": "2105--2114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Y., Feng, W., Sun, J., Kong, F., and Zhou, G. (2014). Building chinese discourse corpus with connective- driven dependency tree structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP'14), pages 2105-2114, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "An example of Chinese discourse parsing tree from the Chinese Discourse Treebank(Li et al., 2014b)"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "left-heavy and right-heavy binarization."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The framework of our model."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of predicted relations before debinarization. Distribution of predicted relations after debinarization."
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Micro F1 scores of predicted relations after debinarization."
},
"TABREF1": {
"content": "<table/>",
"text": "and Lin et al. (2018) adopt micro F1 score, multiway gold parsing tree, and left-heavy binarization. In contrast,Sun and Kong (2018) use macro F1, binary gold tree, and right-heavy binarization.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td/><td colspan=\"2\">EDU Span</td><td>+S</td><td>+C</td><td>Join</td></tr><tr><td>Zhou</td><td/><td colspan=\"3\">52.3 33.8 23.9 23.2</td></tr><tr><td>Lin</td><td>gold</td><td colspan=\"3\">64.6 42.7 38.5 35.0</td></tr><tr><td>Ours-L</td><td/><td colspan=\"3\">76.5 50.8 48.5 43.1</td></tr><tr><td>Zhou</td><td>93.8</td><td colspan=\"3\">46.4 28.8 23.1 22.0</td></tr><tr><td>Lin</td><td>87.2</td><td colspan=\"3\">49.5 32.6 28.8 26.8</td></tr><tr><td colspan=\"2\">Ours-L 92.4</td><td colspan=\"3\">68.9 43.3 42.0 37.0</td></tr><tr><td colspan=\"5\">Table 1: Performance with micro F1 score, left-heavy bina-</td></tr><tr><td colspan=\"5\">rization, and multiway ground truth. The upper rows show</td></tr><tr><td colspan=\"4\">the results where gold EDUs are given. Model EDU Span +S</td><td>+C</td><td>Join</td></tr><tr><td>Sun Ours-R</td><td>gold</td><td colspan=\"3\">84.0 87.2 61.4 60.1 55.0 53.9</td></tr><tr><td>Sun</td><td>93.0</td><td>78.2</td><td/><td>53.2</td></tr><tr><td colspan=\"2\">Ours-R 93.5</td><td colspan=\"3\">81.3 56.9 54.6 50.0</td></tr></table>",
"text": "Span is the F1 score of structure prediction. +S is the F1 score of both structure and relation senses are predicted correctly. +C measures the F1 score that both structure and relation centers are predicted correctly. Join corresponds to predictions that are correct for both structure, senses, and centers.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td colspan=\"6\">: Performance with macro F1 score, right-heavy bi-</td></tr><tr><td colspan=\"4\">narization, and binarized ground truth.</td><td/></tr><tr><td>Model</td><td colspan=\"3\">mi/ma EDU Span</td><td>+S</td><td>+C</td><td>Join</td></tr><tr><td>Ours-L</td><td>micro macro</td><td>gold</td><td colspan=\"3\">76.5 50.8 48.5 43.1 83.9 54.8 52.4 45.5</td></tr><tr><td>Ours-R</td><td>micro macro</td><td>gold</td><td colspan=\"3\">75.1 51.5 49.5 44.8 82.8 57.3 56.0 50.7</td></tr><tr><td>Ours-L</td><td>micro macro</td><td>92.4</td><td colspan=\"3\">68.9 43.3 42.0 37.0 76.6 48.3 46.4 40.6</td></tr><tr><td>Ours-R</td><td>micro macro</td><td>93.5</td><td colspan=\"3\">69.3 45.9 43.1 38.6 77.8 53.3 50.9 46.1</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"text": "Comparison between models trained with leftheavy and right-heavy binarization policies.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}