| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T11:51:51.603716Z" |
| }, |
| "title": "Joint Learning of Syntactic Features helps Discourse Segmentation", |
| "authors": [ |
| { |
| "first": "Takshak", |
| "middle": [], |
| "last": "Desai", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Texas at Dallas", |
| "location": {} |
| }, |
| "email": "takshak.desai@utdallas.edu" |
| }, |
| { |
| "first": "Parag", |
| "middle": [], |
| "last": "Dakle", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Texas at Dallas", |
| "location": {} |
| }, |
| "email": "paragpravin.dakle@utdallas.edu" |
| }, |
| { |
| "first": "Dan", |
| "middle": [ |
| "I" |
| ], |
| "last": "Moldovan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Texas at Dallas", |
| "location": {} |
| }, |
| "email": "moldovan@utdallas.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper describes an accurate framework for carrying out multilingual discourse segmentation with BERT (Devlin et al., 2019). The model is trained to identify segments by casting the problem as a token classification problem and jointly learning syntactic features like part-of-speech tags and dependency relations. This leads to significant improvements in performance. Experiments are performed in different languages, such as English, Dutch, German, Portuguese Brazilian and Basque to highlight the cross-lingual effectiveness of the segmenter. In particular, the model achieves a state-of-the-art F-score of 96.7 for the RST-DT corpus (Carlson et al., 2003) improving on the previous best model by 7.2%. Additionally, a qualitative explanation is provided for how proposed changes contribute to model performance by analyzing errors made on the test data.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper describes an accurate framework for carrying out multilingual discourse segmentation with BERT (Devlin et al., 2019). The model is trained to identify segments by casting the problem as a token classification problem and jointly learning syntactic features like part-of-speech tags and dependency relations. This leads to significant improvements in performance. Experiments are performed in different languages, such as English, Dutch, German, Portuguese Brazilian and Basque to highlight the cross-lingual effectiveness of the segmenter. In particular, the model achieves a state-of-the-art F-score of 96.7 for the RST-DT corpus (Carlson et al., 2003) improving on the previous best model by 7.2%. Additionally, a qualitative explanation is provided for how proposed changes contribute to model performance by analyzing errors made on the test data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Discourse Segmentation refers to the task of fragmenting a document into minimal disjoint chunks of text called Elementary Discourse Units (EDUs). In the context of Rhetorical Structure Theory (Mann and Thompson, 1988) or RST, EDUs form the nodes of a discourse tree; while relations between EDUs form arcs or edges between nodes. As a motivating example, consider the discourse tree given in Figure 1 . EDUs labeled 1, 2 and 3 form nodes of the tree; and arcs are labeled with ATTRIBUTION (used to indicate instances of reported speech) and PURPOSE relations. This example was taken from the RST-DT corpus (Carlson et al., 2003) and the tree was constructed using the tool provided by Gessler et al. (2019) . Discourse segmentation is considered a challenging problem for several reasons. First, the boundary between syntax and discourse is blurry (Carlson and Marcu, 2001) . While the clause is considered a basic EDU, segment boundaries are often determined using lexical and syntactic clues. Additionally, the amount of annotated data available is min-imal and this makes training of data-hungry models like neural networks difficult. To assuage the problem of data insufficiency, syntax-free models that leverage pre-trained representations (Wang et al., 2018; Muller et al., 2019) from sentence encoders like ElMo and BERT were proposed: these models achieved very good results for the segmentation task. Likewise, the use of syntactic features such as part-of-speech tags and parse tree features helped achieve better results (Braud et al., 2017; Lin et al., 2019) . In fact, the latter achieved state-of-the-art results using pointer networks (Vinyals et al., 2015) with parse tree features. In this paper, we propose a few changes that leverage BERT's (Devlin et al., 2019) structure in performing segmentation. Our main contributions are:", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 218, |
| "text": "(Mann and Thompson, 1988)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 607, |
| "end": 629, |
| "text": "(Carlson et al., 2003)", |
| "ref_id": null |
| }, |
| { |
| "start": 686, |
| "end": 707, |
| "text": "Gessler et al. (2019)", |
| "ref_id": null |
| }, |
| { |
| "start": 849, |
| "end": 874, |
| "text": "(Carlson and Marcu, 2001)", |
| "ref_id": null |
| }, |
| { |
| "start": 1246, |
| "end": 1265, |
| "text": "(Wang et al., 2018;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1266, |
| "end": 1286, |
| "text": "Muller et al., 2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1533, |
| "end": 1553, |
| "text": "(Braud et al., 2017;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1554, |
| "end": 1571, |
| "text": "Lin et al., 2019)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1651, |
| "end": 1673, |
| "text": "(Vinyals et al., 2015)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1754, |
| "end": 1782, |
| "text": "BERT's (Devlin et al., 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 393, |
| "end": 401, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "1. We cast discourse segmentation as a token classification problem, as opposed to sequence tagging. This allows BERT to attend to one token at a time and not the entire sequence, thereby making better decisions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "2. We suggest a simple multi-task learning approach that uses the intermediate layers of BERT to carry out part-of-speech tag prediction, and dependency relation classification. This improves model performance, particularly for languages other than English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "3. Experiments are performed for different languages to demonstrate the cross-lingual effectiveness of our framework. We use multilingual BERT (abbreviated as bert-multilingual-base 1 ) as our sentence encoder.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "4. We also provide a qualitative explanation for what worked in our favour via error analysis. This provides deeper insights into the model's behaviour and explains why we achieved better results 2 . Figure 2 provides a logical view of our model and its components. Subsequent sub-sections describe the model and related modifications/tasks.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 200, |
| "end": 208, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "At the heart of our model lies BERT, a powerful language model (Devlin et al., 2019) that provides universal sentence representations. Using BERT offers two advantages. First, it can effectively capture syntactic, semantic and positional dependencies between tokens and/or sub-tokens in a sentence; leaving little to no room for feature engineering. Second, it has been trained on a very large corpus: this allows us to fine-tune pre-trained BERT models on downstream tasks, especially when training data is minimal. This allows us to get away with the data insufficiency problem since the size of training corpora is small. To work with languages other than English, Devlin et al. (2019) released multilingual BERT: a single language model pre-trained from multi-lingual corpora in 104 languages; using a shared multi-lingual vocabulary. Despite being a shared language model, which could raise concerns about its cross-lingual effectiveness, multilingual BERT has given surprisingly good results for different language tasks (Pires et al., 2019; Wu and Dredze, 2019) .", |
| "cite_spans": [ |
| { |
| "start": 1027, |
| "end": 1047, |
| "text": "(Pires et al., 2019;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1048, |
| "end": 1068, |
| "text": "Wu and Dredze, 2019)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multilingual BERT encoder", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "Deep learning frameworks (Braud et al., 2017; Wang et al., 2018) Figure 3 : An example showing the features captured for each token in the sentence: we consider the gold part-of-speech tags and dependency relations with respect to the masked token.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 45, |
| "text": "(Braud et al., 2017;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 46, |
| "end": 64, |
| "text": "Wang et al., 2018)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 73, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "cates the continuation of a previous span. As opposed to casting it as a sequence tagging problem, we cast it as a token classification problem. Specifically, given a sequence of tokens T = t 0 , t 1 . . . t n , for each token t i , we present the input to BERT as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "[CLS] t 1 t 2 . . . [MASK] . . . t n [SEP] t i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "where the token t i is replaced by the [MASK] token. Here, [CLS] , [SEP] and [MASK] are special tokens used by BERT. Note that the token t i is masked so as to prevent model overfitting.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 64, |
| "text": "[CLS]", |
| "ref_id": null |
| }, |
| { |
| "start": 67, |
| "end": 72, |
| "text": "[SEP]", |
| "ref_id": null |
| }, |
| { |
| "start": 77, |
| "end": 83, |
| "text": "[MASK]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "A similar idea was applied to sentence boundary detection (Schweter and Ahmed, 2019) where a window of k characters is defined for markers such as periods, question marks, etc., and a deep network was trained to identify if a marker demarcates a sentence boundary or not. In this case, markers are well-defined (i.e. a sentence must end with a period, question mark, exclamation point or quotation marks). Unfortunately, for discourse segmentation, such markers are not well-defined which required us to check all tokens in the sentence for EDU boundaries. We conjecture that casting the problem this way allows BERT to make better decisions as it attends to each token and not the full sequence. This is particularly useful for discourse segmentation as Wang et al. (2018) observed that segmentation is a local problem and demarcating a segment requires only on a small window of neighbouring tokens. Trying to tag the full sequence may introduce unnecessary noise and lead to errors. Additionally, we define a positional vector p (Zhang et al., 2017) for T = t 1 , t 2 ...t j relative to the masked token t i as:", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 84, |
| "text": "(Schweter and Ahmed, 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 755, |
| "end": 773, |
| "text": "Wang et al. (2018)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1032, |
| "end": 1052, |
| "text": "(Zhang et al., 2017)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "p j = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 i \u2212 j if j <i 0 if j = i j \u2212 i if j >i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "(1) Following Shi and Lin (2019), we embed the positional vector and concatenate it to the encoder representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "Syntactic features are very helpful in learning language tasks. Strubell et al. (2018) particularly observed that jointly learning syntactic features improved the performance of semantic role labeling systems. We extend the idea to discourse segmentation, by jointly predicting the following features for the sentence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint Learning of Syntactic Features", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "1. Part-of-speech tags of all tokens in the sentence 2. Dependency parent, child(ren) and sibling(s) of the masked token 3. Dependency relation of parent token with respect to the masked token An example is provided in Figure 3 . We extract partof-speech tags of all tokens; and the dependency parent (and corresponding relation), child(ren) and siblings of the masked token. For example, if the token 'spend' is masked, we train the classifier to learn CCOMP relation between 'said' and 'spend'; CHILD relations with respect to the tokens 'it', 'will', '$', and 'advertising'; SIBLING relation with respect to the token 'Pepsi' and NOREL with respect to every other token. As shown in Figure 2 , the first p layers of BERT learn the part-of-speech tags of the words under consideration. This layer passes information to the upper layers: d layers learn dependency relations of other words with respect to the token. The final layer (n = 12 in case of BERT) provides the final hidden representation of the sentence, that is fed to the decoder for classification. Joint training offers several advantages. First, multi-task learning helps improve model performance as features learned during training for syntactic features aids segmentation. Second, features are only required during training, and not during testing or tagging. This gives us an added third advantages: we can make use of gold POS tags and parse-trees. This is particularly advantageous because, as Braud et al. (2017) observed, system-generated parse-trees adversely affect segmentation as opposed to gold trees that improved performance significantly.", |
| "cite_spans": [ |
| { |
| "start": 1466, |
| "end": 1485, |
| "text": "Braud et al. (2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 227, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 686, |
| "end": 694, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Joint Learning of Syntactic Features", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "Our decoder design is fairly simple. We place a fully connected layer on top of BERT that accepts the final hidden representation of the sentence and predicts whether the token represents the beginning of a new segment or not. We use the softmax activation function to convert the linear Table 2 : Empirical results and ablation study: As is evident from the obtained results, casting the problem as a token classification (Token) problem led to significant improvement. Likewise, training for POS tags (Post), dependency relations (Depend) or both (Both) improved model performance across all languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 288, |
| "end": 295, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decoder", |
| "sec_num": "2.4." |
| }, |
| { |
| "text": "layer's output to a probability distribution (over two classes i.e. B and I).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoder", |
| "sec_num": "2.4." |
| }, |
| { |
| "text": "To test the performance of our model, we experimented with datasets in 5 different languages:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "1. The RST-DT corpus (Carlson et al., 2003) ", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 43, |
| "text": "(Carlson et al., 2003)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "We implemented our tool in PyTorch 3 and used the API provided by researchers at HuggingFace 4 to fine-tune pretrained BERT. All experiments were performed using 8 NVIDIA-GTX 1080 Ti GPUs in parallel. For training, we constructed batches of size 16. We used cross-entropy for calculating network loss and the Adam optimizer (Kingma and Ba, 2015) for updating network weights. The learning rate is set to 3e \u2212 5. To tune the hyper-parameters, we held-out a portion of the training data as validation set (see Table 1 ): all hyper-parameters were tuned on this validation set. We experimented for different values of p, d \u2208 {9, 10, 11} but we did not observe any significant difference in the F-scores (\u00b10.15). We report the best results for each dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 508, |
| "end": 515, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "We report our empirical results for discourse segmentation in Table 2 . As a baseline, we cast segmentation as a BI tagging problem, following the guidelines provided by Devlin et al. (2019) . One can see that by casting it instead as token classification (Token), the F-score improved significantly. In fact, simply casting the problem as token classification got us very close to the state-of-the-art for the English RST-DT corpus (Muller et al., 2019) . Likewise, training for part-of-speech tags (Post) and dependency relations (Depend) also improved the F-score for each language. The best improvements were observed for the German and Dutch datasets; with F-scores improving by 4.47 and 2.87 points respectively. It can also be observed that training either for POS tags or for dependency relations improves the F-score more significantly as compared to training for both (Both). A likely explanation for this is that training for both leads to poor generalization thereby leading to comparatively poor improvements. To assuage the problem, we increased the number of training iterations, but this led to severe overfitting. In general, ensembling (we did not consider Baseline and Both) helped and gave us better scores when compared to these models in isolation.", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 190, |
| "text": "Devlin et al. (2019)", |
| "ref_id": null |
| }, |
| { |
| "start": 433, |
| "end": 454, |
| "text": "(Muller et al., 2019)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 62, |
| "end": 69, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Empirical Results", |
| "sec_num": "3.3." |
| }, |
| { |
| "text": "Empirical results summarized in Table 2 show that performing token classification and not sequence tagging improves model performance. We saw an absolute improvement of 6.46 in the F-score showing that change in formulation alone helped achieve impressive results. To understand the impact of casting the problem in an alternate fashion and of the syntactic features considered, we analyze errors made by the model on the eng.rst.rstdt dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 39, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of the English dataset", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We first compare the results obtained for the eng.rst.rstdt corpus with previous work done. To ensure fair comparison, we report results for discourse segmentation at the sentence-level and not at the document-level (Braud et al., 2017) . Table 3 reports the performance of our model and other competing systems.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 236, |
| "text": "(Braud et al., 2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 239, |
| "end": 246, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with Existing Models", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "Model P R F Soricut and Marcu (2003) 84.1 85.4 84.7 Subba and Di Eugenio (2007) Table 3 : Performance of our model and other systems on the RST-DT dataset. Results are reported assuming parse trees are extracted using the BLLIP parser (as used by authors in the paper) SPADE is a probabilistic system developed by Soricut and Marcu (2003) that makes use of lexical and syntactic information to predict segment boundaries. Subba and Di Eugenio (2007) designed a simple neural network framework that makes use of similar features to carry out text segmentation. Hernault et al. (2010) and Bach et al. (2012) were the first to cast discourse segmentation as a sequence classification problem and use biLSTM-CRFs with parse tree features. Feng and Hirst (2014) performed segmentation in two passes: the second pass uses global features extracted from the results of the first pass to segment sentences. Wang et al. (2018) used ElMo embeddings with restricted self-attention: a mechanism that computes attention score for each token with respect to a small context and not the full sequence. Lin et al. (2019) used pointer networks with parse tree information to perform join discourse segmentation and relation classification. Muller et al. (2019) used BERT contextual embeddings with convolutional character embeddings as input to a biLSTM architecture to obtain accurate segments.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 36, |
| "text": "Soricut and Marcu (2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 52, |
| "end": 79, |
| "text": "Subba and Di Eugenio (2007)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 314, |
| "end": 338, |
| "text": "Soricut and Marcu (2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 422, |
| "end": 449, |
| "text": "Subba and Di Eugenio (2007)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 560, |
| "end": 582, |
| "text": "Hernault et al. (2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 587, |
| "end": 605, |
| "text": "Bach et al. (2012)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 899, |
| "end": 917, |
| "text": "Wang et al. (2018)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1087, |
| "end": 1104, |
| "text": "Lin et al. (2019)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1223, |
| "end": 1243, |
| "text": "Muller et al. (2019)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 80, |
| "end": 87, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with Existing Models", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "As is evident from Table 3 , we were able to achieve stateof-the-art results on the RST-DT corpus, beating the previous state-of-the-art model by an absolute 0.7 points and by a relative 7.2 points. It can also be observed that many of these systems were high-recall systems i.e. they end up predicting more EDUs than necessary. Our system achieved a higher precision than all, again beating the state-of-the-art by an absolute 1.0 points (relative 10 points).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 26, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with Existing Models", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "We compare the proportion of errors made by the models with respect to the length of the sentence i.e. the number of tokens in the sentence. For the purpose of evaluation, we consider a sentence to be incorrectly segmented regardless of whether the type of error (Bach et al., 2012) is over (i.e. a sentence is segmented when it should not be) or miss (a sentence is not segmented when it should be). In Figure 4 , we provide a graph showing the proportion of errors made by the models with respect to the sentence length. The proportion of errors is calculated as the ratio of sentences that were incorrectly tagged to the total number of sentences, grouped by the sentence length. As suspected, the baseline model performs poorly when the sentences are longer. However, formulating the problem in an alternate fashion and injecting syntax make the model perform much better. This effect is more discernible for sentence lengths between 30 and 45, indicating that the baseline model could not segment a large fraction of longer sentences correctly.", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 282, |
| "text": "(Bach et al., 2012)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 404, |
| "end": 412, |
| "text": "Figure 4", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sentence length v/s Number of errors", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "In Table 4 , we list the 8 most frequent tokens that were incorrectly segmented; and the total number of errors made by each model. As one can infer from Table 4 , all models give fewer errors than the baseline model. The reduction in total number of errors is more than 62% highlight the efficacy of our models. Additionally, training for syntax further reduced these errors by more than 5%. Baseline Token Post Depend Both and 40 21 15 20 16 that 32 11 8 6 7 to 31 22 18 21 16 the 17 9 9 8 7 as 14 4 2 3 1 in 10 3 4 3 2 for 10 7 7 6 6 if 10 3 4 3 3 Total 608 231 187 201 200 Table 4 : Table showing the number of errors made by all models when tagging the 8 most frequent tokens.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 154, |
| "end": 161, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 393, |
| "end": 640, |
| "text": "Baseline Token Post Depend Both and 40 21 15 20 16 that 32 11 8 6 7 to 31 22 18 21 16 the 17 9 9 8 7 as 14 4 2 3 1 in 10 3 4 3 2 for 10 7 7 6 6 if 10 3 4 3 3 Total 608 231 187 201 200 Table 4", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 643, |
| "end": 656, |
| "text": "Table showing", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Frequent Error Patterns", |
| "sec_num": "4.3." |
| }, |
| { |
| "text": "On mapping these errors to rules (Carlson and Marcu, 2001) , we identified that the following were most frequently violated:", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 58, |
| "text": "(Carlson and Marcu, 2001)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token Absolute number of Errors", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Confusion between infinitival complements and infinitival clauses: Infinitival components of verbs are never fragmented into separate EDUs, whereas infinitival clauses are segmented only if that clause functions as the satellite of a PURPOSE relation. The model often confuses infinitival complements for infinitival clauses, which leads to tagging errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token Absolute number of Errors", |
| "sec_num": null |
| }, |
| { |
| "text": "2. Coordination in Sentences and Clauses: Coordinated sentences and clauses are broken into separate EDUs, while coordinated verb phrases are not. Additionally, when coordination occurs in subordinate clauses, segmentation depends on whether or not the subordinate construction would normally be segmented as an EDU if it were a single clause, rather than a number of coordinated clauses. Our model made some errors in identifying such patterns.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token Absolute number of Errors", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Confusion among correlative subordinators: Correlative subordinators consist of a combination of two markers, one in the subordinate clause and the other in the superordinate clause. Examples include 'as ... long as', 'either ... or', etc. These should be broken into separate EDUs, provided the subordinate clause contains a verb. There was some confusion in correctly identifying such constructs and thus performing accurate segmentation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Token Absolute number of Errors", |
| "sec_num": null |
| }, |
| { |
| "text": "Punctuation symbols often indicates segment boundaries. However, there may be cases where EDUs are not segmented. For instance, parenthetical expressions are usually segmented as EDUs, but if the expression is used to indicate missing information, segmentation must not be carried out. Likewise, phrases separated by semi-colons and commas are not EDUs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Punctuation:", |
| "sec_num": "4." |
| }, |
| { |
| "text": "While modeling segmentation as token classification helped remove these errors, injecting syntax helped remove these errors further. In particular, we observed that jointly training for part-of-speech tags helped remove punctuation errors and resolve confusions between infinitival complements and clauses. Likewise, jointly training for dependency relations helped remove errors related to coordination and correlative subordinators. Concrete examples of each error type are provided in Table 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 488, |
| "end": 495, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Punctuation:", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The results obtained and the analysis performed show how injecting syntax into the model helped achieve better results. In particular, the joint learning of syntactic features allowed the model to uncover complex syntactic patterns that could not be captured by simply fine-tuning BERT. Further, the use of syntactic features helped achieve solid gains for languages such as German and Dutch, highlighting both the importance of syntax and certain limitations of multilingual BERT. Several complex cases of discourse segmentation could be effectively captured by our model. We believe that having knowledge of sentence-level semantics (Moldovan and Blanco, 2012) may help identify such nuanced patterns even better. This was empirically demonstrated by Lin et al. (2019), who jointly carried out discourse segmentation and coherence relation classification and observed an incremental improvement in model performance. A potential drawback of our system is that the time taken to tag a full document is quite large: the model performs segmentation in O(n) time while other models take O(S) time, n being the number of tokens in the document and S << n being the number of sentences in the document. However, with a sufficiently large batch size and the availability of multiple GPUs, this bottleneck can be resolved in practice by segmenting sentences in parallel. Explanation: Infinitival complements of verbs are not segmented as separate EDUs. However, both Baseline and Token mistake the infinitival clause in the sentence for an infinitival complement and leave the sentence as a single EDU. Training for dependency relations allowed the model to identify this correctly as an infinitival clause and perform the correct segmentation. Explanation: The baseline makes an error because it cannot identify that the sentence contains a superordinate and a subordinate clause and must therefore be segmented. Training for both part-of-speech tags and dependency relations allowed the model to correctly identify these as two different clauses and segment them. Explanation: The baseline model fails to identify the comparative 'enough ... to' as a correlative and does not segment the sentence. However, training for syntactic features allowed the model to correctly identify this construct and hence perform the correct segmentation. Explanation: Both Baseline and Token incorrectly assume that the parenthetical expression expresses missing information and must not be segmented. However, by predicting the POS tag of CFD as NNP, the same as the POS tags of the words preceding it, learning POS tags allowed the model to correctly segment this sentence. Carlson, L., Marcu, D., and Okurowski, M. E. (2003).", |
| "cite_spans": [ |
| { |
| "start": 630, |
| "end": 657, |
| "text": "(Moldovan and Blanco, 2012)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 750, |
| "end": 767, |
| "text": "Lin et al. (2019)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 2680, |
| "end": 2731, |
| "text": "Carlson, L., Marcu, D., and Okurowski, M. E. (2003)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5." |
| }, |
| { |
| "text": "Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and new directions in discourse and dialogue, pages 85-112. Springer. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5." |
| }, |
| { |
| "text": "https://pytorch.org/ 4 https://github.com/huggingface/transformers", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their valuable comments and suggestions. We also thank the organizers of the shared task of DISRPT-2019 (Zeldes et al., 2019) for providing all data in a uniform format.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 162, |
| "text": "(Zeldes et al., 2019)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "6." |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A reranking model for discourse segmentation using subtree features", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [ |
| "X" |
| ], |
| "last": "Bach", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "L" |
| ], |
| "last": "Minh", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Shimazu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "160--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bach, N. X., Minh, N. L., and Shimazu, A. (2012). A reranking model for discourse segmentation using subtree features. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 160-168. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Does syntax help discourse segmentation? not so much", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Braud", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Lacroix", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2432--2442", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Braud, C., Lacroix, O., and S\u00f8gaard, A. (2017). Does syntax help discourse segmentation? not so much. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2432-2442.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Cstnews-a discourse-annotated corpus for single and multi-document summarization of news texts in brazilian portuguese", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "C" |
| ], |
| "last": "Cardoso", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [ |
| "G" |
| ], |
| "last": "Maziero", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "L" |
| ], |
| "last": "Jorge", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [ |
| "M" |
| ], |
| "last": "Seno", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Di Felippo", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "H" |
| ], |
| "last": "Rino", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "G" |
| ], |
| "last": "Nunes", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "A" |
| ], |
| "last": "Pardo", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 3rd RST Brazilian Meeting", |
| "volume": "", |
| "issue": "", |
| "pages": "88--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cardoso, P. C., Maziero, E. G., Jorge, M. L., Seno, E. M., Di Felippo, A., Rino, L. H., Nunes, M. G., and Pardo, T. A. (2011). Cstnews-a discourse-annotated corpus for single and multi-document summarization of news texts in brazilian portuguese. In Proceedings of the 3rd RST Brazilian Meeting, pages 88-105.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A unified linear-time framework for sentence-level discourse parsing", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Joty", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Jwalapuram", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "S" |
| ], |
| "last": "Bari", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4190--4200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, X., Joty, S., Jwalapuram, P., and Bari, M. S. (2019). A unified linear-time framework for sentence-level discourse parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4190-4200.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Rhetorical structure theory: Toward a functional theory of text organization", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Text-interdisciplinary Journal for the Study of Discourse", |
| "volume": "8", |
| "issue": "3", |
| "pages": "243--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mann, W. C. and Thompson, S. A. (1988). Rhetorical structure theory: Toward a functional theory of text organization. Text-interdisciplinary Journal for the Study of Discourse, 8(3):243-281.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Polaris: Lymba's semantic parser", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Moldovan", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Blanco", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "66--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moldovan, D. and Blanco, E. (2012). Polaris: Lymba's semantic parser. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 66-72.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Tony: Contextual embeddings for accurate multilingual discourse segmentation of full documents", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Muller", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Braud", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Morey", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019", |
| "volume": "", |
| "issue": "", |
| "pages": "115--124", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Muller, P., Braud, C., and Morey, M. (2019). Tony: Contextual embeddings for accurate multilingual discourse segmentation of full documents. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 115-124.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "How multilingual is multilingual bert?", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Pires", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Schlinger", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Garrette", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4996--5001", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pires, T., Schlinger, E., and Garrette, D. (2019). How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Multi-layer discourse annotation of a dutch text corpus", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Redeker", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Berzl\u00e1novich", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Van Der Vliet", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Bouma", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Egg", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Redeker, G., Berzl\u00e1novich, I., van der Vliet, N., Bouma, G., and Egg, M. (2012). Multi-layer discourse annotation of a Dutch text corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Deep-EOS: General-Purpose Neural Networks for Sentence Boundary Detection", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Schweter", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ahmed", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 15th Conference on Natural Language Processing (KONVENS)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schweter, S. and Ahmed, S. (2019). Deep-EOS: General-Purpose Neural Networks for Sentence Boundary Detection. In Proceedings of the 15th Conference on Natural Language Processing (KONVENS).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Simple bert models for relation extraction and semantic role labeling", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1904.05255" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shi, P. and Lin, J. (2019). Simple BERT models for relation extraction and semantic role labeling. arXiv preprint arXiv:1904.05255.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Sentence level discourse parsing using syntactic and lexical information", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Soricut", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
| "volume": "1", |
| "issue": "", |
| "pages": "149--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soricut, R. and Marcu, D. (2003). Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL '03, pages 149-156, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Potsdam commentary corpus 2.0: Annotation for discourse research", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Stede", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "925--929", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stede, M. and Neumann, A. (2014). Potsdam commentary corpus 2.0: Annotation for discourse research. In LREC, pages 925-929.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Linguistically-informed self-attention for semantic role labeling", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Strubell", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Verga", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Andor", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "5027--5038", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Strubell, E., Verga, P., Andor, D., Weiss, D., and McCallum, A. (2018). Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027-5038.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Automatic discourse segmentation using neural networks", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Subba", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Di Eugenio", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the 11th Workshop on the Semantics and Pragmatics of Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "189--190", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Subba, R. and Di Eugenio, B. (2007). Automatic discourse segmentation using neural networks. In Proc. of the 11th Workshop on the Semantics and Pragmatics of Dialogue, pages 189-190.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Pointer networks", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2692--2700", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vinyals, O., Fortunato, M., and Jaitly, N. (2015). Pointer networks. In Advances in Neural Information Processing Systems, pages 2692-2700.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Toward fast and accurate neural discourse segmentation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "962--967", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang, Y., Li, S., and Yang, J. (2018). Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962-967.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of bert", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "833--844", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, S. and Dredze, M. (2019). Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Introduction to discourse relation parsing and treebanking (DISRPT): 7th workshop on rhetorical structure theory and related formalisms", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Zeldes", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [ |
| "G" |
| ], |
| "last": "Maziero", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Antonio", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Iruskieta", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeldes, A., Das, D., Maziero, E. G., Antonio, J., and Iruskieta, M. (2019). Introduction to discourse relation parsing and treebanking (DISRPT): 7th workshop on rhetorical structure theory and related formalisms. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 1-6, Minneapolis, MN, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Position-aware attention and supervised data improve slot filling", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "35--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, Y., Zhong, V., Chen, D., Angeli, G., and Manning, C. D. (2017). Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35-45.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Discourse tree for a portion of text in wsj 2336: EDUs form the nodes of the tree, and arcs represents relations.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "Model Architecture: The model attempts to classify if the masked word 'that' represents the beginning of a new segment or not. The intermediate layers carry out feature prediction i.e. the POS tags and dependency relations of all tokens with respect to masked token in the sentence.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "Graphs showing the proportion of errors made versus sentence length.", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "Segments: [The government directly owns 51.4%] [and Factorex, a financial services company, holds 8.42%]", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "text": "Segments: [A private market like this just isn't big enough] [to absorb all the business.]", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "text": "Segments: [On the Big Board, Crawford & Co., Atlanta,] [(CFD)] [begins trading today]", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "text": "2019). Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. Feng, V. W. and Hirst, G. (2014). Two-pass discourse segmentation with pairing and global features. arXiv preprint arXiv:1407.8215. Gessler, L., Liu, Y., and Zeldes, A. (2019). A discourse signal annotation system for rst trees. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 56-61, Minneapolis, MN, June. Association for Computational Linguistics. Hernault, H., Bollegala, D., and Ishizuka, M. (2010). A sequential model for discourse segmentation. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 315-326. Springer. Iruskieta, M., Aranzabe, M. J., de Ilarraza, A. D., Gonzalez, I., Lersundi, M., and de Lacalle, O. L. (2013). The rst basque treebank: an online search interface to check rhetorical relations. In 4th workshop RST and discourse studies, pages 40-49. Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Yoshua Bengio et al., editors,", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"2\">Dataset</td><td colspan=\"13\">Training # Docs # Sents # EDUs # Docs # Sents # EDUs # Docs # Sents # EDUs Validation Test</td></tr><tr><td colspan=\"3\">eng.rst.rstdt</td><td>309</td><td/><td>6,672</td><td>17,646</td><td/><td>38</td><td>717</td><td colspan=\"2\">1,797</td><td>38</td><td>929</td><td/><td>2,346</td></tr><tr><td colspan=\"3\">deu.rst.pcc</td><td>142</td><td/><td>1,773</td><td>1,788</td><td/><td>17</td><td>207</td><td>275</td><td/><td>17</td><td>213</td><td/><td>294</td></tr><tr><td colspan=\"3\">nld.rst.nldt</td><td>56</td><td/><td>1,202</td><td>1,350</td><td/><td>12</td><td>257</td><td>347</td><td/><td>12</td><td>248</td><td/><td>344</td></tr><tr><td colspan=\"3\">por.rst.cstn</td><td>110</td><td/><td>1,595</td><td>1,772</td><td/><td>14</td><td>232</td><td>552</td><td/><td>12</td><td>123</td><td/><td>265</td></tr><tr><td colspan=\"3\">eus.rst.rstdt</td><td>84</td><td/><td>990</td><td>1,517</td><td/><td>28</td><td>350</td><td>604</td><td/><td>28</td><td>320</td><td/><td>593</td></tr><tr><td>Model</td><td>P</td><td colspan=\"2\">eng.rst.rstdt R</td><td>F</td><td>P</td><td>deu.rst.pcc R</td><td>F</td><td>P</td><td>nld.rst.rstdt R</td><td>F</td><td>P</td><td>por.rst.cstn R</td><td>F</td><td>P</td><td>eus.rst.rstdt R</td><td>F</td></tr><tr><td colspan=\"2\">Baseline 86.86</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "text": "Description of how the data is distributed in each dataset. Notice that the amount of training data available for languages like Dutch and Basque is too small. 90.41 88.60 84.91 91.84 88.24 84.87 88.08 86.44 84.67 83.40 84.03 75.85 77.46 75.66 Token 95.45 94.67 95.06 94.76 86.05 90.20 97.69 86.05 91.49 92.88 88.68 90.73 87.25 80.78 83.89 Post 96.17 96.21 96.19 93.34 95.58 94.45 93.86 93.31 93.59 87.72 94.34 90.91 84.63 84.49 84.56 Depend 94.41 97.19 95.78 92.74 95.58 94.14 95.73 91.28 93.45 90.98 91.32 91.15 88.87 82.13 85.36 Both 95.24 95.48 95.36 93.81 92.86 93.33 93.02 93.02 93.02 91.60 90.57 91.08 86.02 81.96 83.94 Ensemble 96.32 97.02 96.67 92.81 96.60 94.67 94.72 93.90 94.31 89.53 93.59 91.51 85.59 85.16 85.38" |
| }, |
| "TABREF5": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Segments: [With 700 branches in Spain and 12 banking subsidiaries, five branches and 12 representativeoffices abroad, the Banco Exterior group has a lot] [to offer to a potential suitor.]" |
| }, |
| "TABREF6": { |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "We highlight some of the common errors made by the model and how learning syntactic features helped eliminate such errors. Carlson, L. and Marcu, D. (2001). Discourse tagging reference manual. ISI Technical Report ISI-TR-545, 54:56." |
| } |
| } |
| } |
| } |