| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:01:09.931598Z" |
| }, |
| "title": "Combining Thai EDUs: Principle and Implementation", |
| "authors": [ |
| { |
| "first": "Chanatip", |
| "middle": [], |
| "last": "Saetia", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Kasikorn Labs Kasikorn Business Technology Group (KBTG) Nontaburi", |
| "institution": "", |
| "location": { |
| "country": "Thailand" |
| } |
| }, |
| "email": "chanatip.sae@kbtg.tech" |
| }, |
| { |
| "first": "Supawat", |
| "middle": [], |
| "last": "Taerungruang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Kasikorn Labs Kasikorn Business Technology Group (KBTG) Nontaburi", |
| "institution": "", |
| "location": { |
| "country": "Thailand" |
| } |
| }, |
| "email": "supawat.t@kbtg.tech" |
| }, |
| { |
| "first": "Tawunrat", |
| "middle": [], |
| "last": "Chalothorn", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Kasikorn Labs Kasikorn Business Technology Group (KBTG) Nontaburi", |
| "institution": "", |
| "location": { |
| "country": "Thailand" |
| } |
| }, |
| "email": "tawunrat.c@kbtg.tech" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Due to the lack of explicit end-of-sentence marker in Thai, Elementary Discourse Units (EDUs) are usually preferred over sentences as the basic linguistic units for processing Thai language. However, some segmented EDUs lack of structural or semantic information. To obtain a well-form unit, which represents a complete idea, this paper proposes combining EDUs with rhetorical relations selected depending on our proposed syntactic and semantic criteria. The combined EDUs can be then used without considering other parts of the text. Moreover, we also annotated data with the criteria. After that, we trained a deep learning model inspired by coreference resolution and dependency parsing models. As a result, our model achieves the F1 score of 82.72%.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Due to the lack of explicit end-of-sentence marker in Thai, Elementary Discourse Units (EDUs) are usually preferred over sentences as the basic linguistic units for processing Thai language. However, some segmented EDUs lack of structural or semantic information. To obtain a well-form unit, which represents a complete idea, this paper proposes combining EDUs with rhetorical relations selected depending on our proposed syntactic and semantic criteria. The combined EDUs can be then used without considering other parts of the text. Moreover, we also annotated data with the criteria. After that, we trained a deep learning model inspired by coreference resolution and dependency parsing models. As a result, our model achieves the F1 score of 82.72%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Generally, sentences have served as the basic linguistic units required by many tasks in NLP (e.g., text summarization and question answering) for processing long bodies of text (Mihalcea, 2004; Raiman & Miller, 2017; Van Lierde & Chow, 2019) . Basic linguistic units must contain complete propositional content that represents a single piece of idea. However, in Thai language, the sentence cannot clearly specify the boundary (Intasaw & Aroonmanakun, 2013, p. 491; Lertpiya et al., 2018) since there is no explicit end-of-sentence marker like a period in English.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 194, |
| "text": "(Mihalcea, 2004;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 195, |
| "end": 217, |
| "text": "Raiman & Miller, 2017;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 218, |
| "end": 242, |
| "text": "Van Lierde & Chow, 2019)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 428, |
| "end": 466, |
| "text": "(Intasaw & Aroonmanakun, 2013, p. 491;", |
| "ref_id": null |
| }, |
| { |
| "start": 467, |
| "end": 489, |
| "text": "Lertpiya et al., 2018)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For this reason, prior works (Ketui et al., 2015; Singkul et al., 2019; Sinthupoun & Sornil, 2010; Sukvaree et al., 2004) have suggested to use the Elementary Discourse Unit (EDU), which is the basic unit in discourse based on Rhetorical Structure Theory (RST) (Mann & Thompson, 1988; Taboada & Mann, 2006) , as a processing unit for Thai language. However, a single EDU, on its own, may not contain complete information to be understandable. Generally, EDUs are segmented based solely on syntactic criteria Intasaw & Aroonmanakun, 2013; Ketui et al., 2012) and used alongside RST relation (which contains semantic and structural information) to represent complete discourse structure. Figure 1 . EDU that is separated due to embedded clause To highlight that the EDU segmentation based on syntactic criteria creates individual EDUs with incomplete in structure and meaning, an example is shown in Figure 1 . The figure illustrates RST d-tree (Morey et al., 2018) As such, combining incomplete EDUs is necessary for obtaining a complete idea that is to represent through a well-formed EDU.", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 49, |
| "text": "(Ketui et al., 2015;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 50, |
| "end": 71, |
| "text": "Singkul et al., 2019;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 72, |
| "end": 98, |
| "text": "Sinthupoun & Sornil, 2010;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 99, |
| "end": 121, |
| "text": "Sukvaree et al., 2004)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 261, |
| "end": 284, |
| "text": "(Mann & Thompson, 1988;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 285, |
| "end": 306, |
| "text": "Taboada & Mann, 2006)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 508, |
| "end": 537, |
| "text": "Intasaw & Aroonmanakun, 2013;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 538, |
| "end": 557, |
| "text": "Ketui et al., 2012)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 943, |
| "end": 963, |
| "text": "(Morey et al., 2018)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 686, |
| "end": 694, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 898, |
| "end": 906, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper states the syntactic and semantic criteria for considering rhetorical relations used to combine EDUs into a well-formed EDU, which represents a complete idea. After that, rhetorical relations that correspond to those criteria are proposed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conducted experiments by building a dataset based on our proposed criteria and relations, which is then trained on a deep learning model. Our first experiment investigates methods to score the combination of EDU pairs. Since there is no prior work on this exact task, two methods from dependency parser task (Dozat & Manning, 2016) and coreference resolution task (Lee et al., 2017) were adapted on this task. In the second experiment, various EDU representations constructed from contextual word vectors are compared against one another. Lastly, we discuss how the result of our method covers the proposed rhetorical relations stated above.", |
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 334, |
| "text": "(Dozat & Manning, 2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 367, |
| "end": 385, |
| "text": "(Lee et al., 2017)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of this paper is structured as follows. In Section 2, the background knowledge related to this work is reviewed. The type of rhetorical relation to combine EDUs is explained in Section 3. While, in Section 4, we describe the architecture of our deep learning model. The dataset, implementation detail, and evaluation metrics are mentioned in Section 5. The results are shown in Section 6 and discussed in Section 7. Finally, Section 8 concludes the paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For processing to obtain single pieces of information from text, the text needs to be segmented into units that are related to each other. For this reason, Rhetorical structure theory (RST), a text organization theory, was proposed by Mann and Thompson (1988) . RST is widely applied in text and discourse processing. The essence of this theory is to analyze relationships between subu-nits in the discourse, which is based on the intention of the messenger and the content of text. A tree diagram is then used to represent the relationships between subunits.", |
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 259, |
| "text": "Mann and Thompson (1988)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Elementary discourse unit", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "RST's theoretical concept was developed to be more practical by . In the discourse structure, there is the smallest unit that can convey complete content and meaning, called Elementary Discourse Unit (EDU). EDU can determine the nuclearity status of each unit and can also specify the type of rhetorical relation between them. After that, the criteria used to segment EDUs in different languages and sets of rhetorical relations were proposed in large numbers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Elementary discourse unit", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In this paper, the criteria for segmenting Thai EDUs that proposed by Intasaw and Aroonmanakun (2013) was applied.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Elementary discourse unit", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Schauer 2000states that not only clauses but sometimes prepositional phrases can also be considered as discourse units. To support this statement, three principles for including phrases as discourse units are proposed as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The prior principle to combine discourse units.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "First, consideration of the complement-status and adjunct-status of a clause or phrase is the principle that if any discourse unit has a status as a complement of another discourse unit, they will be combined to express the complete meaning. On the other hand, if any discourse unit has a status as an adjunct of another discourse unit, there is no need to combine them into a larger unit, since the main discourse unit is already meaningful.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The prior principle to combine discourse units.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Second, consideration of semantic specification of lexeme is to consider whether each word in discourse unit needs to rely on other linguistic units for expressing its lexical meaning. For example, the unit that indicates the agent, the patient, the instrument, the location, or the time frame. If the words in the discourse unit need to rely on another discourse unit in order to express their lexical meaning, those units will be combined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The prior principle to combine discourse units.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Finally, consideration of discourse relation between units is the principle that the combined discourse units always show the discourse relation between each other. These relations can be identified by the function of conjunctions at the beginning of the unit.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The prior principle to combine discourse units.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this paper, the principles proposed by Schauer (2000) are applied as criteria to combine EDUs by considering the rhetorical relations between EDUs that are related to syntactic and semantic characteristics of text. The details on applying those criteria are presented in the next section.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 56, |
| "text": "Schauer (2000)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The prior principle to combine discourse units.", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "By applying the concepts from Schauer 2000, EDUs occurred in discourse structure can be considered as a clause or a phrase with the status of complement or adjunct. Besides that, four types of rhetorical relations only used to link between the matrix clause and unit with complement-status. The units with complementstatus are necessary to express the complete meaning. If such units are omitted, the expressed meaning is inadequate. The criteria for combing EDUs are applied by using four types of rhetorical relations that are linked between units with complement-status. The details of each type are explained in the following section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Types of Rhetorical Relation to Combine", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The attribution relation is used to link between a clause containing a reporting verb or attribution verb, e.g. \u0e1e\u0e39 \u0e14 'speak', \u0e23\u0e32\u0e22\u0e07\u0e32\u0e19 'report', \u0e1a\u0e2d\u0e01 'tell', \u0e04\u0e34 \u0e14 'think', \u0e2a\u0e31 \u0e48 \u0e07 'order', and a clause containing the content of the reported message (Carlson & Marcu, 2001, p. 46 ).", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 274, |
| "text": "(Carlson & Marcu, 2001, p. 46", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attribution", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As shown in Figure 2 , EDU 2 contains the content of the reporting verb \u0e04\u0e32\u0e14 'predict' in EDU 1 . In terms of syntactic and semantic characteristics, EDU 2 is a complement clause of the main verb in EDU 1 . If there is any part missing, the incomplete meaning will be conveyed. For this reason, both EDUs must be combined.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 2. EDUs that linked by attribution relation", |
| "sec_num": null |
| }, |
| { |
| "text": "The attribution-negative has properties similar to the attribution relation. The difference is that the attribution-negative is used for marking a negative attribution. Figure 3 , EDU 2 contains the content which cannot be omitted of an attributive verb \u0e1b\u0e0f\u0e34 \u0e40\u0e2a\u0e18 'deny' in EDU 1 . But, in this case, the main verb appeared in EDU 1 is semantically negative. Therefore, the attribution-negative is applied.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 177, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Attribution-negative", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Elaboration-object-attribute is a relation involving a clause, usually a postmodifier of a noun phrase in the matrix clause, that is required to give meaning to an animate or inanimate object (Carlson & Marcu, 2001, p. 54) . In this case, omitting modifier clauses may result in incomplete meaning. Figure 4 shows that EDU 1.1 need EDU 2 as a modifier for a noun phrase \u0e08\u0e31 \u0e07\u0e2b\u0e27\u0e31 \u0e14\u0e43\u0e2b\u0e0d\u0e48 \u0e17\u0e35 \u0e48 \u0e2a\u0e38 \u0e14 'the largest province'. If EDU 2 is omitted, the combination of EDU 1.1 and EDU 1.2 will mean 'The largest province is Surat Thani', which is the meaning that causes misunderstandings. Therefore, to become a meaningful unit, EDU 2 is a complement that must always be combined to EDU 1.1 .", |
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 222, |
| "text": "(Carlson & Marcu, 2001, p. 54)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 299, |
| "end": 307, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Elaboration-object-attribute", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Same-unit is pseudo-relation used as a device for linking two discontinuous text fragments that are really a single EDU, but which are broken up by an embedded unit (Carlson & Marcu, 2001, p. 66 ). Figure 5 . Single EDU that is separated due to embedded clause Considering Figure 5, ", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 194, |
| "text": "(Carlson & Marcu, 2001, p. 66", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 198, |
| "end": 206, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 273, |
| "end": 282, |
| "text": "Figure 5,", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Same-unit", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The architecture of our deep learning model for combining EDUs is presented in this section. The architecture, as shown in Figure 6 , is separated into four parts: Contextual word representation, EDU representation, EDU combination scoring, and the training and inference process. The detail of each part is elaborated in the following subsections.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 123, |
| "end": 131, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model architecture", |
| "sec_num": "4" |
| }, |
| { |
| "text": "This module is responsible for converting a sequence of words and the corresponding part-of-speech tags (POS) into a sequence of contextual word vectors", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 109, |
| "text": "(POS)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contextual word representation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where is the word sequence length, as illustrated in Figure 6 . First, each word and its POS in the sequence is converted into embedding vectors where is the concatenation of word embeddings and POS embeddings. After that, the sequence of concatenated embedding vectors is fed into Bidirectional Long-short memory network (Bi-LSTM) to create contextual word vectors .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 53, |
| "end": 61, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contextual word representation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This module aggregates the contextual word vectors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EDU representation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "into EDU representation where is the number of EDUs ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EDU representation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "). For this module, we adopt two methods to create the EDU representation. These methods were proposed by Lee et al. (2017) to create span representation for performing coreference resolution. Therefore, in this module, each EDU vector is concatenated from an end-point vector and a self-attention vector ,which are described in the following subsections.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 123, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "and represents one EDU (", |
| "sec_num": null |
| }, |
| { |
| "text": "End-point representation presents each by concatenating with three parts, as shown in Eq. 1. The beginning and end contextual word vectors ( ) are concatenated with the length of the EDU in words (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "and represents one EDU (", |
| "sec_num": null |
| }, |
| { |
| "text": "). Therefore, this representation captures the keyword at the beginning and the end alongside its length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "and represents one EDU (", |
| "sec_num": null |
| }, |
| { |
| "text": ", as shown in Eq. 2. Each word is assigned a weight which is computed from . First, each contextual word vector ( ) is fed to a linear function ( ). Second, the output vector is applied with Softmax function to calculate the weight for each word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Self-attention representation is the weighted summation of contextual word vectors in", |
| "sec_num": null |
| }, |
| { |
| "text": "This module is responsible for calculating the scores for where is a score between and . The two proposed methods are inspired by dependency parser and coreference resolution tasks. Dozat and Manning (2016) . First, each EDU representation is embedded into the child vector and the parent vector by linear functions ( and ), as shown in Eq. 3 and Eq. 4.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 206, |
| "text": "Dozat and Manning (2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EDU combination scoring", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "After that, both sequences of vectors ( and ) are applied to bilinear matrix attention to compute the scores , which are computed as shown in Eq. 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-parsing-based method is based on biaffine dependency parsing, proposed by", |
| "sec_num": null |
| }, |
| { |
| "text": "Coreference-based method is based on end-toend coreference resolution, Lee et al. (2017) , as shown in Eq. 6. The score of each pair of EDU ( , ) is composed of three parts: two individual scores ( , ) of EDUs and the antecedent score calculated from both EDUs ( ).", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 88, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-parsing-based method is based on biaffine dependency parsing, proposed by", |
| "sec_num": null |
| }, |
| { |
| "text": "Both types of scores are calculated based on linear functions ( , ), which are describes below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-parsing-based method is based on biaffine dependency parsing, proposed by", |
| "sec_num": null |
| }, |
| { |
| "text": "where denotes the dot product, \u2022 denotes element-wise multiplication, and are weights for calculating an individual score and an antecedent score respectively, and is the distance between and .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-parsing-based method is based on biaffine dependency parsing, proposed by", |
| "sec_num": null |
| }, |
| { |
| "text": "This process is fed with combination scores of each EDU. In this case, zero is added to the list of scores to represent an individual EDU. Then, Softmax function is applied to the scores to find the probability distribution of combining . The probability indicates if the EDU should be combined and which EDU is combined with. The answer is the position of highest probability , as shown in Eq. 9. If is the index of added zero, the considered EDU is individual. In the other hand, if is other position, the considered EDU is combined with the EDU at .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference and training process", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In the training process, the marginal loglikelihood of all correct combined EDU position ( ), which is indicated from the label. The calculation of the loss is shown in Eq. 10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference and training process", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In this work, the dataset is collected from social media sources, including conversations and posts. After that, the text is segmented into EDUs by our in-house model and then combined EDUs from mentioned criteria by linguists. The dataset contains 161,515 arbitrary texts, which can be segmented into 847,186 EDUs. There are 59,564 pairs of EDUs that are combined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In the annotation process, we also specify which relation in the criteria is used to combine. The number of each relation in the data is shown in Table In pre-processing, in-house models that are trained by social media data are used (Lertpiya et al., 2018) . The text is tokenized into a sequence of words. After that, each word is tagged with part-of-speech.", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 257, |
| "text": "(Lertpiya et al., 2018)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 151, |
| "text": "Table", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In the training and inference process, the dataset is randomly split with a ratio 9:1 for training set and testing set, respectively. After that, we split 10% of training set for validation set. Meanwhile, the rest of training set is truly used for training the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The hyperparameters of the trained model and the optimizer are described in this section. The word and POS embedding sizes are 300 and 100. The word embedding is pre-trained on social media data with the Skip-gram technique (Mikolov et al., 2013) . Four layers of Bi-LSTM are stacked. The hidden size of each Bi-LSTM is 32, and there are dropout layers whose rate is 0.1 between the layers. In end-point representation, the size of length embedding is 20. In the dependency-based scoring method, the hidden sizes of fully connected layers and are both 64. Meanwhile, in the coreference-based scoring method, the embedding size of the distance is 20. The hidden size of fully connected layers and are 30 and 150, respectively.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 246, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation details", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The optimizer is ADAM (Zhang, 2018) , whose initial learning rate is 0.001. The learning rate is reduced by 0.5 when the F1 score has stopped improving for five epochs. The batch size is 16. The model is trained 40 epochs and selects the model, which gains the highest F1 score on validation set.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 35, |
| "text": "(Zhang, 2018)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation details", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The metric for this paper should have three characteristics. First, the metric reflects the intersection between combined EDUs label and prediction to measure the performance of the model. Second, since a combined EDUs is not always limited to be constructed from only two EDUs, so the metric needs to consider combining more than two EDUs. Finally, the metric ignores the order or direction of relations that are used to combine because combining EDUs does not need to be interested in the relations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "According to, a coreference resolution task also considers those characteristics for evaluation. Therefore, the metrics from coreference resolution are chosen for evaluation for this paper. However, there are many methods proposed for calculating F1 on coreference resolution. Each method has different advantages and drawbacks. Thus, the average of F1 that used in this paper are calculated from three methods: MUC (Vilain et al., 1995) , (Bagga & Baldwin, 1998) , and (Luo, 2005) as same as Lee et al. (2017) .", |
| "cite_spans": [ |
| { |
| "start": 416, |
| "end": 437, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 440, |
| "end": 463, |
| "text": "(Bagga & Baldwin, 1998)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 470, |
| "end": 481, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 493, |
| "end": 510, |
| "text": "Lee et al. (2017)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The results of two experiments are discussed in this section. The first experiment compares the different EDU combination scoring method. Meanwhile, the second experiment shows the performance of each EDU representation constructed from contextual word vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The different of EDU combination scoring methods are compared in the experiment. In this case, EDU representation module is composed of only end-point module. Table 2 shows that scoring with the coreference-based method outperforms the dependency-based method in terms of F1 score. The result occurs because the coreference-based method includes the distance between considered EDUs in a score calculation. Meanwhile, the dependency-based method exploits only the representation of the considered EDUs. Table 2 . The comparison of each combining scoring method Figure 7 shows the frequency of the distance between combined EDUs to prove the aforementioned statement. The result indicates that the distance of combined EDUs is usually short.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 166, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 503, |
| "end": 510, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 561, |
| "end": 569, |
| "text": "Figure 7", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "EDU combination scoring", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "In other words, if EDUs are far from each other, those EDUs should not be combined. Therefore, the distance is an important feature for the model on combining EDUs. As a result, the coreference-based method, which includes the distance feature, can perform better than the dependency-based method. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EDU combination scoring", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "In this section, the difference between an endpoint representation and self-attention representation is shown. Table 3 shows the results that, by using end-point representation, the F1 is slightly higher than using self-attention representation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 118, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "EDU representation", |
| "sec_num": "6.2" |
| }, |
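The two representation schemes can be sketched as follows. This is a minimal illustration under assumed dimensions, not the paper's encoder: the end-point representation concatenates the first- and last-word vectors of an EDU span, while the self-attention representation takes a learned weighted sum over all word vectors (the attention parameter `w` here is a hypothetical placeholder).

```python
import numpy as np

def end_point_repr(word_vecs):
    """Concatenate the first- and last-word vectors of the EDU span."""
    return np.concatenate([word_vecs[0], word_vecs[-1]])

def self_attention_repr(word_vecs, w):
    """Softmax-weighted sum over all word vectors in the EDU."""
    scores = word_vecs @ w                     # one scalar score per word
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over words
    return weights @ word_vecs

edu = np.arange(12.0).reshape(4, 3)            # 4 words, 3-dim embeddings
print(end_point_repr(edu))                     # first and last word vectors stacked
print(self_attention_repr(edu, np.zeros(3)))   # zero scores -> uniform weights -> mean
```

The end-point variant sees only the boundary words, which is exactly where Thai complementizer markers tend to sit, while the self-attention variant must learn to focus there.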
| { |
| "text": "The reason is that most of combined EDUs contain a complementizer ('\u0e27\u0e48 \u0e32', '\u0e17\u0e35 \u0e48 ') as a marker, which is usually the first word of preceded EDU in the combined EDUs. The statement can be proved in Figure 8 . This figure shows that the two most frequent words at the beginning of the preceded EDU in combined EDU are the mentioned complementizers. Therefore, the endpoint representation can be trained easier because the representation focuses on the beginning and end words of the EDU. Instead of using each representation separately, both representations are trained in the model. The model achieves the highest score at 82.72% in terms of F1. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 198, |
| "end": 206, |
| "text": "Figure 8", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 3. The comparison of each representation", |
| "sec_num": null |
| }, |
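The marker-frequency analysis behind this argument amounts to counting the first token of the preceded EDU in each combined pair. A hypothetical sketch, using English placeholder tokens in place of Thai words (Thai text has no spaces, so we assume word segmentation has already been applied and each EDU is a list of tokens):

```python
from collections import Counter

# Hypothetical combined pairs: (preceding_edu, preceded_edu), each a token list.
combined_pairs = [
    (["he", "said"], ["that", "it", "rained"]),
    (["she", "thinks"], ["that", "he", "left"]),
    (["a", "man"], ["who", "is", "a", "lawyer"]),
]

def first_word_counts(pairs):
    """Frequency of the first token of the preceded EDU in each combined pair."""
    return Counter(preceded[0] for _, preceded in pairs)

print(first_word_counts(combined_pairs))  # Counter({'that': 2, 'who': 1})
```

On real data this histogram is dominated by the complementizers, which is what makes the boundary-focused end-point representation effective.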
| { |
| "text": "According to the results of the above, the discussion of the model is presented in this section. The details are focused on the process that the best model works on each relation for combining EDUs. Table 4 shows the recall of each relation. In this case, because the model does not concern about which class is predicted, so the precision of each relation is not evaluated. The model achieves 82.17% and 95.02% on 'attribution' and 'attribution-n'. According to, there is a marker like a complementizer '\u0e27\u0e48 \u0e32' to indicate that they should be combined.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 199, |
| "end": 206, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
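The per-relation recall evaluation described above can be sketched directly. This is an assumed formulation, not the paper's evaluation code: the model predicts only *whether* two EDUs combine, so we can ask, for each gold relation label, what fraction of its pairs were combined (recall), but precision per relation is undefined.

```python
from collections import Counter

def per_relation_recall(gold_pairs, predicted_pairs):
    """Recall per gold relation label; the predictor outputs unlabeled pairs."""
    total, hit = Counter(), Counter()
    for pair, relation in gold_pairs:
        total[relation] += 1
        if pair in predicted_pairs:
            hit[relation] += 1
    return {rel: hit[rel] / total[rel] for rel in total}

# Toy example with hypothetical EDU-index pairs and labels:
gold = [((1, 2), "attribution"), ((3, 4), "attribution"), ((5, 6), "same-unit")]
pred = {(1, 2), (5, 6)}
print(per_relation_recall(gold, pred))  # {'attribution': 0.5, 'same-unit': 1.0}
```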
| { |
| "text": "Meanwhile, 'elaboration-object attribute' is usually indicated with a complementizer '\u0e17\u0e35 \u0e48 ' as a marker. However, this complementizer is also included as a marker in other RST relations. Therefore, the model achieves only 71.35% recalls on this relation due to the various usage of the marker.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Besides that, the model cannot perform well on 'same-unit' relation, of which recall is only 53.01%. The reason is that there is no marker to classify this relation. Moreover, this relation is rarely found on our dataset. Therefore, there is insufficient information for the model to learn this relation without any marker.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In this paper, we propose the criteria of both the syntactic and semantic way for selecting rhetorical relations, which are used to combine EDUs. Moreover, the dataset was annotated by using those criteria before being used to train a deep learning model. Two experiments were conducted to find the best configuration of the model. In the EDU combination scoring, the coreference-based method achieved a better F1 score. We suspect this because a distance between EDUs is correlated to its likeliness to combine. Meanwhile, using both proposed EDU representations lead to the best F1 score for combining EDUs. The best model achieves 82.72% in terms of F1 score. In an ablation study, the model works well on 'attribution' and 'attribution-n' relation due to the keywords at the beginning of EDU. However, 'same-unit' relation is hard for the model to predict as a relation for combining EDUs because there is no keyword to guide the model and lack of annotated data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In this paper, we have primarily focused on the combining EDUs task. However, further experiments should be performed to evaluate how the use of combined EDUs affects downstream tasks (e.g., intention classification and sentiment analysis).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Entity-Based Cross-Document Coreferencing Using the Vector Space Model", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bagga, A., & Baldwin, B. (1998). Entity-Based Cross-Document Coreferencing Using the Vector Space Model. 36th Annual Meeting of the Associ- ation for Computational Linguistics and 17th In- ternational Conference on Computational Linguis- tics, Volume 1, 79-85.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Discourse tagging reference manual", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlson, L., & Marcu, D. (2001). Discourse tagging reference manual. University of Southern Califor- nia Information Sciences Institute.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "E" |
| ], |
| "last": "Okurowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Second SIGdial Workshop on Discourse and Dialogue", |
| "volume": "16", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlson, L., Marcu, D., & Okurowski, M. E. (2001). Building a discourse-tagged corpus in the frame- work of Rhetorical Structure Theory. Proceedings of the Second SIGdial Workshop on Discourse and Dialogue, 16, 1-10.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Deep biaffine attention for neural dependency parsing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1611.01734" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dozat, T., & Manning, C. D. (2016). Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Basic principles for segmenting Thai EDUs", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Intasaw", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Aroonmanakun", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "491--498", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Intasaw, N., & Aroonmanakun, W. (2013). Basic principles for segmenting Thai EDUs. Proceedings of the 27th Pacific Asia Conference on Language, Information, and Computation (PACLIC 27), 491- 498.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A rule-based method for thai elementary discourse unit segmentation (ted-seg)", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ketui", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Theeramunkong", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Onsuwan", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Seventh International Conference on Knowledge, Information and Creativity Support Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "195--202", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ketui, N., Theeramunkong, T., & Onsuwan, C. (2012). A rule-based method for thai elementary discourse unit segmentation (ted-seg). 2012 Sev- enth International Conference on Knowledge, In- formation and Creativity Support Systems, 195- 202.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "An EDU-Based Approach for Thai Multi-Document Summarization and Its Application", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ketui", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Theeramunkong", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Onsuwan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process", |
| "volume": "14", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ketui, N., Theeramunkong, T., & Onsuwan, C. (2015). An EDU-Based Approach for Thai Multi- Document Summarization and Its Application. ACM Trans. Asian Low-Resour. Lang. Inf. Pro- cess., 14(1), Article 4.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "End-to-end neural coreference resolution", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1707.07045" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lee, K., He, L., Lewis, M., & Zettlemoyer, L. (2017). End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A preliminary study on fundamental thai nlp tasks for usergenerated web content", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lertpiya", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chaiwachirasak", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Maharattanamalai", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Lapjaturapit", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chalothorn", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Tirasaroj", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Chuangsuwanich", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "International Joint Symposium on Artificial Intelligence and Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lertpiya, A., Chaiwachirasak, T., Maharattanamalai, N., Lapjaturapit, T., Chalothorn, T., Tirasaroj, N., & Chuangsuwanich, E. (2018). A preliminary study on fundamental thai nlp tasks for user- generated web content. 2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing (ISAI-NLP), 1-8.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luo, X. (2005). On coreference resolution perfor- mance metrics. Proceedings of Human Language Technology Conference and Conference on Em- pirical Methods in Natural Language Processing, 25-32.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Rhetorical structure theory: Toward a functional theory of text organization", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Text", |
| "volume": "8", |
| "issue": "3", |
| "pages": "243--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mann, W. C., & Thompson, S. A. (1988). Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3), 243-281.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Graph-based ranking algorithms for sentence extraction, applied to text summarization", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions", |
| "volume": "", |
| "issue": "", |
| "pages": "170--173", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, R. (2004). Graph-based ranking algorithms for sentence extraction, applied to text summariza- tion. Proceedings of the ACL Interactive Poster and Demonstration Sessions, 170-173.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "26", |
| "issue": "", |
| "pages": "3111--3119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Ad- vances in Neural Information Processing Systems, 26, 3111-3119.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A dependency perspective on rst discourse parsing and evaluation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Morey", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Muller", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Asher", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Computational Linguistics", |
| "volume": "44", |
| "issue": "2", |
| "pages": "197--235", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Morey, M., Muller, P., & Asher, N. (2018). A de- pendency perspective on rst discourse parsing and evaluation. Computational Linguistics, 44(2), 197-235.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Globally normalized reader", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Raiman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1709.02828" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raiman, J., & Miller, J. (2017). Globally normalized reader. arXiv preprint arXiv:1709.02828.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "From elementary discourse units to complex ones", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schauer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "46--55", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schauer, H. (2000). From elementary discourse units to complex ones. 1st SIGdial Workshop on Dis- course and Dialogue, 46-55.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Parsing Thai Social Data: A New Challenge for Thai NLP", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Singkul", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Khampingyot", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Maharattamalai", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Taerungruang", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chalothorn", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "14th International Joint Symposium on Artificial Intelligence and Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Singkul, S., Khampingyot, B., Maharattamalai, N., Taerungruang, S., & Chalothorn, T. (2020). Pars- ing Thai Social Data: A New Challenge for Thai NLP. 2019 14th International Joint Symposium on Artificial Intelligence and Natural Language Pro- cessing (ISAI-NLP), 1-7.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Thai rhetorical structure analysis", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sinthupoun", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Sornil", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "International Journal of Computer Science and Information Security (IJCSIS)", |
| "volume": "7", |
| "issue": "1", |
| "pages": "95--105", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sinthupoun, S., & Sornil, O. (2010). Thai rhetorical structure analysis. International Journal of Com- puter Science and Information Security (IJCSIS), 7(1), 95-105.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "RST based Text Summarization with Ontology Driven in Agriculture Domain", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Sukvaree", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Charoensuk", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Wattanamethanont", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kultrakul", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sukvaree, T., Charoensuk, J., Wattanamethanont, M., & Kultrakul, A. (2004). RST based Text Summa- rization with Ontology Driven in Agriculture Do- main. Department of Computer Engineering, Kasetsart University, Bangkok, Thailand.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Rhetorical Structure Theory: looking back and moving ahead", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Taboada", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Discourse Studies", |
| "volume": "8", |
| "issue": "3", |
| "pages": "423--459", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taboada, M., & Mann, W. C. (2006). Rhetorical Structure Theory: looking back and moving ahead. Discourse Studies, 8(3), 423-459.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Queryoriented text summarization based on hypergraph transversals", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Van Lierde", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "W" |
| ], |
| "last": "Chow", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Information Processing & Management", |
| "volume": "56", |
| "issue": "4", |
| "pages": "1317--1338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Van Lierde, H., & Chow, T. W. (2019). Query- oriented text summarization based on hypergraph transversals. Information Processing & Manage- ment, 56(4), 1317-1338.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A model-theoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Sixth Message Understanding Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vilain, M., Burger, J., Aberdeen, J., Connolly, D., & Hirschman, L. (1995). A model-theoretic coreference scoring scheme. Sixth Message Un- derstanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, No- vember 6-8, 1995, 45-52.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Improved adam optimizer for deep neural networks", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--2", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, Z. (2018). Improved adam optimizer for deep neural networks. 2018 IEEE/ACM 26th Interna- tional Symposium on Quality of Service (IWQoS), 1-2.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "EDUs that linked by attributionnegative relation Like the attribution relation, in", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "EDUs that linked by elaborationobject-attribute", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 6. Model Architecture", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Distance between combined EDU", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "The frequency of the first word on the preceded EDU in the combined EDUs", |
| "num": null |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td/><td/><td/><td>of</td></tr><tr><td colspan=\"4\">EDU, which is separated by an embedded</td></tr><tr><td>clause.</td><td>A</td><td>matrix</td><td>clause</td></tr><tr><td colspan=\"3\">[\u0e1c\u0e39 \u0e49 \u0e0a\u0e32\u0e22\u0e44\u0e1b\u0e17 \u0e32\u0e07\u0e32\u0e19\u0e40\u0e21\u0e37 \u0e48 \u0e2d\u0e40\u0e0a\u0e49 \u0e32\u0e19\u0e35 \u0e49</td><td/></tr></table>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "] 'A man went to work this morning', which should be one unit of EDU, is separated into two units, [\u0e1c\u0e39 \u0e49 \u0e0a\u0e32\u0e22] 'A man' and [\u0e44\u0e1b\u0e17 \u0e32\u0e07\u0e32\u0e19\u0e40\u0e21\u0e37 \u0e48 \u0e2d\u0e40\u0e0a\u0e49 \u0e32\u0e19\u0e35 \u0e49 ] 'went to work this morning', because there is embedded clause [\u0e17\u0e35 \u0e48 \u0e40\u0e1b\u0e47 \u0e19\u0e17\u0e19\u0e32\u0e22] 'who is a lawyer' modifying a noun [\u0e1c\u0e39 \u0e49 \u0e0a\u0e32\u0e22] 'A man' in matrix clause." |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "type_str": "table", |
| "text": "a matrix clause \u0e09\u0e31 \u0e19\u0e2b\u0e32\u0e23\u0e49 \u0e32\u0e19\u0e2d\u0e32\u0e2b\u0e32\u0e23\u0e44\u0e21\u0e48 \u0e44\u0e14\u0e49 \u0e40\u0e25\u0e22 'I cannot find a restaurant', which is a single EDU, is broken up into 2 units by an embedded clause with adjunctstatus \u0e17\u0e35 \u0e48 \u0e40\u0e1b\u0e34 \u0e14\u0e16\u0e36 \u0e07\u0e40\u0e17\u0e35 \u0e48 \u0e22\u0e07\u0e04\u0e37 \u0e19 'that is open until midnight' modifying a noun \u0e23\u0e49 \u0e32\u0e19\u0e2d\u0e32\u0e2b\u0e32\u0e23 'a restaurant' in a matrix clause. In this case, EDU 1.1 [\u0e09\u0e31 \u0e19\u0e2b\u0e32\u0e23\u0e49 \u0e32\u0e19\u0e2d\u0e32\u0e2b\u0e32\u0e23] 'I find a restaurant' and EDU 1.2 [\u0e44\u0e21\u0e48 \u0e44\u0e14\u0e49 \u0e40\u0e25\u0e22] 'cannot' have to be combined to represent the meaning as a single EDU.The types of rhetorical relations presented in this section are used as theoretical backgrounds for implementing combining EDUs model. Details are discussed in the next section." |
| } |
| } |
| } |
| } |