| { |
| "paper_id": "D11-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:32:08.707615Z" |
| }, |
| "title": "Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities", |
| "authors": [ |
| { |
| "first": "Lanjun", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ljzhou@se.cuhk.edu.hk" |
| }, |
| { |
| "first": "Binyang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "byli@se.cuhk.edu.hk" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "wgao@se.cuhk.edu.hk" |
| }, |
| { |
| "first": "Zhongyu", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "zywei@se.cuhk.edu.hk" |
| }, |
| { |
| "first": "Kam-Fai", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kfwong@se.cuhk.edu.hk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Polarity classification of opinionated sentences with both positive and negative sentiments (defined as ambiguous sentences in this paper) is a key challenge in sentiment analysis. This paper presents a novel unsupervised method for discovering intra-sentence level discourse relations for eliminating polarity ambiguities. Firstly, a discourse scheme with discourse constraints on polarity was defined empirically based on Rhetorical Structure Theory (RST). Then, a small set of cue-phrase-based patterns was utilized to collect a large number of discourse instances, which were later converted to semantic sequential representations (SSRs). Finally, an unsupervised method was adopted to generate, weigh and filter new SSRs without cue phrases for recognizing discourse relations. Experimental results showed that the proposed methods not only effectively recognized the defined discourse relations but also achieved significant improvement by integrating discourse information in sentence-level polarity classification.",
| "pdf_parse": { |
| "paper_id": "D11-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Polarity classification of opinionated sentences with both positive and negative sentiments (defined as ambiguous sentences in this paper) is a key challenge in sentiment analysis. This paper presents a novel unsupervised method for discovering intra-sentence level discourse relations for eliminating polarity ambiguities. Firstly, a discourse scheme with discourse constraints on polarity was defined empirically based on Rhetorical Structure Theory (RST). Then, a small set of cue-phrase-based patterns was utilized to collect a large number of discourse instances, which were later converted to semantic sequential representations (SSRs). Finally, an unsupervised method was adopted to generate, weigh and filter new SSRs without cue phrases for recognizing discourse relations. Experimental results showed that the proposed methods not only effectively recognized the defined discourse relations but also achieved significant improvement by integrating discourse information in sentence-level polarity classification.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "As an important task of sentiment analysis, polarity classification is critically affected by discourse structure (Polanyi and Zaenen, 2006) . Previous research developed discourse schemes (Asher et al., 2008) (Somasundaran et al., 2008) and proved that the utilization of discourse relations could improve the performance of polarity classification on dialogues (Somasundaran et al., 2009) . However, current state-of-the-art methods for sentence-level polarity classification face difficulties in ascertaining the polarity of some sentences (defined as ambiguous sentences in this paper). For example: Example (a) is a positive sentence holding a Contrast relation between the first two segments and a Cause relation between the last two segments. \"criticized\", \"hated\" and \"corrupted\" are recognized as negative expressions while \"loved\" is recognized as a positive expression. Example (a) is difficult for existing polarity classification methods for two reasons: (1) the number of positive expressions is smaller than the number of negative expressions; (2) the importance of each sentiment expression is unknown. However, considering Figure 1 , if we know that the polarity of the first two segments holding a Contrast relation is determined by the nucleus (Mann and Thompson, 1988) segment and the polarity of the last two segments holding a Cause relation is also determined by the nucleus segment, then the polarity of the sentence is determined by the polarity of \"[he...population]\". Thus, the polarity of Example (a) is positive.",
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 140, |
| "text": "(Polanyi and Zaenen, 2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 188, |
| "end": 208, |
| "text": "(Asher et al., 2008)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 209, |
| "end": 236, |
| "text": "(Somasundaran et al., 2008)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 362, |
| "end": 389, |
| "text": "(Somasundaran et al., 2009)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 406, |
| "end": 407, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 1258, |
| "end": 1283, |
| "text": "(Mann and Thompson, 1988)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1135, |
| "end": 1143, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Statistics showed that 43% of the opinionated sentences in the NTCIR MOAT (Multilingual Opinion Analysis Task) Chinese corpus are ambiguous. Existing sentence-level polarity classification methods that ignore discourse structure often give wrong results for these sentences. We implemented the state-of-the-art method (Xu and Kit, 2010) in NTCIR-8 Chinese MOAT as the baseline polarity classifier (BPC) in this paper. Error analysis of BPC showed that 49% of its errors came from ambiguous sentences.",
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 330, |
| "text": "(Xu and Kit, 2010)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we focused on automatically recognizing intra-sentence level discourse relations for polarity classification. Based on previous work on Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) , a discourse scheme with discourse constraints on polarity was defined empirically (see Section 3). The scheme contains 5 relations: Contrast, Condition, Continuation, Cause and Purpose. From a raw corpus, a small set of cue-phrase-based patterns was used to collect discourse instances. These instances were then converted to semantic sequential representations (SSRs). Finally, an unsupervised SSR learner was adopted to generate, weigh and filter high-quality new SSRs without cue phrases. Experimental results showed that the proposed methods could effectively recognize the defined discourse relations and achieve significant improvement in sentence-level polarity classification compared to BPC.",
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 218, |
| "text": "(Mann and Thompson, 1988)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 presents the discourse scheme with discourse constraints on polarity. Section 4 details the proposed method. Experimental results are reported and discussed in Section 5, and Section 6 concludes this paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Research on polarity classification was generally conducted on 4 levels: document-level (Pang et al., 2002) , sentence-level (Riloff et al., 2003) , phrase-level (Wilson et al., 2009) and feature-level (Hu and Liu, 2004; Xia et al., 2007) .",
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 108, |
| "text": "(Pang et al., 2002)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 126, |
| "end": 147, |
| "text": "(Riloff et al., 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 162, |
| "end": 183, |
| "text": "(Wilson et al., 2009)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 202, |
| "end": 220, |
| "text": "(Hu and Liu, 2004;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 221, |
| "end": 238, |
| "text": "Xia et al., 2007)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Little research in the literature has focused on the automatic recognition of intra-sentence level discourse relations for sentiment analysis. Polanyi and Zaenen (2006) argued that valence calculation is critically affected by discourse structure. Asher et al. (2008) proposed a shallow semantic representation using a feature structure and used five types of rhetorical relations to build a fine-grained corpus for deep contextual sentiment analysis. Nevertheless, they did not propose a computational model for their discourse scheme. Snyder and Barzilay (2007) combined an agreement model based on contrastive RST relations with a local aspect model to make a more informed overall decision for sentiment classification. Nonetheless, contrastive relations are only one type of discourse relation that may help polarity classification. Sadamitsu et al. (2008) modeled polarity reversal using HCRFs integrated with inter-sentence discourse structures. In contrast, our work is at the intra-sentence level, and our purpose is not to find polarity reversals but to adapt general discourse schemes (e.g., RST) to help determine the overall polarity of ambiguous sentences.",
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 175, |
| "text": "Polanyi and Zaenen (2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 255, |
| "end": 274, |
| "text": "Asher et al. (2008)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 553, |
| "end": 568, |
| "text": "Barzilay (2007)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 846, |
| "end": 869, |
| "text": "Sadamitsu et al. (2008)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "The most closely related works were (Somasundaran et al., 2008) and (Somasundaran et al., 2009) , which proposed opinion frames as a representation of discourse-level associations in dialogue and modeled the scheme to improve opinion polarity classification. However, opinion frames were difficult to implement because recognizing opinion targets is very challenging in general text. Our work differs from their approaches in two key aspects: (1) we distinguished nucleus and satellite in discourse while opinion frames did not; (2) our method for discourse discovery was unsupervised while their method needed annotated data.",
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 63, |
| "text": "(Somasundaran et al., 2008)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 68, |
| "end": 95, |
| "text": "(Somasundaran et al., 2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Most research on discourse classification was not related to sentiment analysis. Supervised discourse classification methods (Soricut and Marcu, 2003; Duverle and Prendinger, 2009) needed manually annotated data. Marcu and Echihabi (2002) presented an unsupervised method to recognize discourse relations held between arbitrary spans of text. They showed that lexical pairs extracted from massive amounts of data can have a major impact on discourse classification. Blair-Goldensohn et al. (2007) extended Marcu's work by using parameter optimization, topic segmentation and syntactic parsing. However, syntactic parsers are usually costly and impractical when dealing with large-scale text. Thus, in addition to lexical features, we incorporated sequential and semantic information into the proposed method for discourse relation classification. Moreover, our method remained language independent, so it can be applied to other languages.",
| "cite_spans": [ |
| { |
| "start": 135, |
| "end": 160, |
| "text": "(Soricut and Marcu, 2003;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 161, |
| "end": 190, |
| "text": "Duverle and Prendinger, 2009)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 223, |
| "end": 248, |
| "text": "Marcu and Echihabi (2002)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 475, |
| "end": 505, |
| "text": "Blair-Goldensohn et al. (2007)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Since not all of the discourse relations in RST would help eliminate polarity ambiguities, the discourse scheme defined in this paper was on a much coarser level. To ascertain which relations should be included in our scheme, 500 ambiguous sentences were randomly chosen from the NTCIR MOAT Chinese corpus and the most common discourse relations connecting independent clauses in compound sentences were annotated. We found that 13 relations from RST occupied about 70% of the annotated discourse relations that may help eliminate polarity ambiguities. Inspired by Marcu and Echihabi (2002) , to construct relatively low-noise discourse instances for unsupervised methods using cue phrases, we grouped the 13 relations into the following 5 relations:",
| "cite_spans": [ |
| { |
| "start": 575, |
| "end": 600, |
| "text": "Marcu and Echihabi (2002)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Scheme for Eliminating Polarity Ambiguities", |
| "sec_num": "3" |
| }, |
| { |
"text": "Contrast is a union of Antithesis, Concession, Otherwise and Contrast from RST. Condition is selected from RST. Continuation is a union of Continuation and Parallel from RST. Cause is a union of Evidence, Volitional-cause, Nonvolitional-cause, Volitional-result and Nonvolitional-result from RST. Purpose is selected from RST.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Scheme for Eliminating Polarity Ambiguities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The discourse constraints on polarity presented here were based on the observation of annotated discourse instances: (1) discourse instances holding Contrast relation should contain two segments with opposite polarities; (2) discourse instances holding Continuation relation should contain two segments with the same polarity; (3) the polarity of discourse instances holding Contrast, Condition, Cause or Purpose was determined by the nucleus segment; (4) the polarity of discourse instances holding Continuation was determined by either segment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse Scheme for Eliminating Polarity Ambiguities", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Cue Phrases (English Translation) Contrast", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation", |
| "sec_num": null |
| }, |
| { |
"text": "although 1 , but 2 , however 2 ; Condition: if 1 , (if 1 \uff0cthen 2 )",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation", |
| "sec_num": null |
| }, |
| { |
"text": "Continuation: and, furthermore, (not only, but also); Cause: because 1 , thus 2 , accordingly 2 , as a result 2 ; Purpose: in order to 2 , in order that 2 , so that 2 . Here 1 means CUE1 and 2 means CUE2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation", |
| "sec_num": null |
| }, |
| { |
"text": "The proposed methods were based on two assumptions: (1) cue-phrase-based patterns could be used to find a limited number of high-quality discourse instances;",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "4" |
| }, |
| { |
| "text": "(2) discourse relations were determined by lexical, structural and semantic information between two segments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "4" |
| }, |
| { |
"text": "Cue-phrase-based patterns could find only a limited number of discourse instances, although with high precision (Marcu and Echihabi, 2002) . Therefore, we could not rely on cue-phrase-based patterns alone. Moreover, there was no annotated corpus similar to the Penn Discourse TreeBank (Miltsakaki et al., 2004) in languages other than English, such as Chinese. Thus, we proposed a language-independent unsupervised method to identify discourse relations without cue phrases while maintaining relatively high precision. For each discourse relation, we started with several cue-phrase-based patterns and collected a large number of discourse instances from a raw corpus. Then, the discourse instances were converted to semantic sequential representations (SSRs). Finally, an unsupervised method was adopted to generate, weigh and filter common SSRs without cue phrases. The mined common SSRs could be used directly in our SSR-based classifier in an unsupervised manner or be employed as effective features for supervised methods.",
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 126, |
| "text": "(Marcu and Echihabi, 2002)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 269, |
| "end": 294, |
| "text": "(Miltsakaki et al., 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A discourse instance, denoted by D i , consists of two successive segments (D i[1] , D i[2]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
"text": ") within a sentence. For example: in D 1 , \"although\" indicated the satellite segment while in D 2 , \"but\" indicated the nucleus segment. Accordingly, different cue phrases may indicate different segment types. Table 1 listed some examples of cue phrases for each discourse relation. Some cue phrases were singletons (e.g. \"although\" and \"as a result\") and some were used as a pair (e.g. \"not only, but also\"). \"CUE1\" indicated satellite segments and \"CUE2\" indicated nucleus segments. Note that we did not distinguish satellite from nucleus for Continuation in this paper because the polarity could be determined by either segment. Table 2 listed cue-phrase-based patterns for all relations. To simplify the problem of discourse segmentation, we split compound sentences into discourse segments using commas and semicolons. Although we collected discourse instances from compound sentences only, the number of instances for each discourse relation was large enough for the proposed unsupervised method. Note that we only collected instances containing at least one sentiment word in each segment.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 210, |
| "end": 217, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 631, |
| "end": 638, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "D 1 : [Although Boris is very brilliant at math] s , [he ...] n . The cue-phrase-based patterns in Table 2 took the forms: BOS ... \uff0c[CUE2] ... EOS; BOS [CUE1] ... \uff0c ... EOS; BOS ... \uff0c[CUE1] ... EOS; BOS [CUE1] ... \uff0c[CUE2] ... EOS",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In order to incorporate lexical and semantic information in our method, we represented each word in a discourse instance using a part-of-speech tag, a semantic label and a sentiment tag. Then, all discourse instances were converted to SSRs. The rules for converting were as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(1) Cue phrases and punctuations were ingored. But the information of nucleus(n) and satellite(s) was preserved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(2) Adverbs(RB) appearing in sentiment lexicon, verbs(V ), adjectives(JJ ) and nouns(NN) were represented by their part-of-speech (pos) tag with semantic label (semlabel) if available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(3) Named entities (NE; PER: person name; ORG: organization), pronouns (PRP), and function words were represented by their corresponding named entity tags and part-of-speech tags, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "(4) A sentiment tag (P: Positive; N: Negative) was added to all sentiment words.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "By applying the above rules, the SSRs for D 1 and D 2 would be:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "d 1 : [PER V|Ja01 RB|Ka01 JJ|Ee14|P IN NN|Dk03] s , [PRP V|Ja01 DT JJ|Ga16|N NN|Ae13 ] n d 2 : [PER V|Ja01 JJ|Ee14|P IN NN|Bp12] s , [PRP V|He15|N NN|Di10 NN|Dd08 ] n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "Referring to d 1 and d 2 , \"Boris\" could match \"John\" in SSRs because both were converted to \"PER\" and both appeared at the beginning of their discourse instances. \"Ja01\", \"Ee14\", etc. were semantic labels from the Chinese synonym list extended version (Che et al., 2010) . Similar resources exist in other languages, such as WordNet (Fellbaum, 1998) in English. The next problem was how to start from the current SSRs and generate new SSRs for recognizing discourse relations without cue phrases.",
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 260, |
| "text": "(Che et al., 2010)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 327, |
| "end": 343, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gathering and representing discourse instances", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Recall assumption (2), in order to incorporate lexical, structural and semantic information for the similarity calculation of two SSRs holding the same discourse relation, three types of matches were defined for ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mining common SSRs", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "{(u, v) | u \u2208 d i[k] , v \u2208 d j[k] , k = 1, 2}: (1) Full match: (i) u = v",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mining common SSRs", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "Intuitively, a simple way of estimating the similarity between two SSRs was to use the number of mismatches. Therefore, we utilized match(",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating common SSRs", |
| "sec_num": null |
| }, |
| { |
| "text": "d i , d j )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "where i \u0338 = j, which integrated the three types of matches defined above to calculate the number of mismatches and generate common SSRs. Consider Table 3 : in common SSRs, full matches were preserved, partial matches were replaced by part-of-speech tags and mismatches were replaced by '*'s. The common SSRs generated during the calculation of match(d i , d j ) consisted of two parts. The first part was generated by d i[1] and d j[1] and the second part by d i[2] and d j[2] . To guarantee relatively high-quality common SSRs, we empirically set the upper threshold on the proportion of mismatches to 0.5 (i.e., \u2264 1/2 of the number of words in the generated SSR). The number of mismatches generated in Table 3 satisfies this requirement. As a result, for each discourse relation r n , a corresponding common SSR set S n could be obtained by adopting match(d i , d j ) where i \u0338 = j for all discourse instances. An advantage of match(d i , d j ) was that the generated common SSRs preserved the sequential structure of the original discourse instances. Moreover, common SSRs allow us to build high-precision discourse classifiers (see Section 5).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 153, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 741, |
| "end": 748, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 972, |
| "end": 980, |
| "text": "(d 1 , d", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generating common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "d i[2] and d j[2] . We stipulated ... [Table 3: d 1 and d 2 aligned word by word, with mismatch counts (mis) and confidence weights (conf) for each position; conf (ssr [1] ) = \u22120.798 and conf (ssr [2] ) = \u22123.184]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generating common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "A problem of match(d i , d j ) was that it ignored important information by treating different mismatches equally. For example, the adverb \"very\" in \"very brilliant\" of D 1 was not important for discourse recognition. In other words, the number of mismatches in match(d i , d j ) could not precisely reflect the confidence of the generated common SSRs. Therefore, we needed to weigh different mismatches for the confidence calculation of common SSRs.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "Intuitively, if a partial match or a mismatch (denoted by u m ) occurred very frequently in the generation of common SSRs, the importance of u m tended to diminish. Inspired by the tf-idf model, given ssr i \u2208S n , we utilized the following equation to estimate the weight (denoted by w m ) of u m .",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
| "text": "w m = \u2212uf m \u2022 log (|S n |/ssrf m )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
| "text": "where uf m denoted the frequency of u m during the generation of ssr i , |S n | denoted the size of S n and ssrf m denoted the number of common SSRs in S n containing u m . All weights were normalized to [\u22121, 0).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "Nouns (except for named entities) and verbs were the most representative words in discourse recognition (Marcu and Echihabi, 2002) . In addition, adjectives and adverbs appearing in sentiment lexicons were important for polarity classification. Therefore, for these 4 kinds of words, we utilized \u22121.0 for a mismatch and \u22120.50 for a partial match.",
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 126, |
| "text": "(Marcu and Echihabi, 2002)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "Having obtained the weights of all partial matches and mismatches, the confidence of ssr i \u2208S n could be calculated as the sum of the weights of partial matches and mismatches in ssr i[1] and ssr i[2] . Recall Table 3 : conf (ssr [1] ) and conf (ssr [2] ) represented the confidence scores of match(",
| "cite_spans": [ |
| { |
| "start": 235, |
| "end": 238, |
| "text": "[1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 215, |
| "end": 222, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
| "text": "d i[1] , d j[1] ) and match(d i[2] , d j[2]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "), respectively. To control the quantity and quality of mined SSRs, a threshold minconf was introduced. ssr i was preserved if and only if conf (ssr i[1] ) \u2265 minconf and conf (ssr i[2] ) \u2265 minconf . The value of minconf was tuned on the development data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "Finally, we combined adjacent '*'s and preserved only SSRs containing at least one notional word and at least two words in each segment to maintain high precision (e.g., \"[* DT *]\" and \"[PER *]\" were dropped). Moreover, since many of the SSRs were duplicated, we ranked all the generated SSRs by their occurrences and dropped those appearing only once in order to preserve common SSRs. Lastly, SSRs appearing in more than one common SSR set were removed to maintain the uniqueness of each set. The common SSR set S n for each discourse relation r n could be used directly in SSR-based unsupervised classifiers or be employed as effective features in supervised methods.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Weighing and filtering common SSRs", |
| "sec_num": null |
| }, |
| { |
"text": "We extracted all compound sentences which may contain the defined discourse relations from the opinionated sentences (neutral ones were dropped) of the NTCIR-7 MOAT simplified Chinese training data. 1,225 discourse instances were extracted, and two annotators were trained to annotate discourse relations according to the discourse scheme defined in Section 3. Note that we annotated both explicit and implicit discourse relations. The overall inter-annotator agreement was 86.05% and the Kappa value was 0.8031. Table 4 showed the distribution of annotated discourse relations based on the inter-annotator agreement. The proportion of occurrences of each discourse relation varied greatly. For example, Continuation was the most common relation in the annotated corpus, but occurrences of the Condition relation were rare.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 503, |
| "end": 510, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation work and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The experiments of this paper were performed using the following data sets: NTC-7 contained manually annotated discourse instances (shown in Table 4 ). The experiments of discourse identification were performed on this data set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 148, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Annotation work and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "NTC-8 contained all opinionated sentences (neutral ones were dropped) extracted from NTCIR8 MOAT simplified Chinese test data. The experiments of polarity ambiguity elimination using the identified discourse relations were performed on this data set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation work and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "XINHUA contained simplified Chinese raw news text from Xinhua.com (2002) (2003) (2004) (2005) . A word segmentation tool, a part-of-speech tagging tool, a named entity recognizer and a word sense disam-biguation tool (Che et al., 2010) were adopted to all sentences. The common SSRs were mined from this data set.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 72, |
| "text": "(2002)", |
| "ref_id": null |
| }, |
| { |
| "start": 73, |
| "end": 79, |
| "text": "(2003)", |
| "ref_id": null |
| }, |
| { |
| "start": 80, |
| "end": 86, |
| "text": "(2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 87, |
| "end": 93, |
| "text": "(2005)", |
| "ref_id": null |
| }, |
| { |
| "start": 217, |
| "end": 235, |
| "text": "(Che et al., 2010)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotation work and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In order to systematically justify the effectiveness of proposed unsupervised method, following experiments were performed on NTC-7:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse relation identification", |
| "sec_num": null |
| }, |
| { |
| "text": "Baseline used only cue-phrase-based patterns. M&E proposed by Marcu and Echihabi (2002) . Given a discourse instance D i , the probabilities:", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 87, |
| "text": "Marcu and Echihabi (2002)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse relation identification", |
| "sec_num": null |
| }, |
| {
| "text": "P(r_k | (D_i[1], D_i[2])) for each relation r_k were estimated on all text from XINHUA. Then, the most likely discourse relation was determined by taking argmax_k {P(r_k | (D_i[1], D_i[2]))}.",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Discourse relation identification",
| "sec_num": null
| },
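A minimal sketch of such a word-pair model in the spirit of Marcu and Echihabi (2002) is shown below. The counting scheme and the add-one smoothing are illustrative assumptions, not the exact setup used in the experiments.

```python
import math
from collections import defaultdict
from itertools import product

class WordPairRelationModel:
    """Estimate P(r_k | (D_i[1], D_i[2])) from word pairs (w1, w2), with w1
    drawn from the first segment and w2 from the second segment."""

    def __init__(self, relations):
        self.relations = relations
        self.pair_counts = {r: defaultdict(int) for r in relations}
        self.totals = {r: 0 for r in relations}
        self.priors = {r: 0 for r in relations}

    def train(self, instances):
        # instances: iterable of (relation, segment1_tokens, segment2_tokens)
        for r, seg1, seg2 in instances:
            self.priors[r] += 1
            for pair in product(seg1, seg2):
                self.pair_counts[r][pair] += 1
                self.totals[r] += 1

    def classify(self, seg1, seg2):
        """Return argmax_k P(r_k | (D_i[1], D_i[2])) under a naive-Bayes
        decomposition over word pairs."""
        n = sum(self.priors.values())
        best, best_lp = None, float("-inf")
        for r in self.relations:
            lp = math.log(self.priors[r] / n)
            for pair in product(seg1, seg2):
                # add-one smoothing over the observed pair vocabulary (an assumption)
                count = self.pair_counts[r].get(pair, 0)
                lp += math.log((count + 1) /
                               (self.totals[r] + len(self.pair_counts[r]) + 1))
            if lp > best_lp:
                best, best_lp = r, lp
        return best
```

Because only word-pair co-occurrence is modeled, structural and semantic information inside each segment is ignored, which is consistent with the precision drop discussed later.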
| { |
| "text": "cSSR used both cue-phrase-based patterns together with common SSRs for recognizing discourse relations. Common SSRs were mined from discourse instances extracted from XINHUA using cuephrase-based patterns. Development data were randomly selected for tuning minconf .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse relation identification", |
| "sec_num": null |
| }, |
| { |
| "text": "SVM was trained utilizing cue phrases, probabilities from M&E, topic similarity, structure overlap, polarity of segments and mined common SSRs (Optional). The parameters of the SVM classifier were set by a grid search on the training set. We performed 4-fold cross validation on NTC-7 to get an average performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse relation identification", |
| "sec_num": null |
| }, |
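A grid search over hyperparameters with k-fold cross validation of this kind can be sketched generically as below. The fold-splitting scheme and the `fit`/`score` interfaces are assumptions for illustration; the paper does not specify its grid or splitting details.

```python
from itertools import product

def k_fold_indices(n, k):
    """Split range(n) into k folds (cf. the paper's 4-fold cross validation):
    returns (train_indices, test_indices) per fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(f)), f) for f in folds]

def grid_search(data, param_grid, fit, score, k=4):
    """Pick the parameter combination with the best mean CV score.
    `fit(train, params)` returns a model; `score(model, test)` returns a number."""
    best_params, best_score = None, float("-inf")
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        scores = []
        for train_idx, test_idx in k_fold_indices(len(data), k):
            model = fit([data[i] for i in train_idx], params)
            scores.append(score(model, [data[i] for i in test_idx]))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

In practice `fit` would train an SVM on the feature vectors listed above and `score` would compute F-score on the held-out fold.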
| { |
| "text": "The purposes of introducing SVM in our experiment were: (1) to compare the performance of cSSR to supervised method; (2) to examine the effectiveness of integrating common SSRs as features for supervised methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discourse relation identification", |
| "sec_num": null |
| }, |
| { |
| "text": "BPC was trained mainly utilizing punctuation, uni-gram, bi-gram features with confidence score output. Discourse classifiers such as Baseline, cSSR or SVM were adopted individually for the postprocessing of BPC. Given an ambiguous sentence which contained more than one segment, an intuitive three-step method was adopted to integrated a discourse classifier and discourse constraints on polarity for the post-processing of BPC:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polarity ambiguity elimination", |
| "sec_num": null |
| }, |
| { |
| "text": "(1) Recognize all discourse relations together with nucleus and satellite information using a discourse classifier. The nucleus and satellite information is acquired by cSSR if a segment pair could match a cSSR. Otherwise, we use the annotated nucleus and satellite information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polarity ambiguity elimination", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Apply discourse constraints on polarity to ascertain the polarity for each discourse instance. There may be conflicts between polarities acquired by BPC and discourse constraints on polarity (e.g., Two segments with the same polarity holding a Contrast relation). To handle this problem, we chose the segment with higher polarity confidence and adjusted the polarity of the other segment using discourse constraints on polarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polarity ambiguity elimination", |
| "sec_num": null |
| }, |
| { |
| "text": "(3) If there was more than one discourse instance in a single sentence, the overall polarity of the sentence was determined by voting of polarities from each discourse instance under the majority rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polarity ambiguity elimination", |
| "sec_num": null |
| }, |
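Steps (2) and (3) above can be put together in a short sketch (step (1), relation recognition, is assumed to happen upstream). The polarity labels, the encoding of the constraints as "Contrast flips, other relations copy", and per-segment voting are simplifying assumptions made for this example.

```python
from collections import Counter

# Assumed encoding of the discourse constraints on polarity: Contrast links
# segments with opposite polarities; the other relations keep them consistent.
OPPOSITE = {"Contrast"}

def flip(p):
    return {"POS": "NEG", "NEG": "POS"}[p]

def resolve_instance(relation, pol1, conf1, pol2, conf2):
    """Step (2): trust the segment with the higher BPC confidence and adjust
    the other segment so the pair satisfies the discourse constraint."""
    want_opposite = relation in OPPOSITE
    if conf1 >= conf2:
        pol2 = flip(pol1) if want_opposite else pol1
    else:
        pol1 = flip(pol2) if want_opposite else pol2
    return pol1, pol2

def sentence_polarity(instances):
    """Step (3): majority vote over the resolved segment polarities of all
    discourse instances; instances are (relation, pol1, conf1, pol2, conf2)."""
    votes = Counter()
    for rel, p1, c1, p2, c2 in instances:
        p1, p2 = resolve_instance(rel, p1, c1, p2, c2)
        votes[p1] += 1
        votes[p2] += 1
    return votes.most_common(1)[0][0]
```

For example, a sentence whose BPC output is (POS 0.9, POS 0.4) under Contrast would have its second segment flipped to NEG before voting.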
| { |
| "text": "Refer to Figure 2 , the performance of cSSR was significantly affected by minconf . Note that we performed the tuning process of minconf on different development data (1/4 instances randomly selected from NTC-7) and Figure 2 showed the average performance. cSSR became Baseline when minconf = 0. A significant drop of precision was observed when minconf was less than \u22122.5. The recall remained around 0.495 when minconf \u2264 \u22124.0. The best performance was observed when minconf =\u22123.5. As a result, \u22123.5 was utilized as the threshold value for cSSR in the following experiments. Table 5 presented the experimental results for discourse relation classification. it showed that:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 216, |
| "end": 224, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 575, |
| "end": 582, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.3" |
| }, |
| {
| "text": "(1) Cue-phrase-based patterns could find only a limited number of discourse relations (34.1% of average recall) with a very high precision (96.17% of average precision). This is a proof of assumption (1) given in Section 4. On the other hand, M&E, which considered only word pairs between the two segments of discourse instances, achieved higher recall with a large drop in precision. The drop may be caused by the neglect of structural and semantic information of discourse instances. However, M&E still outperformed Baseline in average F-score. Table 6: Performance of integrating discourse classifiers and constraints to polarity classification. Note that the experiments were performed on NTC-8, which contained only opinionated sentences.",
| "cite_spans": [],
| "ref_spans": [
| {
| "start": 547,
| "end": 554,
| "text": "Table 6",
| "ref_id": null
| }
| ],
| "eq_spans": [],
| "section": "Experimental Results",
| "sec_num": "5.3"
| },
| { |
| "text": "(2) cSSR enhanced Baseline by increasing the average recall by about 15% with only a small drop of precision. The performance of cSSR demonstrated that our method could effectively discover high quality common SSRs. The most remarkable improvement was observed on Continuation in which the recall increased by almost 20% with only a minor drop of precision. Actually, cSSR outperformed Baseline in all discourse relations except for Contrast. In Discourse Tree Bank (Carlson et al., 2001 ) only 26% of Contrast relations were indicated by cue phrases while in NTC-7 about 70% of Contrast were indicated by cue phrases. A possible reason was that we were dealing with Chinese news text which were usually well written. Another important observation was that the performance of cSSR was very close to the result of SVM.", |
| "cite_spans": [ |
| { |
| "start": 466, |
| "end": 487, |
| "text": "(Carlson et al., 2001", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "(3) SVM+SSRs achieved the best F -score on Continuation and average performance. The integration of SSRs to the feature set of SVM contributed to a remarkable increase in average F -score. The results of cSSR and SVM+SSRs demonstrated the effectiveness of common SSRs mined by the proposed unsupervised method. Table 6 presented the performance of integrating discourse classifiers to polarity classification. For Baseline and cSSR, the information of nucleus and satellite could be obtained directly from cue- phrase-based patterns and SSRs, respectively. For SVM+cSSR, the nucleus and satellite information was acquired by cSSR if a segment pair could match a cSSR. Otherwise, we used manually annotated nucleus and satellite information. It's clear that the performance of polarity classification was enhanced with the improvement of discourse relation recognition. M&E was not included in this experiment because the performance of polarity classification was decreased by the mis-classified discourse relations. SVM+SSRs achieved significant (p<0.01) improvement in polarity classification compared to BPC.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 311, |
| "end": 318, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To assess the contribution of weighing and filtering in mining SSRs using a minimum confidence threshold, i.e. minconf , we implemented cSSR' without weighing and filtering on the same data set. Consider Table 7 , cSSR achieved obvious improvement in P recision and F -score than cSSR'. Moreover, the total number of SSRs was greatly reduced in cSSR with only a minor drop of recall. This was because cSSR' was affected by thousands of low quality common SSRs which would be filtered in cSSR. The result in Table 7 filtering were essential in our proposed method. We further analyzed how the improvement was achieved in cSSR. In our experiment, the most common mismatches were auxiliary words, named entities, adjectives or adverbs without sentiments (e.g., \"green\", \"very\", etc.), prepositions, numbers and quantifiers. It's straightforward that these words were insignificant in discourse relation classification purpose. Moreover, these words did not belong to the 4 kinds of most representative words. In other words, the weights of most mismatches were calculated using the equation presented in Section 4.2 instead of utilizing a unified value, i.e. \u22121. Recall Table 3 , the weight of \"RB|Ka01\" (original: \"very\") was \u22120.298 and \"DT\" (original: 'a') was \u22120.184. Comparing to the weights of mismatches for most representative words (\u22121.0), the proposed method successfully down weighed the words which were not important for discourse identification. Therefore, weighing and filtering were able to preserve high quality SSRs while filter out low quality SSRs by setting the confidence threshold, i.e. minconf .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 204, |
| "end": 211, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 507, |
| "end": 514, |
| "text": "Table 7", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 1167, |
| "end": 1174, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effect of weighing and filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "We also analyzed the contribution of different discourse relations in eliminating polarity ambiguities. Refer to Figure 3 , the improvement of polarity classification mainly came from three discourse relations: Contrast, Continuation and Cause. It was straightforward that Contrast relation could eliminate polarity ambiguities because it held between two segments with opposite polarities. The contribution of Cause relation also result from two segments holding different polarities such as example (a) in Section 1. However, recall Table 4 , although Cause occurred more often than Contrast, only a part of discourse instances holding Cause relation contained two segments with the opposite polarities. Another important relation in eliminating ambiguity was Continuation. We investigated sentences with polarities corrected by Continuation relation. Most of them fell into two categories: (1) sentences with mistakenly classified sentiments by BPC; (2) sentences with implicit sentiments. For example: The first segment of example (b) was negative (\"banned\" expressed a negative sentiment) and a Continuation relation held between these two seg-ments. Consequently, the polarity of the second segment should be negative.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 113, |
| "end": 121, |
| "text": "Figure 3", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 535, |
| "end": 542, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contribution of different discourse relations", |
| "sec_num": null |
| }, |
| { |
| "text": "This paper focused on unsupervised discovery of intra-sentence discourse relations for sentence level polarity classification. We firstly presented a discourse scheme based on empirical observations. Then, an unsupervised method was proposed starting from a small set of cue-phrase-based patterns to mine high quality common SSRs for each discourse relation. The performance of discourse classification was further improved by employing SSRs as features in supervised methods. Experimental results showed that our methods not only effectively recognized discourse relations but also achieved significant improvement (p<0.01) in sentence level polarity classification. Although we were dealing with Chinese text, the proposed unsupervised method could be easily generalized to other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The future work will be focused on (1) integrating more semantic and syntactic information in proposed unsupervised method; (2) extending our method to inter-sentence level and then jointly modeling intrasentence level and inter-sentence level discourse constraints on polarity to reach a global optimal inference for polarity classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://research.nii.ac.jp/ntcir/ 3 Including simplified Chinese and traditional Chinese corpus from NTCIR-6 MOAT and NTCIR-7 MOAT", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Distilling opinion in discourse: A preliminary study", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Asher", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Benamara", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mathieu", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Coling 2008: Companion volume: Posters and Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "5--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Asher, F. Benamara, and Y.Y. Mathieu. 2008. Distill- ing opinion in discourse: A preliminary study. Coling 2008: Companion volume: Posters and Demonstra- tions, pages 5--8.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Building and refining rhetorical-semantic relation models", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Blair-Goldensohn", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "R" |
| ], |
| "last": "Mckeown", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of NAACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "428--435", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Blair-Goldensohn, K.R. McKeown, and O.C. Ram- bow. 2007. Building and refining rhetorical-semantic relation models. In Proceedings of NAACL HLT, pages 428--435.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "E" |
| ], |
| "last": "Okurowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the Second SIGdial Workshop on Discourse", |
| "volume": "16", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Carlson, D. Marcu, and M.E. Okurowski. 2001. Build- ing a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Sec- ond SIGdial Workshop on Discourse and Dialogue- Volume 16, pages 1--10. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Ltp: A chinese language technology platform", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Che", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "13--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Che, Z. Li, and T. Liu. 2010. Ltp: A chinese language technology platform. In Proceedings of the 23rd In- ternational Conference on Computational Linguistics: Demonstrations, pages 13--16. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A novel discourse parser based on support vector machine classification", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "A" |
| ], |
| "last": "Duverle", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Prendinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "2", |
| "issue": "", |
| "pages": "665--673", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.A. Duverle and H. Prendinger. 2009. A novel dis- course parser based on support vector machine classi- fication. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Process- ing of the AFNLP: Volume 2, pages 665--673. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "WordNet: An electronic lexical database", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Fellbaum. 1998. WordNet: An electronic lexical database. The MIT press.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge dis- covery and data mining, pages 168--177. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Rhetorical structure theory: Toward a functional theory of text organization", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Thompson", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Text-Interdisciplinary Journal for the Study of Discourse", |
| "volume": "8", |
| "issue": "3", |
| "pages": "243--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W.C. Mann and S.A. Thompson. 1988. Rhetorical struc- ture theory: Toward a functional theory of text organi- zation. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243--281.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An unsupervised approach to recognizing discourse relations", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Echihabi", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "368--375", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Marcu and A. Echihabi. 2002. An unsupervised ap- proach to recognizing discourse relations. In Proceed- ings of the 40th Annual Meeting on Association for Computational Linguistics, pages 368--375. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The penn discourse treebank", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Miltsakaki", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Prasad", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Miltsakaki, R. Prasad, A. Joshi, and B. Webber. 2004. The penn discourse treebank. In Proceedings of the 4th International Conference on Language Resources and Evaluation. Citeseer.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Thumbs up?: sentiment classification using machine learning techniques", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing", |
| "volume": "10", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing- Volume 10, pages 79--86. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Contextual valence shifters. Computing attitude and affect in text: Theory and applications", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Polanyi", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Zaenen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Polanyi and A. Zaenen. 2006. Contextual valence shifters. Computing attitude and affect in text: The- ory and applications, pages 1--10.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning subjective nouns using extraction pattern bootstrapping", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", |
| "volume": "4", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Riloff, J. Wiebe, and T. Wilson. 2003. Learning sub- jective nouns using extraction pattern bootstrapping. In Proceedings of the seventh conference on Natu- ral language learning at HLT-NAACL 2003-Volume 4, pages 25--32. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Sentiment analysis based on probabilistic models using inter-sentence information", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Sadamitsu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sekine", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Sadamitsu, S. Sekine, and M. Yamamoto. 2008. Sen- timent analysis based on probabilistic models using inter-sentence information.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multiple aspect ranking using the good grief algorithm", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Snyder", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of NAACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "300--307", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Snyder and R. Barzilay. 2007. Multiple aspect rank- ing using the good grief algorithm. In Proceedings of NAACL HLT, pages 300--307.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Discourse level opinion interpretation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Somasundaran", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ruppenhofer", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "801--808", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Somasundaran, J. Wiebe, and J. Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceed- ings of the 22nd International Conference on Compu- tational Linguistics, pages 801--808. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Somasundaran", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Namata", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Getoor", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "170--179", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Somasundaran, G. Namata, J. Wiebe, and L. Getoor. 2009. Supervised and unsupervised methods in em- ploying discourse relations for improving opinion po- larity classification. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing, pages 170--179. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Sentence level discourse parsing using syntactic and lexical information", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Soricut", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "149--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Soricut and D. Marcu. 2003. Sentence level dis- course parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology, pages 149--156. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Recognizing Contextual Polarity: an exploration of features for phrase-level sentiment analysis", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "3", |
| "pages": "399--433", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Wilson, J. Wiebe, and P. Hoffmann. 2009. Recognizing Contextual Polarity: an exploration of features for phrase-level sentiment analysis. Computational Linguistics, 35(3):399--433.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The unified collocation framework for opinion mining", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "F" |
| ], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "F" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "International Conference on Machine Learning and Cybernetics", |
| "volume": "2", |
| "issue": "", |
| "pages": "844--850", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y.Q. Xia, R.F. Xu, K.F. Wong, and F. Zheng. 2007. The unified collocation framework for opinion mining. In International Conference on Machine Learning and Cybernetics, volume 2, pages 844--850. IEEE.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Incorporating feature-based and similarity-based opinion mining -- CTL in NTCIR-8 MOAT", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Kit", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 8th NTCIR Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "276--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Xu and C. Kit. 2010. Incorporating feature-based and similarity-based opinion mining -- CTL in NTCIR-8 MOAT. In Proceedings of the 8th NTCIR Workshop, pages 276--281.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "(a) [Although Fujimori was criticized by the international community]\uff0c[he was loved by the domestic population]\uff0c [because people hated the corrupted ruling class]. (\u5118\u7ba1 \u570b\u969b\u9593\u5c0d\u85e4\u68ee\u53e3\u8a85\u7b46\u4f10\uff0c\u4ed6\u5728\u570b\u5167\u4e00\u76f4\u6df1\u53d7\u767e\u59d3\u611b \u6234\uff0c\u539f\u56e0\u662f\u767e\u59d3\u5c0d\u8150\u5316\u7684\u7d71\u6cbb\u968e\u7d1a\u65e9\u5c31\u6df1\u60e1\u75db\u7d55\u3002)", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Discourse relations for Example (a). (n and s denote nucleus and satellite segment, respectively)", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "or (ii) u.pos = v.pos and u.semlabel = v.semlabel or (iii) u.pos = v.pos and both u and v had a sentiment tag or (iv) u.pos and v.pos \u2208 {PRP, PER, ORG}; (2) Partial match: u.pos = v.pos but not Full match; (3) Mismatch: u.pos \u2260 v.pos.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Influence of different values of minconf on the performance of cSSR", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Improvement from individual discourse relations. N denotes the number of ambiguities eliminated.", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "text": "(b) [France and Germany have banned human cloning at present]\uff0c[on 20th, U.S. President George W. Bush called for regulations of the same content to Congress] (\u76ee\u524d\uff0c \u6cd5\u56fd\u548c\u5fb7\u56fd\u90fd\u7981\u6b62\u514b\u9686\u4eba\u7684\u80da\u80ce\uff0c\u7f8e\u56fd\u603b\u7edf\u5e03\u4ec0 20 \u65e5\u5411\u56fd\u4f1a\u63d0\u51fa\uff0c\u8981\u6c42\u5236\u5b9a\u540c\u6837\u5185\u5bb9\u7684\u6cd5\u89c4\u3002)", |
| "uris": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "text": "Examples of cue phrases", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "text": "Cue-phrase-based patterns. BOS and EOS denote the beginning and end of two segments. is a horrible teacher] n D 2 : [John is good at basketball] s , [but he lacks team spirit] n", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "text": "Calculation of match(d 1 , d 2 ). ssr denotes the common SSR between d 1 and d 2 ; conf (ssr [1] ) and conf (ssr [2] ) denote the confidence of ssr. Note that d i and d j could generate a common SSR if and only if the orders of the nucleus segment and the satellite segment were the same.", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "text": "Distribution of discourse relations on NTC-7.", |
| "content": "<table><tr><td>Others represents discourse relations not included in our</td></tr><tr><td>discourse scheme.</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF7": { |
| "type_str": "table", |
| "text": "", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "text": "Comparison of cSSR' and cSSR. \"NOS\" denotes the number of mined common SSRs.", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |