| { |
| "paper_id": "K15-2005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:08:29.829919Z" |
| }, |
| "title": "Shallow Discourse Parsing Using Constituent Parsing Tree *", |
| "authors": [ |
| { |
| "first": "Changge", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Shanghai Jiao Tong University", |
| "location": { |
| "postCode": "200240", |
| "settlement": "Shanghai", |
| "country": "China" |
| } |
| }, |
| "email": "changge.chen.cc@gmail.com" |
| }, |
| { |
| "first": "Peilu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Shanghai Jiao Tong University", |
| "location": { |
| "postCode": "200240", |
| "settlement": "Shanghai", |
| "country": "China" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Shanghai Jiao Tong University", |
| "location": { |
| "postCode": "200240", |
| "settlement": "Shanghai", |
| "country": "China" |
| } |
| }, |
| "email": "zhaohai@cs.sjtu.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
    "abstract": "This paper describes our system in the closed track of the CoNLL-2015 shared task. We formulate the discourse parsing task as a series of classification subtasks. The official evaluation shows that the proposed framework gives competitive results, and we also discuss potential improvements.",
| "pdf_parse": { |
| "paper_id": "K15-2005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
                "text": "This paper describes our system in the closed track of the CoNLL-2015 shared task. We formulate the discourse parsing task as a series of classification subtasks. The official evaluation shows that the proposed framework gives competitive results, and we also discuss potential improvements.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
                "text": "We design our shallow discourse parser as a sequential pipeline that mimics the annotation procedure of the Penn Discourse Treebank (PDTB hereafter) annotators (Lin et al., 2014) . Figure 1 gives the pipeline of the system. The system can be roughly split into two parts: explicit and non-explicit. The explicit part consists of three steps, sequentially the Explicit Classifier, the Explicit Argument Labeler, and the Explicit Sense Classifier, while the non-explicit part consists of the Filter, the Non-explicit Classifier, and the Non-explicit Sense Classifier. Non-explicit relations include 'Implicit', 'AltLex', and 'EntRel', but not 'NoRel'.",
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 210, |
| "text": "(Lin et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 213, |
| "end": 221, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Overview", |
| "sec_num": "1" |
| }, |
| { |
                "text": "We adopt an adapted maximum entropy model as the classification algorithm for every step. Our system only exploits resources provided by the organizer. * This work of C. Chen, P. Wang, and H. Zhao was supported in part by the National Natural Science Foundation of China under Grants 60903119, 61170114, and 61272248; the National Basic Research Program of China under Grant 2013CB329401; the Science and Technology Commission of Shanghai Municipality under Grant 13511500200; the European Union Seventh Framework Program under Grant 247619; the Cai Yuanpei Program (CSC funds 201304490199 and 201304490171); and the art and science interdiscipline funds of Shanghai Jiao Tong University (a study on mobilization mechanism and alerting threshold setting for online community, and media image and psychology evaluation: a computational intelligence approach) under Grant 14X190040031 (14JCRZ04).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Overview", |
| "sec_num": "1" |
| }, |
| { |
                "text": "\u2020 Corresponding author We first give a brief introduction to each step of the system. After the Explicit Classifier detects explicit connectives, the Explicit Argument Labeler prunes and classifies the 'Arg1' and 'Arg2' of each detected connective. Then, the Explicit Sense Classifier integrates the results of the previous two steps to distinguish different senses. The second part of the system starts with filtering out obvious false cases. Then the Non-explicit Classifier classifies the non-explicit relations into three classes, i.e., 'Implicit', 'AltLex', and 'EntRel'. Finally, the Non-explicit Sense Classifier determines the sense of the non-explicit relation. In the last two steps, we take 'EntRel' as a sense of the implicit relation, which we will explain later. He added that \"having just one firm do this isn't going to mean a hill of beans. But if this prompts others to consider the same thing, then it may become much more important.\"",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Overview", |
| "sec_num": "1" |
| }, |
| { |
                "text": "'Arg1' is shown in italics, and 'Arg2' is shown in bold. The discourse connective is underlined, and the sense of this explicit relation is 'Comparison.Concession'.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Overview", |
| "sec_num": "1" |
| }, |
| { |
                "text": "There are 100 explicit connectives in the PDTB annotation (Prasad et al., 2008) . However, some connectives, e.g., 'and', do not always express a discourse relation. We use a level-order traversal to scan every node in the constituent parse tree and select the connective candidates. This method gives us a high recall on the train set, as shown in Table 1 .",
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 75, |
| "text": "(Prasad et al., 2008)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 335, |
| "end": 342, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Explicit Classifier", |
| "sec_num": "2.1.1" |
| }, |
| { |
                "text": "The following features are considered: a) Self Category The category of the highest node that dominates only the connective. b) Parent Category The category of the parent of the self category. c) Left Sibling Category The syntactic category of the immediate left sibling of the self category; it is 'NONE' if the self category is the leftmost node. d) Right Sibling Category The category of the immediate right sibling of the self category; it is also 'NONE' if the self category is the rightmost node. e) VP Existence A binary feature indicating whether the right sibling contains a VP. f) Connective In addition to the features proposed by Pitler and Nenkova, we introduce the connective feature: the potential connective itself is a strong sign of its function. A few discourse connectives are even deterministic; for example, 'in addition' always signals 'Expansion.Conjunction'.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Classifier", |
| "sec_num": "2.1.1" |
| }, |
| { |
                "text": "The maximum entropy classifier has shown good performance in various previous works (Wang et al., 2014; Jia et al., 2013; Zhao and Kit, 2008) . Based on these features, we train a maximum entropy classifier. To evaluate the performance of the classifier alone, we test it on the connective candidates selected by the level-order traversal. This gives 93.87% accuracy and a 90.1% F1 score on the dev set.",
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 99, |
| "text": "(Wang et al., 2014;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 100, |
| "end": 117, |
| "text": "Jia et al., 2013;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 118, |
| "end": 137, |
| "text": "Zhao and Kit, 2008)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Classifier", |
| "sec_num": "2.1.1" |
| }, |
| { |
                "text": "With all explicit connectives detected, we exploit a constituent-based approach to perform argument labeling (Kong et al., 2014) . Along the path from the connective node to the root of the constituent parse tree, all the siblings of every node on the path are selected as candidates for 'Arg1' and 'Arg2'. We compare these candidates with the PDTB annotation to label them as 'Arg1', 'Arg2', or 'NULL'. However, this pruning strategy only covers the intra-sentence case. Kong et al. unified the intra- and inter-sentence cases by treating the immediately preceding sentence as a special constituent; based on our empirical results, the inter-sentence case only contributes additional 'Arg1' candidates. Kong et al. also reported very high recalls (80-90%) on 'Arg1' and 'Arg2' extraction, though our re-implementation only achieves recalls of 37.5% and 51.3% for 'Arg1' and 'Arg2', respectively, and about 87.75% of all the pruned-out constituents are labeled 'NULL'. Similar to treating the immediately preceding sentence as an 'Arg1' candidate, we take the remaining part of the sentence adjacent to the connective as an 'Arg2' candidate. This boosts 'Arg2' recall to 93.1%.",
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 128, |
| "text": "(Kong et al., 2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Argument Labeler", |
| "sec_num": "2.1.2" |
| }, |
| { |
                "text": "We extract features from the constituent parse tree (Zhao and Kit, 2008; Zhao et al., 2009) . The extracted features can be divided into two parts. The first part captures information about the connective itself: a) Con-str The case-sensitive string of the given connective. b) Con-Lstr The lowercase string of the connective. c) Con-iLSib The number of left siblings of the connective. d) Con-iRSib The number of right siblings of the connective.",
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 69, |
| "text": "(Zhao and Kit, 2008;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 70, |
| "end": 88, |
| "text": "Zhao et al., 2009)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Argument Labeler", |
| "sec_num": "2.1.2" |
| }, |
| { |
                "text": "The second part consists of features from the syntactic constituent: e) NT-Ctx The context of the constituent; we use the POS combination of the constituent, its parent, its left sibling, and its right sibling to represent the context. f) Con-NT-Path The path from the parent of the connective to the node of the constituent. g) Con-NT-Position The position of the constituent relative to the connective: left, right, or previous.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Argument Labeler", |
| "sec_num": "2.1.2" |
| }, |
| { |
                "text": "After the parser categorizes all the candidate constituents into 'Arg1', 'Arg2', and 'NULL', Kong et al. adopted integer linear programming to impose constraints, e.g., that at least one 'Arg1' and one 'Arg2' should be extracted, and that the extracted arguments should not overlap with the connective. Our experiments show that some constraints are useless, for example, the constraint that the pruned-out candidates should not overlap with the connective: since the pruning algorithm only considers the siblings of nodes along the path, a pruned-out candidate can never overlap with the connective node.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Argument Labeler", |
| "sec_num": "2.1.2" |
| }, |
| { |
                "text": "Without considering the errors propagated from the pruning process, the argument labeler gives the results shown in Table 2 .",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 103, |
| "end": 110, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Explicit Argument Labeler", |
| "sec_num": "2.1.2" |
| }, |
| { |
                "text": "In this part we take a naive approach that assigns the most frequent sense of the detected explicit connective. A better approach would build a sense classifier with syntactic features of the connective, such as its POS, and the positions and lengths of the arguments.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Explicit Sense Classifier", |
| "sec_num": "2.1.3" |
| }, |
| { |
                "text": "This part is based on the results of the explicit part. We assume that Explicit and Non-explicit relations cannot exist in the same sentence simultaneously, so we exclude sentences that have been labeled as Explicit in the first part. Then, we take all the remaining adjacent sentence pairs in the article as candidate implicit relations. There are 13,155 implicit relations given in the train set.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-Explicit Part", |
| "sec_num": "2.2" |
| }, |
| { |
                "text": "Apart from filtering out sentences with explicit connectives, we also discard sentence pairs that span two paragraphs. After these two filtering steps we get 8,728 non-explicit relations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filter", |
| "sec_num": "2.2.1" |
| }, |
| { |
                "text": "At first glance, we should build a classifier that distinguishes the relations 'Implicit', 'AltLex', and 'EntRel'. We give the distribution of each relation in the train set in Table 3 . We can see that 'AltLex' only covers about 2.94%, which is negligible compared with 'Implicit' (73.85%) and 'EntRel' (23.2%). So we decide to focus only on the latter two relations, and the classifier only works on them. Instead of building a separate classifier, we label all the non-explicit relations as 'Implicit' here and view 'EntRel' as a sense of the implicit relation.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 180, |
| "end": 187, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Non-explicit Classifier", |
| "sec_num": "2.2.2" |
| }, |
| { |
                "text": "The distribution of all senses in the train set is given in Table 4 . We can see that the senses below the double line each account for less than 1%. Based on this observation, we decide to consider only the significant senses.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 65, |
| "text": "Table", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Non-explicit Sense Classifier", |
| "sec_num": "2.2.3" |
| }, |
| { |
                "text": "What's more, we can see that the most frequent sense is 'EntRel'. This leads to another strategy: at first, we label all the candidate non-explicit relations as 'Implicit' and view 'EntRel' as a sense. Then, when the Non-explicit Sense Classifier labels the sense as 'EntRel', it re-labels the type of the corresponding relation as 'EntRel'.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-explicit Sense Classifier", |
| "sec_num": "2.2.3" |
| }, |
| { |
                "text": "Previous studies attempt to predict the missing connective of implicit relations (Zhou et al., 2010) . It has been shown that the connective is very predictive of the sense of the relation (Kong et al., 2014) . This gives us the intuition that features for predicting the missing connective are also useful for predicting the implicit sense. Thus we use word-pair features to train our Non-explicit Sense Classifier: a) Arg1First The first word of 'Arg1'. b) Arg1Last The last word of 'Arg1'. c) Arg2First The first word of 'Arg2'. d) Arg2Last The last word of 'Arg2'. e) FirstS Arg1First + Arg2First. f) LastS Arg1Last + Arg2Last. g) Arg1First3 The first three words of 'Arg1'. h) Arg1Last3 The last three words of 'Arg1'. i) Arg2First3 The first three words of 'Arg2'.",
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 100, |
| "text": "(Zhou et al., 2010;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 186, |
| "end": 205, |
| "text": "(Kong et al., 2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-explicit Sense Classifier", |
| "sec_num": "2.2.3" |
| }, |
| { |
                "text": "A comprehensive evaluation of our parser is given in Table 5 . We can see that the first step of our parser, i.e., the Explicit Classifier, does a moderate job. However, our extraction of 'Arg1' and 'Arg2' cannot be regarded as a success, and since our parser runs as a sequential pipeline, all subsequent steps are negatively affected.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 71, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3" |
| }, |
| { |
                "text": "In this paper, a sequential system is proposed for shallow discourse parsing. We demonstrate that the whole task can be handled by a pipeline consisting of several classification subtasks.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "4" |
| }, |
| { |
                "text": "In the future, we will tune our Argument Labeler to obtain better results in the explicit part.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "4" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Grammatical error correction as multiclass classification with single model", |
| "authors": [ |
| { |
| "first": "Zhongye", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Peilu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "74--81", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhongye Jia, Peilu Wang, and Hai Zhao. 2013. Gram- matical error correction as multiclass classification with single model. pages 74-81, Sofia, Bulgaria, August.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A constituent-based approach to argument labeling with joint inference in discourse parsing", |
| "authors": [ |
| { |
| "first": "Fang", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Guodong", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "68--77", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fang Kong, Hwee Tou Ng, and Guodong Zhou. 2014. A constituent-based approach to argument labeling with joint inference in discourse parsing. In Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing, pages 68-77, Doha, Qatar, October.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A PDTB-styled end-to-end discourse parser", |
| "authors": [ |
| { |
| "first": "Ziheng", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Hwee Tou Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Natural Language Engineering", |
| "volume": "20", |
| "issue": "", |
| "pages": "151--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20:151-184, April.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Using syntax to disambiguate explicit discourse connectives in text", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| }, |
| { |
| "first": "Ani", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the Joint Conference of the 47th", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "13--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 13-16, Suntec, Singapore, Au- gust.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic sense prediction for implicit discourse relations in text", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| }, |
| { |
| "first": "Annie", |
| "middle": [], |
| "last": "Louis", |
| "suffix": "" |
| }, |
| { |
| "first": "Ani", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "683--691", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse re- lations in text. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 683-691, Suntec, Singapore, August.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The Penn discourse treebank 2.0", |
| "authors": [ |
| { |
| "first": "Rashmi", |
| "middle": [], |
| "last": "Prasad", |
| "suffix": "" |
| }, |
| { |
| "first": "Dinesh", |
| "middle": [], |
| "last": "Nikhil", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Eleni", |
| "middle": [], |
| "last": "Miltsakaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Livio", |
| "middle": [], |
| "last": "Robaldo", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Aravind", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "L" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Webber", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "The International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rashmi Prasad, Dinesh Nikhil, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bon- nie L Webber. 2008. The Penn discourse tree- bank 2.0. In The International Conference on Lan- guage Resources and Evaluation, Marrakech, Mo- rocco, May.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Grammatical error detection and correction using a single maximum entropy model", |
| "authors": [ |
| { |
| "first": "Peilu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhongye", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "74--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peilu Wang, Zhongye Jia, and Hai Zhao. 2014. Gram- matical error detection and correction using a single maximum entropy model. pages 74-82, Baltimore, Maryland, USA, July.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Parsing syntactic and semantic dependencies with two single-stage maximum entropy models", |
| "authors": [ |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Chunyu", |
| "middle": [], |
| "last": "Kit", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "203--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hai Zhao and Chunyu Kit. 2008. Parsing syntac- tic and semantic dependencies with two single-stage maximum entropy models. In Proceedings of the Twelfth Conference on Computational Natural Lan- guage Learning, pages 203-207, Manchester, Au- gust.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Multilingual dependency learning: Exploiting rich features for tagging syntactic and semantic dependencies", |
| "authors": [ |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenliang", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Jun'ichi Kazama", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "61--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hai Zhao, Wenliang Chen, Jun'ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Multi- lingual dependency learning: Exploiting rich fea- tures for tagging syntactic and semantic dependen- cies. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learn- ing: Shared Task, pages 61-66, Boulder, Colorado, June.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Predicting discourse connectives for implicit discourse relation recognition", |
| "authors": [ |
| { |
| "first": "Zhi-Min", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheng-Yu", |
| "middle": [], |
| "last": "Niu", |
| "suffix": "" |
| }, |
| { |
| "first": "Man", |
| "middle": [], |
| "last": "Lan", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Chew Lim", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1507--1514", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recogni- tion. In Proceedings of the 23rd International Con- ference on Computational Linguistics, pages 1507- 1514, Beijing, China, August.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Pipeline of the system Explicit connectives in train set 14722 Level-order Scan 13911" |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
            "text": "4. The current shared task only asks * Abbreviations: Expansion(Expn.), Conjunction(Conj.), Restatement(Rest.), Contingency(Cont.), Instantiation(Inst.), Temporal(Temp.), Asynchronous(Asyn.), Precedence(Prec.), Comparison(Comp.), Concession(Conc.), Synchrony(Sync.), Succession(Suc.), Alternative(Alt.), Condition(Cond.), Exception(Exc.)"
| }, |
| "TABREF0": { |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Distribution of Non-explicit Senses in train set." |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "html": null, |
| "text": "Detailed Results us to detect 15 senses, which are marked by star." |
| } |
| } |
| } |
| } |