{
"paper_id": "K16-2008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:06.036141Z"
},
"title": "A Constituent Syntactic Parse Tree Based Discourse Parser",
"authors": [
{
"first": "Zhongyi",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "zhaohai@cs.sjtu.edu.cn"
},
{
"first": "Chenxi",
"middle": [],
"last": "Pang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Lili",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "hwang8@gc.omron.com"
},
{
"first": "Huan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes our system for the CoNLL-2016 shared task. The system takes a piece of newswire text as input and returns the discourse relations it contains, conducting each subtask in a pipeline. Evaluated on the CoNLL-2016 Shared Task closed track, it obtains an overall F1 score of 0.1515, while its connective detection component achieves an F1 of 0.9838 on the blind test set.",
"pdf_parse": {
"paper_id": "K16-2008",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes our system for the CoNLL-2016 shared task. The system takes a piece of newswire text as input and returns the discourse relations it contains, conducting each subtask in a pipeline. Evaluated on the CoNLL-2016 Shared Task closed track, it obtains an overall F1 score of 0.1515, while its connective detection component achieves an F1 of 0.9838 on the blind test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An end-to-end discourse parser is a system that takes natural language text as input and outputs the discourse relations labeled in that text. Discourse parsing has been widely used in natural language processing applications such as text classification and question answering. In a discourse relation, two argument spans are marked as the targets of the relation, while conjunctions (connectives) play an important role in determining the relationship between the two spans. According to whether a connective explicitly appears in the text, discourse relations can be divided into two categories: explicit and non-explicit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Penn Discourse Treebank (PDTB) has become the most important corpus in the field of discourse parsing. Previous work (Lin et al., 2014) integrated the entire training process into a complete discourse parser. There were five major components in the system, including a Connective classifier, Argument labeler, Explicit classifier, Non-Explicit classifier, and Attribution span labeler. Trained on part of the PDTB, the system achieved good prediction performance.",
"cite_spans": [
{
"start": 117,
"end": 135,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We design our discourse parser as a sequential pipeline, shown in Figure 1 . The whole system can be divided into two main parts: explicit and non-explicit.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "(1) Connective Classifier Detects the discourse connectives. Note that not all commonly used conjunctions function as discourse connectives, so we first identify the ones that do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "(2) Explicit Argument Labeler Locates the relative positions and extracts the spans of Arg1 and Arg2. We use an efficient method that extracts Arg1 and Arg2 jointly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "(3) Explicit Sense Classifier Determines the discourse function of the detected connectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "Between the explicit part and the non-explicit part, there is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "(4) Filter Gets rid of obviously incorrect candidates, such as those that have already been marked as part of an explicit relation; the remainder serves as the input to the non-explicit part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "The non-explicit part contains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "(5) Non-explicit Argument Labeler Marks the location and span of Arg1 and Arg2 in cases that lack a connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "(6) Non-explicit Sense Classifier Determines the discourse relations according to the semantic context of Arg1 and Arg2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Explicit part contains:",
"sec_num": null
},
{
"text": "Our system consists of six parts, and the general workflow refers to the shallow discourse parser based on the constituent parse tree (Chen et al., 2015) . Feature extraction for training follows previous works (Kong et al., 2014; Lin et al., 2014; .",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 211,
"end": 230,
"text": "(Kong et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 231,
"end": 248,
"text": "Lin et al., 2014;",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "We parse each sentence into a constituent parse tree. Relevant information is extracted from these constituent parse trees to train models and predict discourse relations. The PDTB defines 100 types of discourse connective, but not every occurrence of these 100 connective forms in the text actually signals a discourse relation. Thus, we first find all candidate connectives in the text by scanning each constituent parse tree, then use the connective classifier to determine whether each candidate functions as a discourse connective. The features in the connective classifier are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(1) ConnPos The category of the tree node which covers the whole connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(2) PrevConn The previous word of the connective and the connective itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(3) PrevPos The category of the previous word of the connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(4) PrevPosConnPos The category of the previous word and the category of the connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(5) ConnNext The connective itself and the next word of the connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(6) NextPos The category of the next word of the connective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(7) ConnPosNextPos The category of the connective itself and the category of the next word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "After extracting the mentioned feature for each connective, we annotate it as 1 or 0 according to whether this word in PDTB functions as discourse connective. (Jia et al., 2013; Zhao and Kit, 2008) showed that maximum entropy classifiers perform well in related tasks, so we apply one to our classification problem * . According to the official evaluation, the F1 score of this part of our system is 0.9905 on the dev set and 0.9838 on the blind test set, compared to 0.9514 and 0.9186, the best results of CoNLL-2015. The detailed results are shown in Table 1 . From the comparison, we can learn that: (1) from the constituent parse tree we build, we can extract connective features precisely.",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(Jia et al., 2013;",
"ref_id": "BIBREF1"
},
{
"start": 178,
"end": 197,
"text": "Zhao and Kit, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 541,
"end": 548,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(2) We build our classifier in a straightforward way. Compared to previous works, our features and model are much more intuitive, yet achieve an even better result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "(3) Among the different ways to process text, our work shows that using the constituent parse tree is a suitable method for this task and similar ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "3"
},
{
"text": "In this part, we use interval mapping based on the constituent parse tree and the extraction method proposed by (Kong et al., 2014) . During training, we start from the node of the connective in the constituent parse tree and end at the root node. Along this path, the left and right siblings of each node become candidate members of the arguments. Given that some explicit discourse relations use the previous sentence (PS) as Arg1, we adopt the efficient method of (Kong et al., 2014) , which treats the sentence preceding the one containing the discourse connective as a candidate for Arg1. We then compare these candidates with the PDTB and label them as Arg1, Arg2, or null according to their uses in the PDTB, where null means the candidate functions as neither Arg1 nor Arg2. By this means, we obtain a satisfying argument labeling performance.",
"cite_spans": [
{
"start": 108,
"end": 127,
"text": "(Kong et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 465,
"end": 484,
"text": "(Kong et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "The features are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "(1) ConStr Prototype of connective in the text. (6) CandiCtx Candidate's category, category of parent node, category of left sibling, and category of right sibling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "(7) ConCandiPath The category of each node from the Candi to the root node along the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "(8) ConCandiPosition The relative position between Candi and connective (left or right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "(9) ConCandiPathLSib Whether the number of left siblings of the Candi is greater than one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument labeler",
"sec_num": "3.1.2"
},
{
"text": "In this part, we combine the features of Lin's experiment with those of Pitler's, specifically: (1) C prototype, (2) C POS, (3) prev+C, (4) category of parent, (5) category of left sibling, and (6) category of right sibling. The detailed results are shown in Table 2 . The results are also better than the best ones of CoNLL-2015, which were 0.3861 on the dev set and 0.2394 on the blind test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 266,
"end": 273,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Explicit Sense Classifier",
"sec_num": "3.1.3"
},
{
"text": "After identifying all explicit discourse relation connectives, and before running the non-explicit parser, we need to filter the training set. There are two cases for this filtering. (1) If a sentence is labeled as Arg1 of some explicit discourse relation in a previous step, then the related two sentences will not be considered by the following non-explicit parser. (2) If two adjacent sentences in the original text are respectively the last sentence of one paragraph and the first sentence of the next, then these two sentences will not be considered either.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filter",
"sec_num": "3.2"
},
{
"text": "After the explicit parser and the filtering above, we take the remaining part as input to the non-explicit parser to find all the non-explicit discourse relations. In the PDTB, there are three kinds of non-explicit discourse relations: Implicit, AltLex, and EntRel. We notice that AltLex accounts for only 2.94% of them. Besides, according to the official evaluation criteria, we only need to detect 15 senses for the implicit part. Following (Chen et al., 2015) , we integrate EntRel with Implicit as a special sense for training and predicting.",
"cite_spans": [
{
"start": 431,
"end": 450,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Explicit Parser",
"sec_num": "3.3"
},
{
"text": "In this part, we simply take the remaining adjacent sentence pairs after filtering as the argument spans of the non-explicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-explicit Argument Labeler",
"sec_num": "3.3.1"
},
{
"text": "We perform sense classification as mentioned above, treating EntRel as a special sense of Implicit and ignoring the senses that occur rarely in the PDTB. According to previous works, the missing connective plays an important role in sense determination; generally, such a connective would appear at the beginning of the second sentence. Based on this assumption, we use the following features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-explicit Sense Classifier",
"sec_num": "3.3.2"
},
{
"text": "(1) Arg1Last The last word of Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-explicit Sense Classifier",
"sec_num": "3.3.2"
},
{
"text": "(2) Arg1First The first word of Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-explicit Sense Classifier",
"sec_num": "3.3.2"
},
{
"text": "(3) Arg2Last The last word of Arg2. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-explicit Sense Classifier",
"sec_num": "3.3.2"
},
{
"text": "Our system is trained on the training set and evaluated on the test set provided in the CoNLL-2016 Shared Task. We train models for detecting connectives, extracting arguments of the explicit part, predicting senses of connectives, and predicting senses of the non-explicit part, respectively. The results of the official evaluation are shown in Table 3 , 4 and 5. From the results, we can learn that:",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "4"
},
{
"text": "(1) The connective detection and classification parts achieve great performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "4"
},
{
"text": "(2) The results of the sampled part are good, while there is still some gap between our system and the best one on the explicit and non-explicit parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Experiments",
"sec_num": "4"
},
{
"text": "In this paper, we present a complete discourse parser. Building on previous works and through continuous improvement, our system has achieved good results. According to the official evaluation of the CoNLL-2016 Shared Task closed track, our system scores 0.9905 in F1-measure on the explicit connective classifier and finally achieves 0.1515 in F1-measure on the official blind test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "* MaxEnt classifier of OpenNLP, an open-source toolkit. See http://opennlp.apache.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Shallow discourse parsing using constituent parsing tree",
"authors": [
{
"first": "Changge",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "37--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changge Chen, Peilu Wang, and Hai Zhao. 2015. Shallow discourse parsing using constituent pars- ing tree. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning -Shared Task, pages 37-41, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Grammatical error correction as multiclass classification with single model",
"authors": [
{
"first": "Zhongye",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2013,
"venue": "Seventeenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongye Jia, Peilu Wang, and Hai Zhao. 2013. Gram- matical error correction as multiclass classification with single model. In Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 74-81, Sofia, Bulgaria, August.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A constituent-based approach to argument labeling with joint inference in discourse parsing",
"authors": [
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fang Kong, Hwee Tou Ng, and Guodong Zhou. 2014. A constituent-based approach to argument labeling with joint inference in discourse parsing. In Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language rocessing, pages 68-77, Doha, Qatar, October.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The sonlp-dp system in the conll-2015 shared task",
"authors": [
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "32--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fang Kong, Sheng Li, and Guodong Zhou. 2015. The sonlp-dp system in the conll-2015 shared task. In Proceedings of the Nineteenth Conference on Com- putational Natural Language Learning -Shared Task, pages 32-36, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A pdtb-styled end-to-end discourse parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "",
"issue": "",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering, pages 151-184, April.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using syntax to disambiguate explicit discourse connectives in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 13-16, August.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 683-691, August.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Dinesh",
"middle": [],
"last": "Nikhil",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "The International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Dinesh Nikhil, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bon- nie L Webber. 2008. The penn discourse treebank 2.0. The International Conference on Language Re- sources and Evaluation, May.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Shallow discourse parsing using convolutional neural network",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twentieth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. Shallow discourse parsing using convolutional neu- ral network. In Proceedings of the Twentieth Con- ference on Computational Natural Language Learn- ing -Shared Task, Berlin, Germany, Augest. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A refined endto-end discourse parser",
"authors": [
{
"first": "Jianxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianxiang Wang and Man Lan. 2015. A refined end- to-end discourse parser. In Proceedings of the Nine- teenth Conference on Computational Natural Lan- guage Learning -Shared Task, pages 17-24, Bei- jing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Converting continuous-space language models into ngram language models for statistical machine translation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "845--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Masao Utiyama, Isao Goto, Eiichiro Sumita, Hai Zhao, and Bao-Liang Lu. 2013. Con- verting continuous-space language models into n- gram language models for statistical machine trans- lation. In EMNLP, pages 845-850.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Grammatical error detection and correction using a single maximum entropy model",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhongye",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "74--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Zhongye Jia, and Hai Zhao. 2014a. Grammatical error detection and correction using a single maximum entropy model. In Proceedings of the Eighteenth Conference on Computational Natu- ral Language Learning: Shared Task, pages 74-82, Baltimore, Maryland, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning distributed word representations for bidirectional lstm recurrent neural network",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Frank K Soong, Lei He, and Hai Zhao. 2014b. Learning distributed word representa- tions for bidirectional lstm recurrent neural network. In Proceedings of NAACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural network based bilingual language model growing for statistical machine translation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "189--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Hai Zhao, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2014c. Neural network based bilingual language model growing for statistical ma- chine translation. In EMNLP, pages 189-195.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The conll-2016 shared task on multilingual shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Attapol",
"middle": [
"T"
],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twentieth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, At- tapol T. Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. The conll-2016 shared task on multilingual shallow discourse parsing. In In Proceedings of the Twentieth Conference on Compu- tational Natural Language Learning: Shared Task, Berlin, Germany.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing syntactic and semantic dependencies with two single-stage maximum entropy models",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "203--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008. Parsing syntactic and semantic dependencies with two single-stage max- imum entropy models. Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 203-207, August.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semantic dependency parsing of nombank and propbank: An efficient integrated approach via a largescale feature selection",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, and Chunyu Kit. 2009a. Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 30-39. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, Chunyu Kit, and Guodong Zhou. 2009b. Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 55-60. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cross language dependency parsing using a bilingual lexicon",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Yan Song, Chunyu Kit, and Guodong Zhou. 2009c. Cross language dependency parsing using a bilingual lexicon. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 55-63. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Integrative semantic dependency parsing via efficient large-scale feature selection",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaotian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Xiaotian Zhang, and Chunyu Kit. 2013. Integrative semantic dependency parsing via efficient large-scale feature selection. Journal of Artificial Intelligence Research.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Character-level dependencies in Chinese: usefulness and learning",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "879--887",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao. 2009. Character-level dependencies in Chinese: usefulness and learning. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 879-887. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 1: System overview"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "ConLStr Lowercase of connective. (3) ConCat Part of speech of connective. (4) ConLSib Left sibling number of connective. (5) ConRSib Right sibling number of connective."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Arg2Last The last word of Arg2. (5) FirstS The first word of Arg1 and Arg2. (6) LastS The last word of Arg1 and Arg2. (7) Arg1First3 The first three words of Arg1. (8) Arg2First3 The first three words of Arg2. (9) Arg1Last3 The last three words of Arg1."
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td colspan=\"3\">: Official scores on dev set</td></tr><tr><td/><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"4\">Explicit Connective 0.9967 0.9819 0.9892</td></tr><tr><td>Extract Arg1</td><td colspan=\"3\">0.5529 0.4988 0.5245</td></tr><tr><td>Extract Arg2</td><td colspan=\"3\">0.6674 0.6021 0.6331</td></tr><tr><td colspan=\"4\">Extract Arg1&amp;Arg2 0.4033 0.3639 0.3826</td></tr><tr><td>Parser</td><td colspan=\"3\">0.2013 0.2233 0.2117</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF4": {
"content": "<table><tr><td colspan=\"3\">: Official scores on test set</td></tr><tr><td/><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"4\">Explicit Connective 0.9856 0.9821 0.9838</td></tr><tr><td>Extract Arg1</td><td colspan=\"3\">0.5252 0.3501 0.4201</td></tr><tr><td>Extract Arg2</td><td colspan=\"3\">0.6675 0.4449 0.5339</td></tr><tr><td colspan=\"4\">Extract Arg1&amp;Arg2 0.3615 0.2409 0.2891</td></tr><tr><td>Parser</td><td colspan=\"3\">0.1262 0.1894 0.1515</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Official scores on blind test set"
}
}
}
}