{
"paper_id": "K16-2017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:20.003937Z"
},
"title": "DA-IICT Submission for PDTB-styled Discourse Parser",
"authors": [
{
"first": "Devanshu",
"middle": [],
"last": "Jain",
"suffix": "",
"affiliation": {},
"email": "devanshu.jain919@gmail.com"
},
{
"first": "Prasenjit",
"middle": [],
"last": "Majumder",
"suffix": "",
"affiliation": {},
"email": "prasenjit.majumder@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The CONLL 2016 Shared task focusses on building a Shallow Discourse Parsing system, which is given a piece of newswire text as input and it returns all discourse relations in that text in the form of discourse connectives, its two arguments and the relation sense. We have built a parser for the same. We follow a pipeline architecture to build the system. We employ machine learning methods to train our classifiers for each component in the pipeline. The system achieves an overall F1 score of 0.1065 when tested on blind dataset provided by the task organisers. On the same dataset, for explicit relations, F1 score of 0.2067 is achieved, while for non explicit relations, an F1 score of 0.0112 is achieved.",
"pdf_parse": {
"paper_id": "K16-2017",
"_pdf_hash": "",
"abstract": [
{
"text": "The CONLL 2016 Shared task focusses on building a Shallow Discourse Parsing system, which is given a piece of newswire text as input and it returns all discourse relations in that text in the form of discourse connectives, its two arguments and the relation sense. We have built a parser for the same. We follow a pipeline architecture to build the system. We employ machine learning methods to train our classifiers for each component in the pipeline. The system achieves an overall F1 score of 0.1065 when tested on blind dataset provided by the task organisers. On the same dataset, for explicit relations, F1 score of 0.2067 is achieved, while for non explicit relations, an F1 score of 0.0112 is achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Discourse Parsing is the process of assigning a discourse structure to the input provided in the form of natural language. The term \"Shallow\" signifies that the annotation of one discourse relation is independent of all other discourse relations, thus leaving room for a high level analysis that may attempt to connect them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "For the purpose of training and testing the system, we used PDTB (Penn Discourse Tree Bank), which is a discourse-level annotation on top of PTB (Penn Tree Bank). The corpus provides annotation for all discourse relations present in the documents. A discourse relation is composed of discourse connectives, its two arguments and the relation sense. PDTB provides a list of 100 discourse connectives, which may indicate the presence of a relation. A discourse connective can fall in any of 3 categories: Coordinating Conjunctions (e.g.: and, but, etc.), Subordinating Conjunctions (e.g.: if, because, etc.) or Discourse Adverbial (e.g.: however, also, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "There are four kinds of relations, namely Explicit Relations are marked by the presence of 100 connectives pre-defined by PDTB. Implicit Relations are realised by the reader. There are no words explicitly indicating the relationship. Sometimes, words not pre-defined like connectives by PDTB indicate a relationship. Such relations are called AltLex relations. EntRel relations exist between two sentences in which same entity is being realised. EntRel relations do not have a sense. Some examples are specified in figure 1. Here, the underlined word represents the discourse connective. Italicised text represents argument 1 and bold text represents argument 2. The right indented text following each relation represents the relation sense. The text in the bracket represents the relation type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "There are many challenges associated with this task. Firstly, we need to identify when a word works as a discourse connective and when it does not. In figure 1, consider examples 1 and 3. Both relations contain the word and which is present in the list of explicit connectives. But it acts as a discourse connective in example 1 and not in 3. In 3, it just links political and currency in a noun phrase. Secondly, we need to extract the arguments from sentences. And finally, we need to identify the relation sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Study of discourse parsing has a variety of applications in the field of Natural Language Processing. For instance, in summarisation systems, 1. The agency has already spent roughly $19 billion selling 34 insolvent SLs, and it is likely to sell or merge 600 by the time the bailout concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Expansion.Conjunction (Explicit) 2. But it doesn't take much to get burned. Implicit = FOR EXAMPLE Political and currency gyrations can whipsaw the funds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Expansion.Restatement.Specification (Implicit) 3. Political and currency gyrations can whipsaw the funds. AltLex [Another concern]: The funds' share prices tend to swing more than the broader declared San Francisco batting coach Dusty Baker after game two market.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Expansion.Conjunction (AltLex) 4. Pierre Vinken, 61 years old, will join the board as a non-executive director Nov. 29. Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "(EntRel) Figure 1 : Examples of various types of discourse relations redundancy is an important aspect. We can analyse discourse relations with Expansion sense to weed out the redundant material. Also, in Question Answering systems, we can make use of relations with Cause senses to answer the why questions. The report is organised as follows. Section 2 gives a brief overview of the system. Section 3 describes each component in detail and features deployed to build our parser. Section 4 reports the evaluation strategy and results achieved by our parser. In PDTB corpus for explicit relations, argument 2 is always syntactically bound to the connective (i.e. it is in the same sentence as connective). As far as argument 1 is concerned, it can either be in one of the previous sentences (PS case), in the same sentence (SS case) or after that sentence (FS case). Since, FS cases' occurance was too low (only 4 instances out of total 32000 relations), therefore, such cases are ignored by our system. Argument Position Identifier tries to identify this relative position of argument 1 with respect to argument 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "If the PS case appears, then the immediately previous sentence is considered as the sentence containing argument 1. This is true for 92% of the cases in training data. Argument Extractor extracts the argument span from the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Explicit Sense Classifier identifies the relation sense. It is important to identify this as same connective may convey different meanings in different contexts. For example the word since can either be used in different senses as shown in figure 3. In 1, it is used in temporal sense while in 2, it is being used in causal sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "1. There have been more than 100 mergers and acquisitions within the European paper industry since the most recent wave of friendly takeovers was completed in the U.S. in 1986.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "2. It was a far safer deal for lenders since NWA had a healthier cash flow and more collateral on hand. Figure 3 : Since being used in different senses Non Explicit Classifier tries to identify one of the non-explicit relations (Implicit, AltLex, En-tRel) and otherwise NoRel (no relation) between adjecent sentences within the same paragraph.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Non Explicit Argument Extractor tries to extract the argument spans for non-explicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "For the purpose of classification, our system uses MaxEnt Classification Algorithm without smoothing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "The input to this component is free text from the documents. We sift through all the words in all the documents and identify the occurences of predefined explicit connectives. Then, we identify whether these connectives actually work as discourse connectives or not. For this task, we used Pitler and Nenkova 's (2009) syntactic features. Lin et al. (2014) approached this problem by using POS tags and context based features . They used used features from syntax tree, namely path from connective word to the root and compressed path (i.e. same subsequent nodes in the path are clubbed). We too, have used the similar features, as shown in table 1. Here, C-syn features refer to the combination of Connective string with each of syntactic feature and syn-syn features mean the pairing of a syntactic feature with another different syntactic feature.",
"cite_spans": [
{
"start": 339,
"end": 356,
"text": "Lin et al. (2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Connective Classifier",
"sec_num": "3.1"
},
{
"text": "Here, we first identify the relative position of argument 1 with respect to argument 2. Given this position, we extract the arguments from sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Labeller",
"sec_num": "3.2"
},
{
"text": "To identify the position of argument 1, we extract the features mentioned in table 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Position Identifier",
"sec_num": "3.2.1"
},
{
"text": "After predicting the position of argument 1, we employed different tactics for different positions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "\u2022 If the position is SS (that is, both arguments are in same sentence), then we use constituency based approach by Kong et.al. without Joint Inference to extract arguments. This consists of two steps:",
"cite_spans": [
{
"start": 115,
"end": 126,
"text": "Kong et.al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "-Pruning: In the parse tree of sentence, identify the node dominating all the connective words. From that node move towards the root and collect all the siblings. If this node does not exactly contain the connective words, collect all its children too. These nodes are termed as constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "-Classification:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "For all these constituents, we extract the features mentioned in table 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "\u2022 If the position is PS, then we consider the immediately previous sentence as a candidate for containing argument 1 and the sentence containing connective string as a candidate for containing argument 2. Extracting the arguments from sentence is a two step process:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "-Cause Splitter: We split the sentence into clauses using punctuation symbols. For the resulting clauses, we again separate SBAR (Subordinating clauses) components from them. -Now we classify each of these clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "For the immediately previous sentence, a clause can belong to either Arg1 or none, and for the sentence containing the connective string, a clause can belong to either Arg2 or none. For each clause, we extract the features mentioned in table 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "Table 7: Features for Non Explicit Argument Extraction. Features include: first word in this clause; last word in this clause; last word in previous clause; first word in next clause; last word in previous clause + first word in this clause; last word in this clause + first word in next clause; position of this clause in the sentence (start, middle or end).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.2.2"
},
{
"text": "To determine the relation sense, we use Lin's as well as Pitler's features, as shown in table 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classifier",
"sec_num": "3.3"
},
{
"text": "Non Explicit Relations occur between adjacent sentences within same paragraph. We consider the first sentence as the one containing argument 1 and second containing argument 2. Then, we extract the features mentioned in table 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non Explicit Classifier",
"sec_num": "3.4"
},
{
"text": "To extract argument spans for Non Explicit and Non EntRel Relations, we first use clause splitter as mentioned before and then extract the features for each clause as mentioned in table 7. For EntRel relations, we simply mention the first sentence as argument 1 and second sentence as argument 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Extractor",
"sec_num": "3.4.1"
},
{
"text": "We used the training datasets provided by CONLL 2016 organisers (LDC2016E50). In addition we also used the brown clusters (3200 classes). For Stemming purposes, we used snowball stemmer and for lemmatising, we used stanford core nlp library. For the purpose of classification, we used Apache OpenNLP implementation of MaxEnt classifier. We used Java programming language to implement the parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Setup",
"sec_num": "4.1"
},
{
"text": "A relation is seen correct iff:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Strategy",
"sec_num": "4.2"
},
{
"text": "\u2022 The discourse connective is correctly detected (for explicit relations)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Strategy",
"sec_num": "4.2"
},
{
"text": "\u2022 Sense of relation is correctly predicted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Strategy",
"sec_num": "4.2"
},
{
"text": "\u2022 Text spans of two arguments as well as their labels (Arg1 and Arg2) are correctly predicted. Partial matches are not identified as correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Strategy",
"sec_num": "4.2"
},
{
"text": "Results are mentioned in tables 8. As we can see, explicit connective classifier achieves only a precision score of around 0.77 while the best team previous year (Wang) achieved a precision of 0.93. This is not good enough and perhaps is the major reason for error being propagated towards subsequent components. The results of non explicit relations were also discouraging with an F1 score of only 0.012.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "This paper describes the PDTB-styled discourse parser system we implemented for CONLL '16 shared task. We divided the system into different components and arrange in a pipeline. We apply Maximum Entropy for each of these components. It is an ongoing work. We plan to incorporate deep learning mehods in each component to try to improve the system. We also plan to do feature selection to optimise the components of our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Attribution and the (non-)alignment of syntactic and discourse arguments of connectives",
"authors": [
{
"first": "N",
"middle": [],
"last": "Dines",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie KNOTT, A. A Data-Driven Methodology for Motivating a Set of Coherence Relations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DINES, N., LEE, A., MILTSAKAKI, E., PRASAD, R., JOSHI, A., AND WEBBER, B. Attribution and the (non-)alignment of syntactic and discourse argu- ments of connectives. In Proceedings of the Work- shop on Frontiers in Corpus Annotations II: Pie KNOTT, A. A Data-Driven Methodology for Motivat- ing a Set of Coherence Relations. PhD thesis, De- partment of Artificial Intelligence, University of Ed- inburgh, 1996.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A constituentbased approach to argument labeling with joint inference in discourse parsing",
"authors": [
{
"first": "F",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "KONG, F., NG, H. T., AND ZHOU, G. A constituent- based approach to argument labeling with joint in- ference in discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25- 29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL (2014), pp. 68-77.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A pdtb-styled end-to-end discourse parser",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LIN, Z., NG, H. T., AND KAN, M. A pdtb-styled end-to-end discourse parser. Natural Language En- gineering 20, 2 (2014), 151-184.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Using syntax to disambiguate explicit discourse connectives in text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "PITLER, E., AND NENKOVA, A. Using syntax to dis- ambiguate explicit discourse connectives in text. In ACL 2009, Proceedings of the 47th Annual Meet- ing of the Association for Computational Linguistics and the 4th International Joint Conference on Natu- ral Language Processing of the AFNLP, 2-7 August 2009, Singapore, Short Papers (2009), pp. 13-16.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Gollub",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Rangel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Stamatatos",
"suffix": ""
},
{
"first": "B",
"middle": [
"E"
],
"last": "Stein",
"suffix": ""
}
],
"year": 2014,
"venue": "Information Access Evaluation meets Multilinguality, Multimodality, and Visualization. 5th International Conference of the CLEF Initiative",
"volume": "",
"issue": "",
"pages": "268--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "POTTHAST, M., GOLLUB, T., RANGEL, F., ROSSO, P., STAMATATOS, E., AND STEIN, B. Improving the Reproducibility of PAN's Shared Tasks: Pla- giarism Detection, Author Identification, and Au- thor Profiling. In Information Access Evaluation meets Multilinguality, Multimodality, and Visualiza- tion. 5th International Conference of the CLEF Ini- tiative (CLEF 14) (Berlin Heidelberg New York, Sept. 2014), E. Kanoulas, M. Lupu, P. Clough, M. Sanderson, M. Hall, A. Hanbury, and E. Toms, Eds., Springer, pp. 268-299.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discovering implicit discourse relations through brown cluster pair representation and coreference patterns",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "645--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "RUTHERFORD, A., AND XUE, N. Discovering im- plicit discourse relations through brown cluster pair representation and coreference patterns. In Proceed- ings of the 14th Conference of the European Chap- ter of the Association for Computational Linguistics, EACL 2014, April 26-30, 2014, Gothenburg, Sweden (2014), pp. 645-654.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A refined end-to-end discourse parser",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WANG, J., AND LAN, M. A refined end-to-end dis- course parser. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task (Beijing, China, July 2015), Association for Computational Linguistics, pp. 17- 24.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The conll-2016 shared task on shallow discourse parsing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twentieth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "XUE, N., NG, H. T., PRADHAN, S., WEBBER, B., RUTHERFORD, A., WANG, C., AND WANG, H. The conll-2016 shared task on shallow discourse parsing. In Proceedings of the Twentieth Confer- ence on Computational Natural Language Learning -Shared Task (Berlin, Germany, August 2016), As- sociation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "There are five major components involved in the process of discourse parsing as shown in figure 2identifies the cases when explicit connectives are being used as discourse connectives as opposed to when they are not.Explicit Argument Labeller extracts arguments of the relation. This component itself consists of two sub-components:\u2022 Argument Position Identifier \u2022 Argument Extractor"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 2: System Pipeline"
},
"TABREF2": {
"text": "Features for Connective Classifier",
"content": "<table><tr><td colspan=\"2\">Feature ID Feature</td></tr><tr><td>1</td><td>Connective String</td></tr><tr><td>2</td><td>Position of Connective String in sentence</td></tr><tr><td>3</td><td>POS tag of Connective String</td></tr><tr><td>4</td><td>1st previous word to Connective String</td></tr><tr><td>5</td><td>POS tag of 1st previous word to Connective String</td></tr><tr><td>6</td><td>2nd previous word to Connective String</td></tr><tr><td>7</td><td>POS tag of 2nd previous word to Connective String</td></tr><tr><td>8</td><td>1st previous word + Connective String</td></tr><tr><td>9</td><td>POS of 1st previous word + POS of Connective String</td></tr><tr><td>10</td><td>2nd previous word + Connective String</td></tr><tr><td>11</td><td>POS of 2nd previous word + POS of Connective String</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "Features for Argument Position Classifier",
"content": "<table><tr><td colspan=\"2\">Feature ID Feature</td></tr><tr><td>1</td><td>Connective String</td></tr><tr><td>2</td><td>Lowercased Connective String</td></tr><tr><td>3</td><td>Category of Connective String : Subordinating, Coordinating or Discourse Adverbials</td></tr><tr><td>4</td><td>Constituent Context: Value of Constituent Node + its parent + its left sibling + its right</td></tr><tr><td/><td>sibling</td></tr><tr><td>5</td><td>Path of Connective String to the constituent node in syntax tree</td></tr><tr><td>6</td><td>Relative Position of constituent node with respect to Connective String</td></tr><tr><td>7</td><td>Path of Connective String to the constituent node in syntax tree + whether number of left</td></tr><tr><td/><td>siblings of Connective String \u00bf 1</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"text": "Features for Kong's approach in SS case",
"content": "<table><tr><td colspan=\"2\">Feature ID Feature</td></tr><tr><td>1</td><td>Production Rules in the clause</td></tr><tr><td>2</td><td>Lowercased Verbs in the clause</td></tr><tr><td>3</td><td>Lemmatised Verbs in the clause</td></tr><tr><td>4</td><td>Connective String</td></tr><tr><td>5</td><td>Lowercased Connective String</td></tr><tr><td>6</td><td>Category of Connective String : Subordinating, Coordinating or Discourse Adverbials</td></tr><tr><td>7</td><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Features for Classifying clauses in PS case",
"content": "<table><tr><td>Feature Type</td><td colspan=\"2\">Feature ID Feature</td></tr><tr><td/><td>1</td><td>Connective String</td></tr><tr><td>Lexical Features</td><td>2 3</td><td>Lowercased Connective String POS tag of Connective String</td></tr><tr><td/><td>4</td><td>Previous word to Connective String + Connective String</td></tr><tr><td/><td>5</td><td>Self Category : Parent of the connective in syntax tree</td></tr><tr><td>Syntactic Features</td><td>6 7</td><td>Parent Category : Parent of self category in syntax tree Left Sibling Category : Left sibling of self category in syntax tree</td></tr><tr><td/><td>8</td><td>Right Sibling Category : Right sibling of self category in syntax tree</td></tr><tr><td/><td>9</td><td>C-syn features</td></tr><tr><td/><td>10</td><td>syn-syn features</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "Features for Explicit Sense Classifier",
"content": "<table><tr><td colspan=\"2\">Feature ID Feature</td></tr><tr><td>1</td><td>Production Rules in syntax tree</td></tr><tr><td>2</td><td>Dependency Rules in dependency tree</td></tr><tr><td>3</td><td>Word Pair features</td></tr><tr><td>4</td><td>First 3 terms of argument 2 sentence</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Features for Non Explicit Classifier",
"content": "<table><tr><td colspan=\"2\">Feature ID Feature</td></tr><tr><td>1</td><td>Production Rules in syntax tree</td></tr><tr><td>2</td><td>Lower cased verbs in this clause</td></tr><tr><td>3</td><td>Lemmatised Verbs in this clause</td></tr><tr><td>4</td><td>First Word in this clause</td></tr><tr><td>5</td><td>Last Word in this clause</td></tr><tr><td>6</td><td>Last Word in previous clause</td></tr><tr><td>7</td><td>Fist word in next clause</td></tr><tr><td>8</td><td>Last Word in previous clause + First word in this clause</td></tr><tr><td>9</td><td>Last Word in this clause + First Word in next clause</td></tr><tr><td>10</td><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}