{
"paper_id": "K16-2010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:21.243579Z"
},
"title": "Shallow Discourse Parsing Using Convolutional Neural Network",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "zhaohai@cs.sjtu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a discourse parsing system for our participation in the CoNLL 2016 Shared Task. We focus on the supplementary task: Sense Classification, especially the Non-Explicit one which is the bottleneck of discourse parsing system. To improve Non-Explicit sense classification, we propose a Convolutional Neural Network (CNN) model to determine the senses for both English and Chinese tasks. We also explore a traditional linear model with novel dependency features for Explicit sense classification. Compared with the best system in CoNLL-2015, our system achieves competitive performances. Moreover, as shown in the results, our system has higher F1 score on Non-Explicit sense classification.",
"pdf_parse": {
"paper_id": "K16-2010",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a discourse parsing system for our participation in the CoNLL 2016 Shared Task. We focus on the supplementary task: Sense Classification, especially the Non-Explicit one which is the bottleneck of discourse parsing system. To improve Non-Explicit sense classification, we propose a Convolutional Neural Network (CNN) model to determine the senses for both English and Chinese tasks. We also explore a traditional linear model with novel dependency features for Explicit sense classification. Compared with the best system in CoNLL-2015, our system achieves competitive performances. Moreover, as shown in the results, our system has higher F1 score on Non-Explicit sense classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper presents the Shanghai Jiao Tong University discourse parsing system for the CoNLL 2016 Shared Task (Xue et al., 2016) on Shallow Discourse Parsing and the supplementary tasks of sense classification for English and Chinese.",
"cite_spans": [
{
"start": 110,
"end": 128,
"text": "(Xue et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As shown by the results of the same task in CoNLL 2015 (Xue et al., 2015) , sense classification has been found more difficult than other subtasks, especially determining Non-Explicit senses which is the bottleneck of the end-to-end discourse parsing system. Without the discourse connectives which provide strong indications, the Non-Explicit relations between adjacent sentences are difficult to figure out. Therefore, our primary work is to improve sense classification components, especially on Non-Explicit relations. For other components such as connectives detection and arguments extraction, we just follow the top ranked system (Wang and Lan, 2015) in CoNLL-2015, which is as the baseline system in this paper.",
"cite_spans": [
{
"start": 55,
"end": 73,
"text": "(Xue et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 637,
"end": 657,
"text": "(Wang and Lan, 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In CoNLL-2015, various approaches were explored to conquer the sense classification problem, which is a straightforward multi-category classification task (Okita et al., 2015; Wang and Lan, 2015; Chiarcos and Schenk, 2015; Song et al., 2015; Stepanov et al., 2015; Yoshida et al., 2015; Sun et al., 2015; Nguyen et al., 2015; Laali et al., 2015) . Typical data-driven machine learning methods, like Maximum Entropy and Support Vector Machine, were adopted. Some of them selected lexical and syntactic features over the arguments, including linguistically motivated word groupings such as Levin verb classes and polarity tags. Brown cluster features, surface features and entity semantics were also effective to enhance sense classification. Additionally, paragraph embeddings were also used to determine the senses (Okita et al., 2015) . In other previous work of implicit sense classification, Chen et al (2015) used word-pair features for predicting missing connectives, Zhou et al. (2010) attempted to insert discourse connectives between arguments with the use of a language model, Lin et al. (2009) applied various feature selection methods. Although traditional methods have performed well on semantic tasks through feature engineering (Zhao et al., 2009a; Zhao et al., 2009b; , they still suffer from data sparsity problems.",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Okita et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 176,
"end": 195,
"text": "Wang and Lan, 2015;",
"ref_id": "BIBREF18"
},
{
"start": 196,
"end": 222,
"text": "Chiarcos and Schenk, 2015;",
"ref_id": "BIBREF2"
},
{
"start": 223,
"end": 241,
"text": "Song et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 242,
"end": 264,
"text": "Stepanov et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 265,
"end": 286,
"text": "Yoshida et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 287,
"end": 304,
"text": "Sun et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 305,
"end": 325,
"text": "Nguyen et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 326,
"end": 345,
"text": "Laali et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 815,
"end": 835,
"text": "(Okita et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 973,
"end": 991,
"text": "Zhou et al. (2010)",
"ref_id": "BIBREF31"
},
{
"start": 1086,
"end": 1103,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 1242,
"end": 1262,
"text": "(Zhao et al., 2009a;",
"ref_id": "BIBREF28"
},
{
"start": 1263,
"end": 1282,
"text": "Zhao et al., 2009b;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, Neural Network (NN) methods have shown competitive or even better performance than traditional linear models with hand-crafted sparse features for some Nature Language Process (NLP) tasks (Wang et al., 2013; Wang et al., 2014; Cai and Zhao, 2016; Zhang and Zhao, 2016) , such as sentence modeling (Kalchbrenner et al., 2014; Kim, 2014) . In Non-Explicit sense classification, due to the absence of discourse connectives, the task is exactly to classify a sentence pair, where CNN could be utilized.",
"cite_spans": [
{
"start": 198,
"end": 217,
"text": "(Wang et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 218,
"end": 236,
"text": "Wang et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 237,
"end": 256,
"text": "Cai and Zhao, 2016;",
"ref_id": "BIBREF0"
},
{
"start": 257,
"end": 278,
"text": "Zhang and Zhao, 2016)",
"ref_id": "BIBREF27"
},
{
"start": 307,
"end": 334,
"text": "(Kalchbrenner et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 335,
"end": 345,
"text": "Kim, 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For Explicit sense classification which has strong discourse relation information provided by the connectives, we will use traditional linear methods with novel dependency features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: Section 2 briefly describes our system, Section 3 introduces the CNN model for modeling sentence pairs, Section 4 discusses our main works including Explicit sense classification and Non-Explicit sense classification, Section 5 shows our experiments on sense classification and Section 6 reports our results on the final official evaluation. Section 7 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our parsing system uses the sequential pipeline following by (Lin et al., 2014; Wang and Lan, 2015) . Figure 1 shows the system pipeline. The system can be roughly split into two parts: the Explicit parser and the Non-Explicit parser. We will give a brief introduction for every components. The overall parser starts from detecting discourse connectives for the Explicit Parser. Then the types of relative location of Argument1 (Arg1) and Ar-gument2 (Arg2) are identified: Arg1 located in the exact previous sentence of Arg2 (noted as PS) or both arguments are within the same sentence (noted as SS). For the last part of Explicit parser, the tuples (Arg1, Connective, Arg2) are classified into one of the Explicit relation senses. For the Non-Explicit parser, it classifies the senses of Non-Explicit with original arguments and then extracts the arguments of the argument pairs. Finally, the senses of Non-Explicit argument pairs are again decided with refined arguments. Among all subtasks, we will focus on sense classification the other parts have been done relatively well in previous work. ",
"cite_spans": [
{
"start": 61,
"end": 79,
"text": "(Lin et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 80,
"end": 99,
"text": "Wang and Lan, 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Each sentence could obtain a sentence vector through CNN and the final classification is based on the transformations of the sentence vectors. Although both Explicit and Non-Explicit tasks could utilize the neural model, CNN might be more apposite for the Non-Explicit one because of lacking indicating connectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "3"
},
{
"text": "The architecture of our CNN model, is illustrated in Figure 2 . Firstly, a look-up table is utilized to fetch the embeddings of words and partof-speech (POS) tags, forming two sentence embeddings which will be the input of the convolutional layer. Through the convolution and max pooling operations, two sentence vectors are obtained. Finally, these vectors will be sent to the final softmax layer after concatenated.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 61,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "3"
},
{
"text": "Embedding For a sentence S = w 1 w 2 . . . w n and POS sequence P = p 1 p 2 . . . p n , the sentence embedding M is formed through projection and concatenating. Following the jargons in the task, the input sentences will be called \"Arguments\" and the two arguments are represented as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "3"
},
{
"text": "M 1 = [w 1 1 \u2295 p 1 1 ; w 1 2 \u2295 p 1 2 ; . . . ; w 1 n \u2295 p 1 n ] M 2 = [w 2 1 \u2295 p 2 1 ; w 2 2 \u2295 p 2 2 ; . . . ; w 2 n \u2295 p 2 n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "3"
},
{
"text": "Here w j i \u2208 R dw is the word vector corresponding to the i-th word in the j-th argument, and p j i \u2208 R dp is the POS vector for w j i , where d w and d p respectively stand for the dimensions of word and POS vectors. \u2295 and ; are the concatenation operators on different dimensions. Considering the efficiency, we specialize a max sentence length for both arguments, and apply truncating or zero-padding when needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Neural Network",
"sec_num": "3"
},
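The embedding step above (word vector concatenated with POS vector per token, with truncation and zero-padding to a fixed length) can be sketched as follows. This is a minimal illustration with hypothetical toy sizes, not the paper's dimensions (300/50) or vocabulary:

```python
import numpy as np

# Toy sizes for illustration only: d_w = 4 word dims, d_p = 2 POS dims, max length 5.
d_w, d_p, max_len = 4, 2, 5
rng = np.random.default_rng(0)

word_vocab = {"<pad>": 0, "it": 1, "is": 2, "raining": 3}
pos_vocab = {"<pad>": 0, "PRP": 1, "VBZ": 2, "VBG": 3}
W_word = rng.normal(size=(len(word_vocab), d_w))
W_pos = rng.normal(size=(len(pos_vocab), d_p))
W_word[0] = 0.0  # padding rows embed to zero
W_pos[0] = 0.0

def embed_argument(words, tags):
    """Build M = [w_1 ⊕ p_1; ...; w_n ⊕ p_n], truncated/padded to max_len rows."""
    words, tags = words[:max_len], tags[:max_len]  # truncation
    rows = [np.concatenate([W_word[word_vocab[w]], W_pos[pos_vocab[t]]])
            for w, t in zip(words, tags)]
    rows += [np.zeros(d_w + d_p)] * (max_len - len(rows))  # zero-padding
    return np.stack(rows)  # shape: (max_len, d_w + d_p)

M = embed_argument(["it", "is", "raining"], ["PRP", "VBZ", "VBG"])
assert M.shape == (max_len, d_w + d_p)
assert np.allclose(M[3:], 0.0)  # padded positions are all-zero rows
```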
{
"text": "Filter matrices [W 1 , W 2 , . . . , W k ] with several variable sizes [l 1 , l 2 , . . . , l k ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "are utilized to perform the convolution operations for the sentence embeddings. Via parameter sharing, this feature extraction procedure become same for both arguments. For the sake of simplicity, ignoring the superscripts, we will explain the procedure for only one argument. The sentence embedding will be transformed to sequences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "C j (j \u2208 [1, k]) : C j = [. . . ; tanh(W j \u2022 M [i:i+l j \u22121] + b j ); . . . ] Here, [i : i + l j \u2212 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "indexes the convolution window. Additionally, We apply wide convolution operation between embedding layer and filter matrices, because it ensures that all weights in the filters reach the entire sentence, including the words at the margins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "Max Pooling A one-max-pooling operation is adopted after convolution and the sentence vector s is obtained through concatenating all the mappings for those k filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "s = [s 1 \u2295 \u2022 \u2022 \u2022 \u2295 s j \u2295 \u2022 \u2022 \u2022 \u2295 s k ] s j = max(C j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "In this way, the model can capture the most important features in the sentence with different filters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
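The wide convolution and one-max pooling described above can be sketched for a single filter. This is an illustrative toy, with made-up sizes rather than the paper's hyper-parameters:

```python
import numpy as np

# Toy sizes: sentence length n, embedding dim d, filter width l.
rng = np.random.default_rng(1)
n, d, l = 6, 3, 2
M = rng.normal(size=(n, d))   # sentence embedding
W = rng.normal(size=(l, d))   # one filter matrix W_j
b = 0.1                       # bias b_j

# Wide convolution: zero-pad l-1 rows on each side so every filter weight
# reaches every word, including those at the sentence margins.
M_pad = np.vstack([np.zeros((l - 1, d)), M, np.zeros((l - 1, d))])
C = np.array([np.tanh(np.sum(W * M_pad[i:i + l]) + b)
              for i in range(M_pad.shape[0] - l + 1)])

s_j = C.max()  # one-max pooling keeps the strongest response of this filter
assert C.shape[0] == n + l - 1  # wide convolution yields n + l - 1 positions
```

The full sentence vector s is then the concatenation of such s_j values over all k filters.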
{
"text": "Concatenating and Softmax Now adding the superscripts and considering the two arguments (s 1 , s 2 ), they are concatenated to form the argument-pair representation vector v as below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "v = s 1 \u2295 s 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "For the final labeling decision, a softmax layer will be applied using the argument-pair vector v. It is raining I bring an umbrella",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "Figure 2: Our neural model for sentence classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "Training The training object J will be the crossentropy error E with L2 regularization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "E(\u0177, y) = \u2212 l j y j \u00d7 log(P r(\u0177 j )) J(\u03b8) = 1 m m k E(\u0177 (k) , y (k) ) + \u03bb 2 \u03b8 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
{
"text": "where y j is the gold label and\u0177 j is the predicted one. For the optimization process, we apply the diagonal variant of AdaGrad (Duchi et al., 2011) with mini-batches.",
"cite_spans": [
{
"start": 128,
"end": 148,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional layer",
"sec_num": null
},
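The objective above (cross-entropy over a softmax output plus an L2 penalty on the parameters) can be computed as in the following sketch; the logits, parameter vector, and regularization weight are illustrative stand-ins, not the paper's values:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax producing Pr(y_hat)."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, gold):
    # E(y_hat, y) = -sum_j y_j * log(Pr(y_hat_j)) with a one-hot gold label
    return -np.log(probs[gold])

logits = np.array([2.0, 0.5, -1.0])  # scores for l = 3 senses (toy values)
probs = softmax(logits)
theta = np.array([0.3, -0.2, 0.1])   # stand-in for all model parameters
lam, m = 0.01, 1                     # illustrative lambda and batch size

# J(theta) = (1/m) * sum_k E(...) + (lambda/2) * ||theta||^2, here for m = 1
J = cross_entropy(probs, gold=0) / m + (lam / 2) * np.sum(theta ** 2)
assert abs(probs.sum() - 1.0) < 1e-9
assert J > 0
```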
{
"text": "Now we will discuss about the sense classification task. Both the Explicit and Non-Explicit labeling are typical classification tasks with the argumentpair as the input and the CNN model could be applied to both of them. However, the Explicit task provides the connectives which are the crucial indicators and we find that CNN performs slightly poorly on this task even if embeddings for indicators are concatenated. Thus, for the Explicit task, we will adopt the traditional linear model considering only the features related with the indicators and CNN model will be applied to the more difficult Non-Explicit task. feature, as connectives are ambiguous as pointed out in Pitler et al. (2008) , and the majority of the ambiguous connectives is highly skewed toward certain senses (Lin et al., 2014 ). Thus, the task is in fact to disambiguate the connective under different contexts.",
"cite_spans": [
{
"start": 674,
"end": 694,
"text": "Pitler et al. (2008)",
"ref_id": "BIBREF12"
},
{
"start": 782,
"end": 799,
"text": "(Lin et al., 2014",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Classification",
"sec_num": "4"
},
{
"text": "Although the provided context contains the two whole arguments, the most crucial indicators are still the words that near the connectives or the ones that have close syntactic dependency relations with the connectives. This might explain why plain CNN model performs poorly on this task without these key features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "Thus, for the Explicit task, we will adopt the traditional method, using Support Vector Machines (SVM) with linear kernel and manually selected features. We consider only three features which are all related to Connective C: (1) C string (2) C POS (3) C string combined with POS of C's parent node in dependency tree (noted as C-HP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
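The three connective-centred features can be extracted as in the sketch below. The dependency-tree representation here (a map from token index to form, POS, and head index) is a hypothetical toy structure, not a specific parser's API:

```python
# Sketch of the three features for the linear Explicit-sense classifier:
# (1) connective string, (2) connective POS, (3) string + POS of parent node.
def connective_features(tree, conn_idx):
    form, pos, head = tree[conn_idx]
    head_pos = tree[head][1] if head in tree else "ROOT"
    return {
        "C": form,                      # (1) C string
        "C-POS": pos,                   # (2) C POS
        "C-HP": f"{form}-{head_pos}",   # (3) C string + POS of C's parent
    }

# Toy tree where the parent of the connective '而' is a verb tagged 'VC'.
tree = {0: ("而", "AD", 1), 1: ("表现", "VC", -1)}
feats = connective_features(tree, conn_idx=0)
assert feats["C-HP"] == "而-VC"
```

These string-valued features would then be one-hot encoded (e.g. via a feature hasher or dictionary vectorizer) before being fed to the linear SVM.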
{
"text": "We will use an example in the Chinese task to explain the influence of the third feature which utilizes the dependency tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "(1) 男选手的成绩是近１０年来最差的一次，说明水平在下降 [Arg1] 而 [Connective] 罗莉、乔娅和莫惠兰３名女选手都是第一次参加世界大赛，均表现不错。[Arg2] (Gloss: the men's results were the worst in nearly ten years, showing that their level is declining [Arg1]; while [Connective] the three female athletes Luo Li, Qiao Ya and Mo Huilan all took part in a world championship for the first time, and all performed well [Arg2].)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "(Contrast -CHTB 0310)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "In Chinese, '而' is a connective that is ambiguous between the 'Contrast' and 'Conjunction' relations. Because 'Conjunction' accounts for a large part of these instances, the classifier will tend to predict '而' as 'Conjunction' if it uses only connective features. In this example, the sense of the instance is 'Contrast', but it is predicted as 'Conjunction' when only the connective itself is considered. If we add the third feature, i.e., the combination feature '而-VC' (C is '而' and the POS of C's parent node is 'VC'), the classifier decides the sense correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "Table 3: F1 scores (%) with different CNN filter sizes for Non-Explicit on original arguments on the development set: (2,3,3) 38.45; (2,4,5) 38.86; (2,6,12) 38.45; (3,3,3) 39.40; (4,8,12) 40.08; (6,8,18) 38.99. Table 4: F1 scores (%) with different CNN filter sizes for Non-Explicit on refined arguments on the development set: (3,3,3) 45.50; (3,5,9) 43.92.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "4.1"
},
{
"text": "The situations for the Non-Explicit task are quite different. Without the information of connectives, we have to extract the discourse relations through the two arguments, which might need semantic comprehensions sometimes. This might be hard for traditional methods because it is not easy to extract hand-craft features. The neural models which can automatically extract features may be another solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Explicit Sense Classification",
"sec_num": "4.2"
},
{
"text": "We apply the CNN model described in Section 3 for this task. To simplify model building and parameter tuning, and also due to the similar architectures, the model structures for sense classification components in English and Chinese are identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-Explicit Sense Classification",
"sec_num": "4.2"
},
{
"text": "Our system is trained on the PDTB 2.0 corpus. Sections 02-21 are used as training set, and Section 22 as the development set. There are two tests sets for the shared task: Section 23 of the PDTB, and a blind test prepared especially for this task. We participate in the closed track, so only two resources (Brown Clusters and MPQA Subjectivity Lexicon) are used. test platform of CoNLL-2016 still adopts still the TIRA evaluation platform (Potthast et al., 2014) . Non-Explicit relations contains three types: Implicit, EntRel and AltLex. Originally EntRel is not treated as discourse relation in Penn Discourse TreeBank (PDTB) (Prasad et al., 2008) , but this category has been included in this task and we also count it as one sense. Some instances are annotated with two senses, so the predicted sense for a relation must match one of the two senses if there is more than one sense. We compare with the best system in the competition of CoNLL 2015 (Wang and Lan, 2015) , which is regarded as the baseline. Table 1 reports our results of the Explicit sense classifier on both English and Chinese develop-ment sets. Compared with the baseline, our methods obtain progress and the overall F1 score of Explicit Sense classification increases by 1.97% for English task.",
"cite_spans": [
{
"start": 439,
"end": 462,
"text": "(Potthast et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 628,
"end": 649,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF14"
},
{
"start": 951,
"end": 971,
"text": "(Wang and Lan, 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1009,
"end": 1016,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For both English and Chinese sense classification, the C string and C POS features can classify most of the relations correctly. Moreover, the new combination feature based on dependency relations helps effectively disambiguate senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explicit Sense Classification",
"sec_num": "5.1"
},
{
"text": "For the Non-Explicit task, we utilize the CNN model to model the argument pairs. Following (Wang and Lan, 2015) , in the final discourse parsing pipeline, we utilize the sense classifier twice, once for original arguments (adjacent sentence pairs) and once for redefined arguments (after argument extraction). Because the two classifiers expect different inputs, we train different CNN models for these two tasks and also with slightly different hyper-parameters. Table 7 : Results of the supplementary task on English and Chinese.",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(Wang and Lan, 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 464,
"end": 471,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-Explicit Sense Classification",
"sec_num": "5.2"
},
{
"text": "On Original Arguments The input for this classifier will be two adjacent sentences without Explicit discourse relations. The maximum input length for both sentences is set to 80, the dimensions for word embeddings and POS embeddings are 300 and 50 respectively. The word embeddings are initialized with pre-trained word vectors using word2vec 1 (Mikolov et al., 2013) and other parameters are randomly initialized including POS embeddings. We employ three categories of CNN filters, and choose 512 as the number of feature maps. About the filter region sizes, Zhang and Wallace (2015) have concluded that each dataset has its own optimal range. We set the three filter sizes to 4,8,12 separately according to the empirical results in Table 3 .",
"cite_spans": [
{
"start": 345,
"end": 367,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 560,
"end": 584,
"text": "Zhang and Wallace (2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 734,
"end": 741,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Non-Explicit Sense Classification",
"sec_num": "5.2"
},
{
"text": "On Refined Arguments This module is similar to the above one but with some differences. The input will be the refined arguments and correspondingly, golden argument pairs are utilized for training. Thus, we adopt slightly different hyperparameters. The number of feature maps for each filter categories is set to 1024, and the final filter region sizes are 3,3,3 accordingly to the empirical results in Table 4 . For the choice of filter region sizes, we have attempted a lot of combinations, but only the best ones are shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Non-Explicit Sense Classification",
"sec_num": "5.2"
},
{
"text": "The trained model on refined arguments could be directly utilized for part of Non-Explicit sense classification in the supplementary task and Table 2 reports the results on English and Chinese development sets. Compared to the Explicit task, the Non-Explicit task is indeed much more difficult. Using CNN, we achieve an improvement of 2.58% compared to the baseline. This result fully illustrates that CNN model is suitable to determine the Non-Explicit relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results of classification",
"sec_num": null
},
{
"text": "We report our official results and comparisons on Shallow Discourse Parsing task on English and the 1 http://www.code.google.com/p/word2vec supplementary tasks of sense classification on English and Chinese. Table 5 and 6 show the performance on two test sets for English: i) (Official) Blind test set; ii) Standard WSJ test set. Our parsers give higher F1 scores than baselines: 0.55% higher on WSJ test set and 0.61% on Blind Test set, though our Explicit connective detection F1 is less than theirs at the beginning of the pipeline, which might introduce more error propagations. This might suggest that our sense classifiers play key roles in the system.",
"cite_spans": [],
"ref_spans": [
{
"start": 208,
"end": 215,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "To see the performances of the sense classifiers, Table 7 shows the results for English and Chinese supplementary tasks (sense classifications on golden argument pairs without errors propagation). For Explicit sense classification, the features we proposed are proved to be effective. For Non-Explicit sense classification, our CNN model also works well on the test sets. Compared to the performance of discourse parsing sense classification components (with error propagation), the subtask results are higher. The reasons include: i) Connective detection serves as the first component of the pipeline and plays an important role, because it has a major influence on Explicit sense classification which relies heavily on discourse connectives. ii) Arguments extraction also have important effects on the classifications for both Explicit and Non-Explicit relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "This paper describes our discourse parsing system for the CoNLL 2016 shared Task and reports our results on test data and blind test data. Despite of the errors propagation in the beginning of discourse parsing pipeline, we still obtain improvements against baseline, and perform well on the supplementary tasks. Especially, the CNN model for Non-Explicit sense classification gives competitive performances. Actually, Non-Explicit sense classification performance can be furthermore improved in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural word segmentation learning for Chinese",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng Cai and Hai Zhao. 2016. Neural word segmen- tation learning for Chinese. In Proceedings of ACL, Berlin, Germany, August.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Shallow discourse parsing using constituent parsing tree",
"authors": [
{
"first": "Changge",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "37--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changge Chen, Peilu Wang, and Hai Zhao. 2015. Shallow discourse parsing using constituent pars- ing tree. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning -Shared Task, pages 37-41, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A minimalist approach to shallow discourse parsing and implicit relation recognition",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Chiarcos",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Schenk",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Chiarcos and Niko Schenk. 2015. A min- imalist approach to shallow discourse parsing and implicit relation recognition. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task, pages 42-49, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Ma- chine Learning Research, 12:2121-2159.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A convolutional neural network for modelling sentences",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1404.2188"
]
},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural net- work for modelling sentences. arXiv preprint arXiv:1404.2188.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The clac discourse parser at CoNLL-2015",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Laali",
"suffix": ""
},
{
"first": "Elnaz",
"middle": [],
"last": "Davoodi",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "56--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Majid Laali, Elnaz Davoodi, and Leila Kosseim. 2015. The clac discourse parser at CoNLL-2015. In Pro- ceedings of the Nineteenth Conference on Compu- tational Natural Language Learning -Shared Task, pages 56-60, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing implicit discourse relations in the penn discourse treebank",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 343-351. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A pdtb-styled end-to-end discourse parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "02",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering, 20(02):151-184.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Jaist: A two-phase machine learning approach for identifying discourse relations in newswire texts",
"authors": [
{
"first": "Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Minh",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "66--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Son Nguyen, Quoc Ho, and Minh Nguyen. 2015. Jaist: A two-phase machine learning approach for identi- fying discourse relations in newswire texts. In Pro- ceedings of the Nineteenth Conference on Compu- tational Natural Language Learning -Shared Task, pages 66-70, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The dcu discourse parser: A sense classification task",
"authors": [
{
"first": "Tsuyoshi",
"middle": [],
"last": "Okita",
"suffix": ""
},
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "71--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsuyoshi Okita, Longyue Wang, and Qun Liu. 2015. The dcu discourse parser: A sense classification task. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning -Shared Task, pages 71-77, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Easily identifiable discourse relations",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Mridhula",
"middle": [],
"last": "Raghupathy",
"suffix": ""
},
{
"first": "Hena",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aravind K",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind K Joshi. 2008. Easily identifiable discourse relations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Gollub",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Rangel",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2014,
"venue": "Information Access Evaluation meets Multilinguality, Multimodality, and Visualization. 5th International Conference of the CLEF Initiative (CLEF 14)",
"volume": "",
"issue": "",
"pages": "268--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Tim Gollub, Francisco Rangel, Paolo Rosso, Efstathios Stamatatos, and Benno Stein. 2014. Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Iden- tification, and Author Profiling. In Evangelos Kanoulas, Mihai Lupu, Paul Clough, Mark Sander- son, Mark Hall, Allan Hanbury, and Elaine Toms, editors, Information Access Evaluation meets Mul- tilinguality, Multimodality, and Visualization. 5th International Conference of the CLEF Initiative (CLEF 14), pages 268-299, Berlin Heidelberg New York, September. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bon- nie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving a pipeline architecture for shallow discourse parsing",
"authors": [
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Haoruo",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Kordjamshidi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Sammons",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangqiu Song, Haoruo Peng, Parisa Kordjamshidi, Mark Sammons, and Dan Roth. 2015. Improv- ing a pipeline architecture for shallow discourse parsing. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning -Shared Task, pages 78-83, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The unitn discourse parser in CoNLL 2015 shared task: Token-level sequence labeling with argument-specific models",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Stepanov",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "Ali Orkan",
"middle": [],
"last": "Bayer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Stepanov, Giuseppe Riccardi, and Ali Orkan Bayer. 2015. The unitn discourse parser in CoNLL 2015 shared task: Token-level sequence labeling with argument-specific models. In Proceedings of the Nineteenth Conference on Computational Natu- ral Language Learning -Shared Task, pages 25-31, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A shallow discourse parsing system based on maximum entropy model",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Peijia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Weiqun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yonghong",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "84--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Sun, Peijia Li, Weiqun Xu, and Yonghong Yan. 2015. A shallow discourse parsing system based on maximum entropy model. In Proceedings of the Nineteenth Conference on Computational Natu- ral Language Learning -Shared Task, pages 84-88, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A refined endto-end discourse parser",
"authors": [
{
"first": "Jianxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianxiang Wang and Man Lan. 2015. A refined end- to-end discourse parser. In Proceedings of the Nine- teenth Conference on Computational Natural Lan- guage Learning -Shared Task, pages 17-24, Bei- jing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Converting continuous-space language models into n-gram language models for statistical machine translation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Eiichro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Masao Utiyama, Isao Goto, Eiichro Sumita, Hai Zhao, and Bao-Liang Lu. 2013. Convert- ing continuous-space language models into n-gram language models for statistical machine translation.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Proceedings of EMNLP",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "845--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proceedings of EMNLP, pages 845-850, Seattle, Washington, USA, October.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural network based bilingual language model growing for statistical machine translation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "189--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Hai Zhao, Bao-Liang Lu, Masao Utiyama, and Eiichiro Sumita. 2014. Neural network based bilingual language model growing for statistical ma- chine translation. In Proceedings of EMNLP, pages 189-195, Doha, Qatar, October.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning distributed word representations for bidirectional lstm recurrent neural network",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Frank Soong, Lei He, and Hai Zhao. 2016. Learning distributed word representa- tions for bidirectional lstm recurrent neural network. In Proceedings of NAACL, June.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The CoNLL-2015 shared task on shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Rashmi Prasado Christopher",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi PrasadO Christopher Bryant, and Attapol T Ruther- ford. 2015. The CoNLL-2015 shared task on shal- low discourse parsing. In Proceedings of CoNLL, page 2.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The CoNLL-2016 shared task on multilingual shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twentieth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Bon- nie Webber, Attapol Rutherford, Chuan Wang, and Hongmin Wang. 2016. The CoNLL-2016 shared task on multilingual shallow discourse parsing. In Proceedings of the Twentieth Conference on Compu- tational Natural Language Learning -Shared Task, Berlin, Germany, August. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hybrid approach to pdtb-styled discourse parsing for CoNLL-2015",
"authors": [
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning -Shared Task",
"volume": "",
"issue": "",
"pages": "95--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhisa Yoshida, Katsuhiko Hayashi, Tsutomu Hirao, and Masaaki Nagata. 2015. Hybrid approach to pdtb-styled discourse parsing for CoNLL-2015. In Proceedings of the Nineteenth Conference on Com- putational Natural Language Learning -Shared Task, pages 95-99, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.03820"
]
},
"num": null,
"urls": [],
"raw_text": "Ye Zhang and Byron Wallace. 2015. A sensitiv- ity analysis of (and practitioners' guide to) convo- lutional neural networks for sentence classification. arXiv preprint arXiv:1510.03820.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Probabilistic graph-based dependency parsing with convolutional neural network",
"authors": [
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhisong Zhang and Hai Zhao. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of ACL, Berlin, Ger- many, August.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "30--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, and Chunyu Kit. 2009a. Semantic dependency parsing of NomBank and PropBank: An efficient integrated approach via a large-scale feature selection. In Proceedings of EMNLP, pages 30-39, Singapore, August.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kity",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Wenliang Chen, Chunyu Kity, and Guodong Zhou. 2009b. Multilingual dependency learning: A huge feature engineering method to semantic de- pendency parsing. In Proceedings of CoNLL, pages 55-60, Boulder, Colorado, June.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Integrative semantic dependency parsing via efficient large-scale feature selection",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaotian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "46",
"issue": "",
"pages": "203--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Xiaotian Zhang, and Chunyu Kit. 2013. In- tegrative semantic dependency parsing via efficient large-scale feature selection. Journal of Artificial Intelligence Research, 46:203-233.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Predicting discourse connectives for implicit discourse relation recognition",
"authors": [
{
"first": "Zhi-Min",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zheng-Yu",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1507--1514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recog- nition. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1507-1514. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "System pipeline for the discourse parser"
},
"TABREF1": {
"content": "<table/>",
"text": "Non-Explicit Sense Classification on English and Chinese development sets without error propagation.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"2\">: F 1 scores (%) with different CNN filter</td></tr><tr><td colspan=\"2\">sizes for Non-Explicit on original arguments on</td></tr><tr><td>development set.</td><td/></tr><tr><td colspan=\"2\">filter-size on refined Args</td></tr><tr><td>(1,2,3)</td><td>45.11</td></tr><tr><td>(2,3,4)</td><td>44.18</td></tr><tr><td>(2,5,10)</td><td>44.97</td></tr><tr><td>(2,8,16)</td><td>43.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">WSJ Test</td><td/></tr><tr><td>Components</td><td/><td>baseline</td><td/><td/><td>our parser</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"2\">ALL Explicit connective 94.83 Non-Explicit only Parser -</td><td>-</td><td colspan=\"4\">20.74 20.66 22.11 21.36</td></tr><tr><td>All Arg1 extraction</td><td colspan=\"6\">59.20 61.03 60.10 59.67 58.29 58.97</td></tr><tr><td>All Arg2 extraction</td><td colspan=\"6\">71.43 73.64 72.52 72.82 71.13 71.97</td></tr><tr><td>All Both extration</td><td colspan=\"6\">48.62 50.13 49.36 49.10 47.96 48.52</td></tr><tr><td>All Parser</td><td colspan=\"6\">29.27 30.08 29.72 29.90 30.65 30.27</td></tr></table>",
"text": "93.49 94.16 92.42 94.88 93.63 Explicit Arg1 extraction 51.05 50.33 50.68 49.73 51.06 50.38 Explicit Arg2 extraction 77.89 76.79 77.33 75.73 77.75 76.73 Explicit Both extraction 45.54 44.90 45.22 44.31 45.49 44.90 Explicit only Parser --39.96 41.05 40.02 40.53 Non-Explicit Arg1 extraction 64.83 69.50 67.08 67.42 63.08 65.18 Non-Explicit Arg2 extraction 66.02 70.78 68.32 70.18 65.65 67.84 Non-Explicit Both extraction 51.20 54.89 52.98 53.44 50.00 51.67",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">Blind Test</td><td/></tr><tr><td>Components</td><td/><td>baseline</td><td/><td/><td>our parser</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td>ALL Explicit connective</td><td colspan=\"6\">93.48 90.29 91.86 88.67 93.73 91.13</td></tr><tr><td>Explicit Arg1 extraction</td><td colspan=\"6\">49.16 47.48 48.31 47.12 49.81 48.43</td></tr><tr><td>Explicit Arg2 extraction</td><td colspan=\"6\">75.61 73.02 74.29 71.58 75.56 73.57</td></tr><tr><td>Explicit Both extraction</td><td colspan=\"6\">42.09 40.65 41.35 40.29 42.59 41.40</td></tr><tr><td>Explicit only Parser</td><td>-</td><td>-</td><td colspan=\"4\">30.38 32.57 30.76 31.64</td></tr><tr><td colspan=\"7\">Non-Explicit Arg1 extraction 58.66 63.25 60.87 64.01 59.38 61.61</td></tr><tr><td colspan=\"7\">Non-Explicit Arg2 extraction 71.88 77.49 74.58 80.86 75.00 77.82</td></tr><tr><td colspan=\"7\">Non-Explicit Both extraction 48.58 52.37 50.41 55.44 51.42 53.35</td></tr><tr><td>Non-Explicit only Parser</td><td>-</td><td>-</td><td colspan=\"4\">18.87 18.32 19.75 19.01</td></tr><tr><td>All Arg1 extraction</td><td colspan=\"6\">55.12 56.58 55.84 56.91 55.93 56.42</td></tr><tr><td>All Arg2 extraction</td><td colspan=\"6\">73.49 75.43 74.45 76.59 75.28 75.93</td></tr><tr><td>All Both extration</td><td colspan=\"6\">45.77 46.98 46.37 48.47 47.64 48.05</td></tr><tr><td>All Parser</td><td colspan=\"6\">23.69 24.32 24.00 24.41 24.81 24.61</td></tr></table>",
"text": "Results of the Shallow Discourse Parsing task on English WSJ test set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table/>",
"text": "Results of the Shallow Discourse Parsing task on English Blind test set.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}