{
"paper_id": "K16-2006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:15.270716Z"
},
"title": "Discourse Sense Classification from Scratch using Focused RNNs",
"authors": [
{
"first": "Gregor",
"middle": [],
"last": "Weiss",
"suffix": "",
"affiliation": {},
"email": "gregor.weiss@student.uni-lj.si"
},
{
"first": "Marko",
"middle": [],
"last": "Bajec",
"suffix": "",
"affiliation": {},
"email": "marko.bajec@fri.uni-lj.si"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The subtask of CoNLL 2016 Shared Task focuses on sense classification of multilingual shallow discourse relations. Existing systems rely heavily on external resources, hand-engineered features, patterns, and complex pipelines fine-tuned for the English language. In this paper we describe a different approach and system inspired by end-to-end training of deep neural networks. Its input consists of only sequences of tokens, which are processed by our novel focused RNNs layer, and followed by a dense neural network for classification. Neural networks implicitly learn latent features useful for discourse relation sense classification, make the approach almost language-agnostic and independent of prior linguistic knowledge. In the closed-track sense classification task our system achieved overall 0.5246 F 1-measure on English blind dataset and achieved the new state-of-the-art of 0.7292 F 1-measure on Chinese blind dataset.",
"pdf_parse": {
"paper_id": "K16-2006",
"_pdf_hash": "",
"abstract": [
{
"text": "The subtask of CoNLL 2016 Shared Task focuses on sense classification of multilingual shallow discourse relations. Existing systems rely heavily on external resources, hand-engineered features, patterns, and complex pipelines fine-tuned for the English language. In this paper we describe a different approach and system inspired by end-to-end training of deep neural networks. Its input consists of only sequences of tokens, which are processed by our novel focused RNNs layer, and followed by a dense neural network for classification. Neural networks implicitly learn latent features useful for discourse relation sense classification, make the approach almost language-agnostic and independent of prior linguistic knowledge. In the closed-track sense classification task our system achieved overall 0.5246 F 1-measure on English blind dataset and achieved the new state-of-the-art of 0.7292 F 1-measure on Chinese blind dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Shallow discourse parsing is a challenging natural language processing task and sense classification is its most difficult subtask (Lin et al., 2014; Xue et al., 2015) . Given text spans for argument 1 and 2, connective, and punctuation, the goal is to predict the sense of the discourse relation that holds between them. These text spans can appear in various orders, are not necessarily continuous, can spread across multiple sentences, and sometimes connectives and punctuation are not even present. The CoNLL 2016 Shared Task (Xue et al., 2016) focuses on multilingual shallow discourse parsing based on the English Penn Dis-course TreeBank (PDTB) (Prasad et al., 2008) and Chinese Discourse TreeBank (CDTB) (Zhou and Xue, 2012) . Evaluation is performed on separate test and blind datasets on the remote TIRA evaluation system (Potthast et al., 2014) .",
"cite_spans": [
{
"start": 131,
"end": 149,
"text": "(Lin et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 150,
"end": 167,
"text": "Xue et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 530,
"end": 548,
"text": "(Xue et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 652,
"end": 673,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 712,
"end": 732,
"text": "(Zhou and Xue, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 832,
"end": 855,
"text": "(Potthast et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing systems for discourse parsing rely heavily on existing resources, hand-engineered features, patterns, and complex pipelines finetuned for the English language (Xue et al., 2015; Wang and Lan, 2015; Stepanov et al., 2015) . Such features include word lists, part-of-speech tags, chunking tags, syntactic features extracted from constituent parse trees, path features built around connectives or specific words, production rules, dependency rules, Brown cluster pairs, features that disambiguate problematic connectives, and similar. Similar to our system, these pipelines separately process explicit and non-explicit discourse relation types.",
"cite_spans": [
{
"start": 168,
"end": 186,
"text": "(Xue et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 187,
"end": 206,
"text": "Wang and Lan, 2015;",
"ref_id": "BIBREF8"
},
{
"start": 207,
"end": 229,
"text": "Stepanov et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we describe a different approach and system inspired by end-to-end training of deep neural networks. Instead of engineering features and incorporating linguistic knowledge into them, its input consists of only sequences of tokens. They are processed by a neural network model that utilizes our novel focused recurrent neural networks (RNNs). It automatically learns latent features and how to allocate focus for our task. This way the system is independent of any prior knowledge, existing parsers, or external resources, what makes it almost language-agnostic. By only changing a few hyper-parameters, we successfully applied the same system to the English and Chinese datasets and achieved new state-of-the-art results on the Chinese blind dataset. Our system 1 was developed in Python using the Keras library (Chollet, 2015) that enables it to run on either CPU or GPU. The system architecture is described in Section 2, followed by details of layers in our neural network and their training. Section 3 presents official evaluation results on English and Chinese datasets. Section 4 draws conclusions and directions for future work.",
"cite_spans": [
{
"start": 826,
"end": 841,
"text": "(Chollet, 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system for discourse sense classification of the CoNLL 2016 Shared Task consists of two similar neural network models build from three types of layers (see Figure 1 ). In the spirit of end-to-end training its input consists of only tokenized text spans that are mapped to vocabulary ids, which are processed by our neural network to classify each discourse relation into a sense category.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Important steps of our system are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "\u2022 Two models for separately handling present and absent connectives in discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "\u2022 Input consists of four sequences of tokens mapped to vocabulary ids (for argument 1 and 2, connectives, and punctuations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "\u2022 Word Embeddings layer maps each token into a low-dimensional vector space using a lookup table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "\u2022 Focused RNNs layer focuses multiple RNNs onto different aspects of these sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "\u2022 Classification is performed with a dense neural network and logistic regression on top.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "We used the same system on the English and Chinese datasets and each one uses two separate neural network models with only a few differences in its 18 parameters. Because of these differences, individual models are trained and applied completely separately, although parts could be shared. Total number of trainable weights for both neural network models is 1355661/1185006 for English and 369972/1276761 for Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "According to suggestions from related work we separately handle discourse relations with and without given connectives. For each case we train a separate neural network model with the same architecture, but different hyper-parameters. Throughout the paper we present those differences in parameters with a/b, where a presents a value ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two models",
"sec_num": "2.1"
},
{
"text": "Initially a vocabulary of all words or tokens in the training dataset is prepared mapping each one to a unique token id. Four text spans representing individual shallow discourse relations are tokenized and mapped into four sequences of vocabulary ids. Depending on the language these input sequences are cropped to different maximal lengths, see Table 1 . Out-of-vocabulary words that are not present during training are mapped to a special id.",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Input",
"sec_num": "2.2"
},
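The token-to-id mapping, cropping, and out-of-vocabulary handling described above can be sketched as follows. This is a minimal illustration; the exact tokenization and the reserved ids for padding and unknown tokens are our assumptions, not taken from the paper.

```python
UNK_ID = 1  # hypothetical id reserved for out-of-vocabulary tokens (0 = padding)

def build_vocab(token_seqs, first_id=2):
    """Map each distinct training token to a unique vocabulary id."""
    vocab = {}
    for seq in token_seqs:
        for tok in seq:
            if tok not in vocab:
                vocab[tok] = first_id + len(vocab)
    return vocab

def encode(tokens, vocab, max_len):
    """Crop to max_len, map unseen tokens to UNK_ID, right-pad with zeros."""
    ids = [vocab.get(tok, UNK_ID) for tok in tokens[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

Under this sketch, an English argument span would be encoded with max_len=100 and a connective span with max_len=10, matching Table 1.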
{
"text": "Relation part English Chinese Argument 1 100 500 Argument 2 100 500 Connective 10 10 Punctuation 2 2 Table 1 : Maximal lengths of input sequences in our system for English and Chinese datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Input",
"sec_num": "2.2"
},
{
"text": "A shared word embedding layer turns previous sequences of positive integers (token ids) into dense vectors of fixed size using a lookup table. These vector representations are automatically learned with the rest of the model using backpropagation. All four input sequences are mapped into the same low-dimensional vector space with 30/20 dimensions for English and 20/70 for Chinese. For regu-larization purposes we randomly drop embeddings during training with probability 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings",
"sec_num": "2.3"
},
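A lookup-table embedding with whole-embedding dropout, as described above, can be sketched in plain Python. The uniform initialization range and the seeding are illustrative assumptions; in the actual system the table is a trainable Keras layer.

```python
import random

def make_embedding_table(vocab_size, dim, seed=0):
    """Lookup table: one dim-dimensional vector per token id."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(vocab_size)]

def embed(token_ids, table, drop_p=0.1, rng=None):
    """Look up each id; during training, drop whole embeddings with prob. drop_p."""
    rng = rng or random.Random(0)
    vectors = []
    for i in token_ids:
        vec = table[i]
        if drop_p > 0 and rng.random() < drop_p:
            vec = [0.0] * len(vec)  # embedding dropout zeroes the whole vector
        vectors.append(vec)
    return vectors
```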
{
"text": "Although the closed-track allowed the use of pre-trained skip-gram neural word embeddings (Mikolov et al., 2013) , we decided to learn them from scratch for each model separately.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings",
"sec_num": "2.3"
},
{
"text": "These embeddings are processed by our novel focused RNNs layer. Any recurrent neural network (RNN) can be used as its building block, but we decided to use the GRU layer (Chung et al., 2014) . First a special focus RNN with 4/6 dimensions for English and 4/5 for Chinese is used to assign multidimensional focus weights to the input sequence. For each focus dimension a separate RNN is applied to the input sequence multiplied with corresponding focus weights. This way different RNNs can focus on different aspects of input sequencesin our case on different words and senses. Final outputs of these RNNs are concatenated and used in the classification layers. Our system uses separate RNNs with 10/50 dimensions for English and 20/30 for Chinese. For regularization purposes we randomly drop 0.33 input gates of focus and separate RNNs, 0.66 recurrent connections of the focus RNN, and 0.33 of separate RNNs.",
"cite_spans": [
{
"start": 170,
"end": 190,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Focused RNNs",
"sec_num": "2.4"
},
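Our reading of the focused RNNs data flow can be sketched as follows. For brevity, a per-step sigmoid stands in for the focus GRU and a fixed-weight tanh RNN stands in for each separate GRU; only the structure (per-dimension focus weighting, one RNN per focus dimension, concatenated final states) reflects the paper, not the learned parameters.

```python
import math

def rnn_last(seq, dim):
    """Stand-in RNN with fixed weights; returns the final hidden state."""
    h = [0.0] * dim
    for x in seq:
        s = sum(x)
        h = [math.tanh(0.1 * s + 0.5 * hj) for hj in h]
    return h

def focused_rnns(seq, k, dim):
    """seq: list of input vectors; k: focus dimensions; dim: per-RNN size."""
    # 1) focus weights in (0, 1) per time step and focus dimension
    focus = [[1.0 / (1.0 + math.exp(-sum(x))) for _ in range(k)] for x in seq]
    # 2) each of the k RNNs reads the input re-weighted by its focus dimension
    outputs = []
    for d in range(k):
        weighted = [[w * focus[t][d] for w in x] for t, x in enumerate(seq)]
        outputs.extend(rnn_last(weighted, dim))
    return outputs  # concatenated final states, length k * dim
```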
{
"text": "Note that our focused RNNs layer differs a lot from other attention mechanisms found in literature. They are designed to only work with question-answering systems, use a weighted combination of all input states, and can focus on only one aspect of the input sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focused RNNs",
"sec_num": "2.4"
},
{
"text": "Classification into discourse sense categories is performed using a dense neural network. Merged outputs of all focused RNNs are first processed by a dense layer with 90/40 dimensions for English and 100/90 for Chinese, followed by the SReLU activation function (Jin et al., 2015) . The S-shaped rectified linear activation unit (SReLU) consists of piecewise linear functions and can learn both convex and non-convex functions. Finally logistic regression, i.e. a dense layer followed by the softmax activation function, is applied to get classification probabilities. For regularization purposes we randomly drop connections before the second dense layers with probability 0.5.",
"cite_spans": [
{
"start": 262,
"end": 280,
"text": "(Jin et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "2.5"
},
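For reference, the SReLU of Jin et al. (2015) is piecewise linear with two thresholds and two outer slopes, all learnable; a sketch with illustrative (not learned) parameter values:

```python
def srelu(x, t_l=-1.0, a_l=0.1, t_r=1.0, a_r=2.0):
    """S-shaped rectified linear unit: identity between the two thresholds,
    linear with slope a_l below t_l and slope a_r above t_r."""
    if x >= t_r:
        return t_r + a_r * (x - t_r)
    if x <= t_l:
        return t_l + a_l * (x - t_l)
    return x
```

Because the outer slopes can differ from 1, the unit can bend upward or downward on either side, which is what lets it approximate both convex and non-convex functions.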
{
"text": "Loss function suitable for our classification task is the categorical cross-entropy. Training is achieved with backpropagation and any gradient descent optimization, such as Adam optimizer. To parallelize and speed up the learning process we train in batches of 64 training samples. During training we monitor the loss function on the validation dataset and stop if it does not increase in the last 20 epochs. For regularization purposes we also introduce 32 random noise samples for each discourse relation during training. Weights used by the resulting system are those with the best encountered validation loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.6"
},
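The stopping rule above (a patience of 20 epochs on the validation loss, keeping the best weights) can be sketched as follows; the epoch loop is mocked by a precomputed list of per-epoch validation losses.

```python
def best_with_patience(val_losses, patience=20):
    """Return (best_epoch, best_loss); stop once `patience` epochs pass
    without the validation loss improving on its best value so far."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            break  # early stopping triggered
    return best_epoch, best_loss
```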
{
"text": "Datasets used by the CoNLL 2016 Shared Task consist of PDTB for English, CDTB for Chinese, and two unknown blind test datasets from Wikinews. For each language there is a train dataset for training models, validation dataset for monitoring the learning process, and test and blind test datasets for evaluating its performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "Metric used for this subtask of CoNLL 2016 Shared Task is the F 1 -measure. It is computed based on the number of predicted discourse relation senses that match a gold standard relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "The training dataset from PDTB for English consists of 1756 documents with 15246 discourse relations that can be categorized into 15 different discourse relation senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for English",
"sec_num": "3.1"
},
{
"text": "Overall our system performs pretty well on all English datasets (see Table 2 ) despite not using any external resources or hand-engineered features. As expected it performs best on the validation dataset, achieves slightly lower scores (0.5845) on the test dataset, and performs the worst on the blind dataset (0.5246) that contains a different writing style than PDTB. For only explicit relations our system performs much better, close to inter-annotator agreement (91%) on development and test datasets, but without using any word lists or patterns like other systems. On the other hand non-explicit relations seem to be a much harder problem and the relatively small size of the training dataset does not contain enough information.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results for English",
"sec_num": "3.1"
},
{
"text": "Detailed per-sense analysis on all discourse relations is shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for English",
"sec_num": "3.1"
},
{
"text": "We see Table 3 : Per-sense F 1 -measures of discourse relation sense classification evaluated on all relations on English datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 14,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for English",
"sec_num": "3.1"
},
{
"text": "The training dataset from CDTB for Chinese consists of 455 documents with 2445 discourse relations that can be categorized into 10 different discourse relation senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Chinese",
"sec_num": "3.2"
},
{
"text": "Overall our system performs pretty well on all Chinese datasets (see Table 4 ) despite not using any external resources or hand-engineered features. Its overall performance is almost consistent across the validation, test (0.7011), and blind (0.7292) datasets, although the last one probably contains a different writing style than CDTB. For only explicit relations our system performs much better on development and test datasets. For nonexplicit relations the situation seems to be the opposite. This inconsistencies indicate that the relatively small size of the training dataset does not contain enough information. (Xue et al., 2016) .",
"cite_spans": [
{
"start": 620,
"end": 638,
"text": "(Xue et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results for Chinese",
"sec_num": "3.2"
},
{
"text": "Detailed per-sense analysis on all discourse relations is shown in Table 5 . We see that our system performs consistently well on Conjunction, Conditional, and Temporal, but does not perform at all on Alternative, EntRel, and Progression, because of insufficient number of samples. ",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Type",
"sec_num": null
},
{
"text": "We have shown that it is possible to implement a shallow discourse relation sense classifier that does not depend on any external sources, handengineered features, patterns, and complex fine-tuned pipelines. Our system consists of two neural network models built from three types of layers and is trained end-to-end. As a consequence it is almost language-agnostic and we have evaluated its performance on the English and Chinese datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://github.com/gw0/conll16st-v34-focused-rnns/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv, pages 1-9.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep Learning with S-shaped Rectified Linear Activation Units",
"authors": [
{
"first": "Xiaojie",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Chunyan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiashi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yunchao",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Junjun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Shuicheng",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, and Shuicheng Yan. 2015. Deep Learning with S-shaped Rectified Linear Activation Units.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A PDTB-Styled End-to-End Discourse Parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Nat. Lang. Eng",
"volume": "20",
"issue": "2",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A PDTB-Styled End-to-End Discourse Parser. Nat. Lang. Eng., 20(2):151-184.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Distributed Representations of Words and Phrases and their Compositionality. Nips",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. Nips, pages 1-9.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Gollub",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Rangel",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2014,
"venue": "Inf. Access Eval. meets Multilinguality, Multimodality, Vis. 5th Int. Conf. CLEF Initiat. (CLEF 14)",
"volume": "",
"issue": "",
"pages": "268--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Tim Gollub, Francisco Rangel, Paolo Rosso, Efstathios Stamatatos, and Benno Stein. 2014. Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Iden- tification, and Author Profiling. In Evangelos Kanoulas, Mihai Lupu, Paul Clough, Mark Sander- son, Mark Hall, Allan Hanbury, and Elaine Toms, editors, Inf. Access Eval. meets Multilinguality, Mul- timodality, Vis. 5th Int. Conf. CLEF Initiat. (CLEF 14), pages 268-299, Berlin Heidelberg New York, sep. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Penn Discourse TreeBank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. Sixth Int. Conf. Lang. Resour. Eval",
"volume": "",
"issue": "",
"pages": "2961--2968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. Proc. Sixth Int. Conf. Lang. Resour. Eval., pages 2961-2968.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The UniTN Discourse Parser in CoNLL 2015 Shared Task: Token-level Sequence Labeling with Argument-specific Models",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Stepanov",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "Ali Orkan",
"middle": [],
"last": "Bayer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Ninet. Conf. Comput. Nat",
"volume": "",
"issue": "",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Stepanov, Giuseppe Riccardi, and Ali Orkan Bayer. 2015. The UniTN Discourse Parser in CoNLL 2015 Shared Task: Token-level Sequence Labeling with Argument-specific Models. Proc. Ninet. Conf. Comput. Nat. Lang. Learn. -Shar. Task, (Dcd):25-31.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A refined end-toend discourse parser",
"authors": [
{
"first": "Jianxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Ninet. Conf. Comput. Nat",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianxiang Wang and Man Lan. 2015. A refined end-to- end discourse parser. In Proc. Ninet. Conf. Comput. Nat. Lang. Learn. Shar. Task, pages 17-24.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The CoNLL-2015 Shared Task on Shallow Discourse Parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Attapol",
"middle": [
"T"
],
"last": "Bryant",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Ninet. Conf. Comput",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol T. Ruther- ford. 2015. The CoNLL-2015 Shared Task on Shal- low Discourse Parsing. In Proc. Ninet. Conf. Com- put. Nat. Lang. Learn. Shar. Task, pages 1-16.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The CoNLL-2016 Shared Task on Multilingual Shallow Discourse Parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Twent. Conf. Comput. Nat. Lang. Learn. -Shar. Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Bon- nie Webber, Attapol Rutherford, Chuan Wang, and Hongmin Wang. 2016. The CoNLL-2016 Shared Task on Multilingual Shallow Discourse Parsing. In Proc. Twent. Conf. Comput. Nat. Lang. Learn. - Shar. Task, Berlin, Germany, aug. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "PDTB-style Discourse Annotation of Chinese Text",
"authors": [
{
"first": "Yuping",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. 50th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuping Zhou and Nianwen Xue. 2012. PDTB-style Discourse Annotation of Chinese Text. Proc. 50th",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Our neural network model for end-toend training of sense classification. Two such models are separately trained for each language.used for Explicit and AltLex relation types (where connectives are present) and b for Implicit and En-tRel relation types (where connectives are absent).",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td colspan=\"4\">: Overall F 1 -measures of discourse relation</td></tr><tr><td colspan=\"4\">sense classification evaluated on different relation</td></tr><tr><td colspan=\"4\">types on English datasets from our and best com-</td></tr><tr><td colspan=\"4\">peting system of CoNLL 2016 Shared Task (Xue</td></tr><tr><td>et al., 2016).</td><td/><td/></tr><tr><td colspan=\"4\">that our system performs consistently well</td></tr><tr><td colspan=\"4\">on Contingency.Condition, Temporal.Async.Precedence,</td></tr><tr><td colspan=\"4\">and Temporal.Async.Succession, but fails on Com-</td></tr><tr><td colspan=\"4\">parison.Concession, Expansion.Instantiation, and Expan-</td></tr><tr><td>sion.Restatement.</td><td/><td/></tr><tr><td>Sense</td><td>Dev</td><td colspan=\"2\">Test Blind</td></tr><tr><td>Comparison.Concession</td><td colspan=\"3\">0.2000 0.2105 0.0370</td></tr><tr><td>Comparison.Contrast</td><td colspan=\"3\">0.7696 0.7690 0.3077</td></tr><tr><td colspan=\"4\">Contingency.Cause.Reason 0.4087 0.5155 0.3556</td></tr><tr><td>Contingency.Cause.Result</td><td colspan=\"3\">0.4490 0.4216 0.4110</td></tr><tr><td>Contingency.Condition</td><td colspan=\"3\">0.9318 0.8966 0.9811</td></tr><tr><td>EntRel</td><td colspan=\"3\">0.5458 0.4523 0.5228</td></tr><tr><td>Expansion.Alt</td><td colspan=\"3\">0.9231 0.9091 0.5455</td></tr><tr><td>Expansion.Alt.Chosen alt.</td><td colspan=\"2\">0.7692 0.2000</td><td>-</td></tr><tr><td>Expansion.Conjunction</td><td colspan=\"3\">0.7015 0.6938 0.7432</td></tr><tr><td>Expansion.Instantiation</td><td colspan=\"3\">0.2899 0.4496 0.2041</td></tr><tr><td>Expansion.Restatement</td><td colspan=\"3\">0.2748 0.2584 0.2378</td></tr><tr><td colspan=\"4\">Temporal.Async.Precedence 0.7812 0.8706 0.8409</td></tr><tr><td colspan=\"4\">Temporal.Async.Succession 0.8211 0.7611 0.8468</td></tr><tr><td>Temporal.Synchrony</td><td colspan=\"3\">0.7931 0.6889 0.6034</td></tr><tr><td>Overall (micro-average)</td><td colspan=\"3\">0.6136 
0.5845 0.5246</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Overall F 1 -measures of discourse relation sense classification evaluated on different relation types on Chinese datasets from our and best competing system of CoNLL 2016 Shared Task",
"html": null,
"content": "<table/>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Per-sense F 1 -measures of discourse relation sense classification evaluated on all relations on Chinese datasets.",
"html": null,
"content": "<table/>"
}
}
}
}