{
"paper_id": "W16-0313",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:59:59.257451Z"
},
"title": "Data61-CSIRO systems at the CLPsych 2016 Shared Task",
"authors": [
{
"first": "Sunghwan",
"middle": [
"Mac"
],
"last": "Kim",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yufei",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Queensland",
"location": {
"settlement": "Brisbane",
"country": "Australia"
}
},
"email": "yufei.wang1@uq.net.au"
},
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": "",
"affiliation": {},
"email": "stephen.wan@csiro.au"
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Paris",
"suffix": "",
"affiliation": {},
"email": "cecile.paris@csiro.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the Data61-CSIRO text classification systems submitted as part of the CLPsych 2016 shared task. The aim of the shared task is to develop automated systems that can help mental health professionals with the process of triaging posts with ideations of depression and/or self-harm. We structured our participation in the CLPsych 2016 shared task in order to focus on different facets of modelling online forum discussions: (i) vector space representations; (ii) different text granularities; and (iii) fine- versus coarse-grained labels indicating concern. We achieved an F1-score of 0.42 using an ensemble classification approach that predicts fine-grained labels of concern. This was the best score obtained by any submitted system in the 2016 shared task. * This work was performed while Yufei was at CSIRO.",
"pdf_parse": {
"paper_id": "W16-0313",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the Data61-CSIRO text classification systems submitted as part of the CLPsych 2016 shared task. The aim of the shared task is to develop automated systems that can help mental health professionals with the process of triaging posts with ideations of depression and/or self-harm. We structured our participation in the CLPsych 2016 shared task in order to focus on different facets of modelling online forum discussions: (i) vector space representations; (ii) different text granularities; and (iii) fine- versus coarse-grained labels indicating concern. We achieved an F1-score of 0.42 using an ensemble classification approach that predicts fine-grained labels of concern. This was the best score obtained by any submitted system in the 2016 shared task. * This work was performed while Yufei was at CSIRO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The aim of the shared task is to research and develop automatic systems that can help mental health professionals with the process of triaging posts with ideations of depression and/or self-harm. We structured our participation in the CLPsych 2016 shared task in order to focus on different facets of modelling online forum discussions: (i) vector space representations (TF-IDF vs. embeddings); (ii) different text granularities (e.g., sentences vs posts); and (iii) fine- versus coarse-grained (FG and CG, respectively) labels indicating concern.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(i) For our exploration of vector space representations, we explored the traditional TF-IDF feature representation that has been widely applied to NLP. We also investigated the use of post embeddings, which have recently attracted much attention as feature vectors for representing text (Zhou et al., 2015; Salehi et al., 2015) . Here, as in other related work (Guo et al., 2014) , the post embeddings are learned from the unlabelled data as features for supervised classifiers. (ii) Our exploration of text granularity focuses on classifiers for sentences as well as posts. For the sentence-level classifiers, a post is split into sentences as the basic unit of annotation using a sentence segmenter. (iii) To explore the granularity of labels indicating concern, we note that the data includes a set of 12 FG labels representing factors that assist in deciding on whether a post is concerning or not. These are in addition to 4 CG labels.",
"cite_spans": [
{
"start": 287,
"end": 306,
"text": "(Zhou et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 307,
"end": 327,
"text": "Salehi et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 361,
"end": 379,
"text": "(Guo et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We trained 6 single classifiers based on different combinations of vector space features, text granularities and label sets. We also explored ensemble classifiers (based on these 6 single classifiers), as this is a way of combining the strengths of the single classifiers. We used one of two ensemble methods: majority voting and probability scores over labels. We submitted five different systems as submissions to the shared task. Two of them were based on single classifiers, whereas the remaining three systems used ensemble-based classifiers. We achieved an F1-score of 0.42 using an ensemble classification approach that predicts FG labels of concern. This was the best score obtained by any submitted system in the 2016 shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organised as follows: Section 2 briefly discusses the data of the shared task. Section 3 presents the details of the systems we sub-mitted. Section 4 then shows experimental results. Finally, we summarise our findings in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The dataset used in the shared task is a collection of online posts crawled from a mental health forum, ReachOut.com 1 , collected by the shared task annotators, who then labelled each discussion post with one of 4 CG labels: Green, Amber, Red and Crisis, describing how likely a post is to require the attention of a mental health professional. Each post is also annotated with one of 12 FG labels, which are mapped deterministically to one of the 4 CG labels according to the relationships presented in Table 1 (which also provides the frequencies of these relationships). For instance, a post labelled with Red could be labelled with one of 4 FG labels: angryWithForumMember, angryWithReachout, currentAcuteDistress and followupWorse. As can be seen in the table, the dataset is imbalanced since it contains more Green-labelled posts than posts with any other label.",
"cite_spans": [],
"ref_spans": [
{
"start": 505,
"end": 512,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The corpus consists of 65,024 posts, and it is subdivided into labelled (947) and unlabelled (64,077) data. The final test data contains an extra 241 forum posts. Each post is provided in an XML file and each post file contains metadata, such as the number of \"likes\" a post received from the online community. The shared task requires each submitted system to predict a label for each of the test posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "In addition to the post data, the data set contains anonymised metadata about post authors, which indicates whether authors were affiliated with ReachOut, either as a community moderator or a site administrator. Specifically, this metadata contains anonymised author IDs and their forum ranking. In total, there were 1,640 unique authors and 20 author rankings on the forums. Each author has one of the 20 rankings. Seven ranking types indicate affiliation with ReachOut, whereas the other 13 represent members of the general public.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We performed several text pre-processing steps prior to feature extraction in order to reduce the noisiness of the original forum posts. We removed HTML special characters, non-ASCII characters and stop words, and all tokens were lower-cased. We used NLTK (Bird et al., 2009) to segment sentences for the sentence-level classifiers, producing 4,305 sentences from the 947 posts.",
"cite_spans": [
{
"start": 256,
"end": 275,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Pre-processing",
"sec_num": "3.1"
},
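The pre-processing steps above can be sketched as follows. This is an illustrative stand-in, not the authors' code: the paper used NLTK's sentence segmenter, whereas the regex-based splitter and the tiny stop-word list here are simplifying assumptions.

```python
import re

# Illustrative subset only; a real stop-word list (e.g. NLTK's) is much larger.
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "i"}

def clean_post(text):
    """Remove HTML special characters/tags, non-ASCII characters and stop
    words, and lower-case all tokens."""
    text = re.sub(r"&[a-z]+;|<[^>]+>", " ", text)                 # HTML remnants
    text = text.encode("ascii", errors="ignore").decode("ascii")  # drop non-ASCII
    tokens = [t.lower() for t in re.findall(r"[A-Za-z0-9']+", text)]
    return [t for t in tokens if t not in STOP_WORDS]

def split_sentences(text):
    """Naive sentence segmenter standing in for NLTK's punkt model."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
```

For example, `clean_post("The day is &amp; hard")` yields `["day", "hard"]`, and `split_sentences` would turn each cleaned post into the sentence units used by the sentence-level classifiers.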
{
"text": "We used two types of feature representations for the text: TF-IDF and post embeddings. The TF-IDF feature vectors of unigrams were generated from the labelled dataset, whereas the embeddings were obtained using both the labelled and unlabelled datasets using sent2vec (Le and Mikolov, 2014) . We obtained the embeddings for the whole post directly, instead of combining the embeddings for the individual words of the post, due to the superior performance of document embeddings (Sun et al., 2015; Tang et al., 2015) . In our preliminary investigations, we explored various kinds of features such as bi- and trigrams, metadata from the posts (such as the number of views of a post or the author's affiliation with ReachOut) and orthographic features (for example, the presence of emoticons, punctuation, etc.), but we did not obtain any performance benefits with respect to intrinsic evaluations on the training data.",
"cite_spans": [
{
"start": 263,
"end": 285,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF2"
},
{
"start": 471,
"end": 489,
"text": "(Sun et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 490,
"end": 508,
"text": "Tang et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
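As a minimal illustration of the unigram TF-IDF representation described above (the paper does not specify the exact weighting scheme, so the raw-term-frequency-times-smoothed-IDF variant here is an assumption):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: a list of token lists, one per document. Returns one sparse
    {term: weight} dict per document, weighting each unigram by its raw term
    frequency times a smoothed inverse document frequency."""
    n_docs = len(docs)
    df = Counter()                 # number of documents each term occurs in
    for doc in docs:
        df.update(set(doc))
    return [
        {t: tf[t] * math.log((1 + n_docs) / (1 + df[t])) for t in tf}
        for tf in (Counter(doc) for doc in docs)
    ]
```

A term that appears in every document receives zero weight, while rarer terms are weighted up; e.g. for `[["sad", "alone"], ["sad", "happy"]]`, "alone" outweighs "sad" in the first document.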
{
"text": "For the text classifiers, we trained a MaxEnt model using scikit-learn's SGDClassifier (Pedregosa et al., 2011) with the log loss function and a learning rate of 0.0001 as our classifier for all experiments. In the training phase, the weights of SGDClassifier are optimised using stochastic gradient descent (SGD) by minimising a given loss function, and L2 regularisation is employed to avoid overfitting. The log loss function in SGDClassifier allows us to obtain the probability score of a label at prediction time.",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "We developed classifiers for two granularities of text: (i) entire posts, and (ii) sentences in posts. For the latter, we post-processed the predicted sentence-level labels to produce post-level labels (to be consistent with the shared task). We obtained distributions of probabilities over the label sets for each sentence, and then summed the distributions for all sentences in a post. This provided a final distribution of probabilities over labels for a post. The label with the highest probability was then taken as the inferred label for the post.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
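The sentence-to-post aggregation above reduces to summing the per-sentence label distributions and taking the argmax. A minimal sketch, with made-up probabilities:

```python
from collections import defaultdict

def post_label_from_sentences(sentence_distributions):
    """sentence_distributions: one {label: probability} dict per sentence of a
    post. Sums the distributions across sentences and returns the label with
    the highest total probability mass."""
    totals = defaultdict(float)
    for dist in sentence_distributions:
        for label, p in dist.items():
            totals[label] += p
    return max(totals, key=totals.get)

# Two sentences of one post: the first alone would favour Green, but the
# summed mass (Green 0.6, Amber 1.0, Red 0.4) yields Amber for the post.
sentences = [
    {"Green": 0.5, "Amber": 0.4, "Red": 0.1},
    {"Green": 0.1, "Amber": 0.6, "Red": 0.3},
]
```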
{
"text": "To perform the post-processing steps above, we used the distributions for labels produced by the MaxEnt model. That is, the model can be used to provide estimates for the probabilities of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "\u2022 CG labels given a post, P(CG label|post);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "\u2022 CG labels given a sentence, P(CG label|sentence);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "\u2022 FG labels given a post, P(FG label|post); and \u2022 FG labels given a sentence, P(FG label|sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "We also developed classifiers for the CG and FG label sets. In the case of the FG set, we again performed post-processing steps to produce CG labels. In this case, we deterministically reduced the predicted 12 labels to the 4 CG labels, using the mapping presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
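The deterministic FG-to-CG reduction is just a lookup in the mapping given in Table 1; as a sketch:

```python
# FG -> CG label mapping, as given in Table 1 of the paper.
FG_TO_CG = {
    "allClear": "Green", "followupBye": "Green", "supporting": "Green",
    "underserved": "Amber", "currentMildDistress": "Amber",
    "followupOk": "Amber", "pastDistress": "Amber",
    "angryWithForumMember": "Red", "angryWithReachout": "Red",
    "currentAcuteDistress": "Red", "followupWorse": "Red",
    "crisis": "Crisis",
}

def to_coarse(fg_label):
    """Deterministically reduce a predicted FG label to its CG label."""
    return FG_TO_CG[fg_label]
```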
{
"text": "This allowed us to experiment with different combinations of the 3 facets, described in Section 1. We built 6 classifiers based on the combination of the configurations described so far as follows: C1. post-level TF-IDF classifier using 4 labels C2. post-level embedding classifier using 4 labels C3. sentence-level TF-IDF classifier using 4 labels C4. post-level TF-IDF classifier using 12 labels C5. post-level embedding classifier using 12 labels C6. sentence-level TF-IDF classifier using 12 labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifiers",
"sec_num": "3.3"
},
{
"text": "One reason why the ensemble approaches may work well is that, even if a classifier does not pick the correct label, the probabilities for all labels can still be taken as input to the ensemble approach. For example, although a classifier may have chosen a label incorrectly, the correct label could have had the second highest probability score, which, when combined with information from other classifiers, may lead to the correct label being assigned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembles",
"sec_num": "3.4"
},
{
"text": "As mentioned in Section 1, the outputs of the ensemble models were produced using one of two ensemble methods: majority voting and probability scores over labels. In the majority voting method, each classifier votes for a single label, and the label with the highest number of votes is selected as the final decision. The second ensemble method uses an estimate of the posterior probability for each label from the individual classifiers, and the label with the highest sum of probabilities is chosen as the final prediction. Neither ensemble method requires any parameter tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembles",
"sec_num": "3.4"
},
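Both ensemble rules are parameter-free and can be sketched in a few lines (illustrative code with made-up distributions, not the authors' implementation):

```python
from collections import Counter, defaultdict

def majority_vote(predicted_labels):
    """Each classifier casts one vote; the most frequent label wins."""
    return Counter(predicted_labels).most_common(1)[0][0]

def probability_sum(distributions):
    """Each classifier supplies a {label: probability} dict; the label with
    the highest summed probability wins."""
    totals = defaultdict(float)
    for dist in distributions:
        for label, p in dist.items():
            totals[label] += p
    return max(totals, key=totals.get)

# Only one of these three classifiers ranks Red first, yet the summed
# probabilities (Red 1.2, Green 0.95, Amber 0.85) select Red: second-highest
# scores contribute, unlike in plain voting.
dists = [
    {"Green": 0.40, "Red": 0.35, "Amber": 0.25},
    {"Amber": 0.40, "Red": 0.35, "Green": 0.25},
    {"Red": 0.50, "Green": 0.30, "Amber": 0.20},
]
```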
{
"text": "Five different systems were adopted for our submissions to the shared task. Two were based on a single MaxEnt classifier, whereas the remaining three systems used ensemble-based classifiers. The two single classifiers were as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "3.5"
},
{
"text": "1. a single classifier C1 (Post-tfidf-4labels) 2. a single classifier C6 (Sent-tfidf-12labels) The three ensemble classifiers were:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "3.5"
},
{
"text": "3. an ensemble classifier combining all six C1-C6 by majority voting (Ensb-6classifiers-mv) 4. an ensemble classifier combining C1, C2, C3 by posterior probabilities (Ensb-3classifiers-4labels-prob) 5. an ensemble classifier combining C4, C5, C6 by posterior probabilities (Ensb-3classifiers-12labels-prob) The Post-tfidf-4labels system uses a standard approach, predicting 4 CG labels for posts using the TF-IDF feature representation. The Sent-tfidf-12labels system predicts 12 fine-grained labels for sentences using the same feature representation. The Ensb-6classifiers-mv system combines the judgements of all six MaxEnt classifiers described in Section 3.3 through majority voting. (Table 3: Results for the test set. The filter decides whether the label of a forum post is green or not, i.e., non-green vs. green.) The remaining two systems, Ensb-3classifiers-4labels-prob and Ensb-3classifiers-12labels-prob, use the sum of label probabilities estimated from the individual classifiers to select the most probable label. The main difference between the two systems is the estimation of probability scores at different levels of label granularity (CG labels vs. FG labels).",
"cite_spans": [],
"ref_spans": [
{
"start": 708,
"end": 715,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Submitted Systems",
"sec_num": "3.5"
},
{
"text": "In this section, we present two evaluation results: the cross-validation results and the final test results. We performed 5-fold cross-validation on the training set (947 labelled posts). We also report the shared task evaluation scores for the five systems on the test set of 241 posts. These are shown in Table 2 where scores are computed for three labels: Amber, Red and Crisis (but not Green), since this is the official evaluation metric in the shared task. We observe that two of the ensemble systems (Ensb-6classifiers-mv and Ensb-3classifiers-12labels-prob) show higher F1-scores than the others in the cross-validation experiments. In particular, Ensb-3classifiers-12labels-prob performs best both in the cross-validation experiment (0.37) and the main competition (0.42).",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "Somewhat surprisingly, the first system, Post-tfidf-4labels, gave us an F1-score of 0.39 on the test data, while its F1-score was the lowest in the cross-validation experiment. This result indicates that good performance is possible on the test dataset using a \"textbook\" TF-IDF classifier, but further investigation is required to understand why the official test result differs from our cross-validation result. Table 3 shows the superior performance of the Ensb-3classifiers-12labels-prob, with respect to the other systems, in terms of F1 and accuracy. It achieved the highest accuracy (0.85) for the three labels. Furthermore, it is a robust system for identifying the non-concerning label, Green.",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "It is interesting to see that the F1-score was improved by performing the harder classification task of 12 labels compared to 4-label classification. We compare the performance of the Ensb-3classifiers-4labels-prob and Ensb-3classifiers-12labels-prob systems on the test data per label, as shown in Table 4, to shed light on why the 12-labelling system has superior performance. Both systems were unable to detect any Crisis-labelled posts. A notable difference between the two systems is that the Ensb-3classifiers-12labels-prob system produces significantly higher recall (0.63) than the Ensb-3classifiers-4labels-prob system (0.33). In addition, the Ensb-3classifiers-12labels-prob system has a higher precision for finding Amber posts. These results consequently led to overall better F1, as shown in Table 3, and suggest that identifying Green and Amber posts for a user-in-the-loop scenario may be one way to help moderators save time in triaging posts.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 411,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 908,
"end": 915,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "We applied single and ensemble classifiers to the task of classifying online forum posts based on the likelihood of a mental health professional being required to intervene in the discussion. We achieved an F1-score of 0.42 with a system that combined post and sentence-level classifications through probability scores to produce FG labels. This was the best score obtained by any submitted system in the 2016 shared task. The experimental results suggest that identifying Green and Amber posts for a user-in-the-loop scenario may be one way to help moderators save time in triaging posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank the organisers of the shared task for their support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural Language Processing with Python. O'Reilly Me- dia, Inc., 1st edition.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning sense-specific word embeddings by exploiting bilingual resources",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "497--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning sense-specific word embeddings by exploiting bilingual resources. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 497-507, Dublin, Ireland, August. Dublin City Uni- versity and Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 31st International Conference on Machine Learning (ICML-14)",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Tony Jebara and Eric P. Xing, editors, Proceedings of the 31st In- ternational Conference on Machine Learning (ICML- 14), pages 1188-1196. JMLR Workshop and Confer- ence Proceedings.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Per- rot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12:2825-2830, November.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A word embedding approach to predicting the compositionality of multiword expressions",
"authors": [
{
"first": "Bahar",
"middle": [],
"last": "Salehi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "977--983",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the composi- tionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 977-983, Denver, Colorado, May-June. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning word representations by jointly modeling syntagmatic and paradigmatic relations",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2015. Learning word representations by jointly modeling syntagmatic and paradigmatic rela- tions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 136-145, Beijing, China, July. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Docu- ment modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422-1432, Lisbon, Por- tugal, September. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning continuous word embedding with metadata for question retrieval in community question answering",
"authors": [
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Po",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "250--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangyou Zhou, Tingting He, Jun Zhao, and Po Hu. 2015. Learning continuous word embedding with metadata for question retrieval in community question answering. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 250-259, Beijing, China, July. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td colspan=\"3\">CG label Frequency FG label</td><td>Frequency</td></tr><tr><td>Green</td><td>549</td><td>allClear</td><td>367</td></tr><tr><td/><td/><td>followupBye</td><td>16</td></tr><tr><td/><td/><td>supporting</td><td>166</td></tr><tr><td>Amber</td><td>249</td><td>underserved</td><td>34</td></tr><tr><td/><td/><td>currentMildDistress</td><td>40</td></tr><tr><td/><td/><td>followupOk</td><td>165</td></tr><tr><td/><td/><td>pastDistress</td><td>10</td></tr><tr><td>Red</td><td>110</td><td colspan=\"2\">angryWithForumMember 1</td></tr><tr><td/><td/><td>angryWithReachout</td><td>2</td></tr><tr><td/><td/><td>currentAcuteDistress</td><td>87</td></tr><tr><td/><td/><td>followupWorse</td><td>20</td></tr><tr><td>Crisis</td><td>39</td><td>crisis</td><td>39</td></tr></table>",
"num": null,
"type_str": "table",
"text": "CG and FG label sets. Their frequencies represent the number of posts in the labelled dataset."
},
"TABREF2": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "F1 results for 5-fold cross-validation on training data and the official test results from the shared task."
},
"TABREF4": {
"html": null,
"content": "<table><tr><td>System</td><td>Label</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"3\">Ensb-3classifiers-4labels-prob</td><td>Amber</td><td>0.60</td><td>0.57</td><td>0.59</td></tr><tr><td>Red</td><td>0.69</td><td>0.33</td><td>0.45</td></tr><tr><td>Crisis</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td rowspan=\"3\">Ensb-3classifiers-12labels-prob</td><td>Amber</td><td>0.71</td><td>0.53</td><td>0.61</td></tr><tr><td>Red</td><td>0.68</td><td>0.63</td><td>0.65</td></tr><tr><td>Crisis</td><td>0.00</td><td>0.00</td><td>0.00</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Comparison results on the test dataset in terms of precision, recall and F1."
}
}
}
}