{
"paper_id": "W16-0319",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:59:45.710216Z"
},
"title": "The UMD CLPsych 2016 Shared Task System: Text Representation for Predicting Triage of Forum Posts about Mental Health",
"authors": [
{
"first": "Meir",
"middle": [],
"last": "Friedenberg",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Amiri",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": "",
"affiliation": {},
"email": "resnik@umd.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We report on a multiclass classifier for triage of mental health forum posts as part of the CLPsych 2016 shared task. We investigate a number of document representations, including topic models and representation learning to represent posts in semantic space, including context-and emotion-sensitive feature representations of posts.",
"pdf_parse": {
"paper_id": "W16-0319",
"_pdf_hash": "",
"abstract": [
{
"text": "We report on a multiclass classifier for triage of mental health forum posts as part of the CLPsych 2016 shared task. We investigate a number of document representations, including topic models and representation learning to represent posts in semantic space, including context-and emotion-sensitive feature representations of posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The 2016 CLPsych Shared Task focused on automatic triage of posts from ReachOut.com, an anonymous online mental health site for young people that permits peer support and dissemination of mental health information and guidance. Peer support and volunteer services like ReachOut, Koko, 1 and Crisis Text Line 2 offer new and potentially very important ways to help serve mental health needs, given the challenges many people face in obtaining access to mental health providers and the astronomical societal cost of mental illness (Insel, 2008) . In such settings, however, it is essential that moderators be able to quickly and accurately identify posts that require intervention from trained personnel, e.g., where there is potential for harm to self or others. This shared task aimed to make progress on that problem by advancing technology for automatic triage of forum posts. In particular, the task involved prediction of categories for ReachOut posts, with the four categories, {crisis, red, amber, green}, indicating how urgently the post needs attention.",
"cite_spans": [
{
"start": 529,
"end": 542,
"text": "(Insel, 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 itskoko.com 2 crisistextline.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Resnik et al. (2015) , the core of our system is classification via multi-class support vector machines (SVMs) with a linear kernel. We explore topic models as well as context-and emotionsensitive representations of posts, together with baseline bag of words representations, as features for our model.",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "Resnik et al. (2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems Overview",
"sec_num": "2"
},
{
"text": "We considered bag of words and bag of bigrams in conjunction with TF-IDF and binary weighting schemes of these represenations and stopword removal. Our preliminary experiments with development data suggested that binary weighted bag of words features with stopword removal were an effective baseline; we refer to this feature set simply as BOW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Lexical Features",
"sec_num": "2.1"
},
{
"text": "We use Latent Dirichlet Allocation (LDA) (Blei et al., 2003) to create a 30-topic model on the entire ReachOut corpus (including labeled, unlabeled, and test data), as well as posts from the Reddit.com /r/Depression forum, yielding document (forum post) topic probability posteriors as features. The inclusion of the test data among the inputs to LDA can be thought of as a transductive approach to model generation for this shared task aiming to take maximal advantage of available data, although this would prevent post-by-post processing in a realworld setting. ",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Models",
"sec_num": "2.2"
},
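A minimal sketch of these topic features, assuming scikit-learn's LDA implementation, toy posts in place of the ReachOut and /r/Depression data, and 5 topics in place of the paper's 30:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Fit LDA on the term-document counts and use each post's posterior topic
# distribution as its feature vector for the classifier.
posts = [
    "feeling low and cannot sleep at night",
    "sleep problems keep me awake at night",
    "thanks for the support everyone",
    "grateful for all the support here",
]
counts = CountVectorizer().fit_transform(posts)

lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)  # shape (n_posts, n_topics); rows sum to 1
```

In the transductive setup described above, labeled, unlabeled, and test posts would all appear in `posts` before fitting.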
{
"text": "We obtain context-sensitive representations of an input post by concatenating the average word embedding of the input post with its \"context\" information (represented by low dimensional vectors) and passing the resulting vector to a basic autoencoder (Hinton and Salakhutdinov, 2006) . We obtain context vectors for posts via non-negative matrix factorization (NMF) where the disttribution of an input post over the topics in the dataset is used as its context vector. We use the pre-trained 300-dimensional word embeddings provided by Word2Vec. 3 Formally, we use NMF to identify context information for input posts as follows. Given a training dataset with n posts, i.e., X \u2208 R v\u00d7n , where v is the size of a global vocabulary and the scalar k is the number of topics in the dataset, we learn the topic matrix D \u2208 R v\u00d7k and a context matrix C \u2208 R k\u00d7n using the following sparse coding algorithm:",
"cite_spans": [
{
"start": 251,
"end": 283,
"text": "(Hinton and Salakhutdinov, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Sensitive Representation",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min D,C X \u2212 DC 2 F + \u00b5 C 1 ,",
"eq_num": "(1)"
}
],
"section": "Context-Sensitive Representation",
"sec_num": "2.3"
},
{
"text": "s.t. D \u2265 0, C \u2265 0,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Sensitive Representation",
"sec_num": "2.3"
},
{
"text": "where each column in C is a sparse representation of an input over all topics and can be used as context information for its corressponding input post. Note that we obtain the context of test instances by transforming them according to the fitted NMF model on training data. We believe combining test and training data (as discussed above) will further improve the quality of our context vectors. We concatenate the average word embedings and context vectors of input posts and pass them to a basic deep autoencoder (Hinton and Salakhutdinov, 2006) with three hidden layers. The hidden representations produced by the autoencoder will be used as context-sensitive representations of inputs and considered as features in our system.",
"cite_spans": [
{
"start": 516,
"end": 548,
"text": "(Hinton and Salakhutdinov, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Sensitive Representation",
"sec_num": "2.3"
},
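The factorization and concatenation steps can be sketched as follows, under the assumption that scikit-learn's `NMF` is an acceptable stand-in for the sparse coding step of Eq. (1) (its `W` factor plays the role of C transposed, and a sparsity penalty on it can be added via NMF's regularization options). The random vectors below stand in for averaged 300-dimensional Word2Vec embeddings, and the final autoencoder is omitted:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Factor the (posts x vocabulary) matrix to get a nonnegative, low-dimensional
# context vector per post; transform test posts with the fitted model.
train_posts = [
    "feeling low and cannot sleep",
    "sleep problems every night",
    "thanks for the support",
    "grateful for the help here",
]
vec = TfidfVectorizer()
X = vec.fit_transform(train_posts)

k = 3  # number of topics
nmf = NMF(n_components=k, init="nndsvda", random_state=0)
C = nmf.fit_transform(X)  # (n_posts, k) context vectors, C >= 0

C_test = nmf.transform(vec.transform(["cannot sleep at night"]))

# Concatenate (stand-in) average word embeddings with context vectors; the
# paper then passes this concatenation to a three-hidden-layer autoencoder.
rng = np.random.default_rng(0)
mean_emb = rng.normal(size=(len(train_posts), 10))
features = np.hstack([mean_emb, C])
```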
{
"text": "3 code.google.com/p/word2vec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-Sensitive Representation",
"sec_num": "2.3"
},
{
"text": "The emotion-sensitive representation of an input post is obtained by computing the distance (Euclidean distance or cosine similarity) between the average word embedding of the input post with nine categories of emotion words. The emotion categories that we consider are anger, disgust, sadness, fear, guilt, interest, joy, shame, surprise,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion-Sensitive Representation",
"sec_num": "2.4"
},
{
"text": "where each category has a designated word, e.g. \"anger\", and its 40 nearest neighbor words in embedding space according to Euclidean distance. For example, the category for anger contains \"anger\" along with related words like \"resentment\", \"fury\", \"frustration\", \"outrage\", \"disgust\", \"indignation\", \"dissatisfaction\", \"discontentment\", etc. 4 Using the Euclidean distance or cosine similarity between average word embedding of the input post with the embedding of each emotion word yields 311 features for the classifier, one per emotion-word category ignoring the emotion words that were removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion-Sensitive Representation",
"sec_num": "2.4"
},
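A toy sketch of the emotion-sensitive features, assuming 4-dimensional random vectors in place of the 300-dimensional Word2Vec embeddings and three emotion words in place of the roughly 311 used here:

```python
import numpy as np

# Each feature is the cosine similarity between the post's average word
# embedding and one emotion word's embedding (Euclidean distance would work
# analogously). Vocabulary and embeddings here are illustrative stand-ins.
rng = np.random.default_rng(0)
vocab = ["anger", "sadness", "joy", "feel", "grief", "today"]
emb = {w: rng.normal(size=4) for w in vocab}
emotion_words = ["anger", "sadness", "joy"]

def emotion_features(post_words):
    """One cosine-similarity feature per emotion word."""
    vecs = [emb[w] for w in post_words if w in emb]
    post_vec = np.mean(vecs, axis=0)
    feats = []
    for w in emotion_words:
        e = emb[w]
        feats.append(post_vec @ e / (np.linalg.norm(post_vec) * np.linalg.norm(e)))
    return np.array(feats)

feats = emotion_features(["feel", "grief", "today"])
```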
{
"text": "In our experiments we used multi-class SVM classifiers with a linear kernel. Specifically, we used the python scikit-learn module (Pedregosa et al., 2011) , which interfaces with the widely-used libsvm. 5 We employed a one-vs-one decision function, and used the 'balanced' class weight option to set class weights to be inversally proportional to their frequency in the training data. 6 All other parameters were set to their default values.",
"cite_spans": [
{
"start": 130,
"end": 154,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 203,
"end": 204,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Details",
"sec_num": "2.5"
},
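The classifier setup described above can be sketched with scikit-learn's `SVC` on toy 2-dimensional features (real features would be the BOW, topic, and representation vectors):

```python
from sklearn.svm import SVC

# Linear-kernel SVC (libsvm backend) with inverse-frequency class weights.
# SVC trains one-vs-one classifiers internally for multiclass problems.
X = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2], [0.5, 0.5], [0.4, 0.6]]
y = ["green", "green", "red", "red", "amber", "amber"]

clf = SVC(kernel="linear", class_weight="balanced", decision_function_shape="ovo")
clf.fit(X, y)
pred = clf.predict([[0.95, 0.1]])
```

`class_weight="balanced"` reweights each class by `n_samples / (n_classes * n_samples_in_class)`, which matters here given how few crisis and red posts the training data contains.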
{
"text": "Specific feature combinations for our systems are reported in Table 1 and were selected based on development data. While our main criterion for choosing what features to use was Macro-Averaged F-Score, System 3 (emotion-sensitive representations) was selected primarily because of its superior performance on red prediction. Given the importance of red and crisis prediction in this context, we found this system interesting and consider its relative success at red prediction to be worthy of further exploration.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 69,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Classifier Details",
"sec_num": "2.5"
},
{
"text": "Preprocessing: We performed the same basic preprocessing on all posts, including removing URLs and non-ascii characters, unescaping HTML, and expansion of contractions. We also lemmatized the tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "2.6"
},
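The preprocessing steps above can be sketched as follows; the contraction map is a tiny hypothetical stand-in for a fuller one, and lemmatization (e.g. with NLTK's WordNetLemmatizer) is omitted to keep the sketch dependency-free:

```python
import html
import re

CONTRACTIONS = {"can't": "cannot", "won't": "will not", "i'm": "i am"}

def preprocess(text):
    text = html.unescape(text)                      # unescape HTML entities
    text = re.sub(r"https?://\S+", " ", text)       # remove URLs
    text = text.encode("ascii", "ignore").decode()  # drop non-ASCII characters
    text = text.lower()
    for short, full in CONTRACTIONS.items():        # expand contractions
        text = text.replace(short, full)
    return " ".join(text.split())

print(preprocess("I can&#39;t cope today \u2013 see https://example.com"))
```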
{
"text": "Data Splits: As per the suggestion in the shared task description, we set aside the last 250 posts of the training data as development data. Our primary use of the development data was in system development and selecting feature combinations. We also removed one post each from the training and development data as they did not appear to us to have significant linguistic content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "2.6"
},
{
"text": "Tables 2 and 3 show the performance of our submitted systems on development and test data respectively. Table 4 presents the effects of different feature combinations on development data performance, which we used to select our systems for submission.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
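The Official Score used in these tables is the macro-averaged F-score restricted to crisis, red, and amber, which can be computed with scikit-learn's `labels` argument; the toy labels below also illustrate how a single missed crisis post forces an F-score of 0 into the average:

```python
from sklearn.metrics import f1_score

# Illustrative gold and predicted labels: the lone crisis post is missed,
# so its class contributes F = 0 to the macro average over {crisis, red, amber}.
y_true = ["green", "amber", "red", "crisis", "green", "amber"]
y_pred = ["green", "amber", "red", "green", "green", "red"]

official = f1_score(y_true, y_pred, labels=["crisis", "red", "amber"], average="macro")
```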
{
"text": "Test data performance is noticeably worse for all five of our systems than development data performance. A non-negligible part of that seems to be our performace on crisis recall -the fact that there is only one crisis post in the test data set implies that when our system incorrectly labels that post an F-Score of 0 is necessarily averaged in. Evaluating why all five of our systems predict a green label for the crisis post seems like a worthwhile line of inquiry towards improving upon our system. We will conduct such experiments in the future. Our system #3, which used Euclidean distance based emotion-sensitive representation of documents, was submitted because of its outstanding red prediction performance on development data. Given the importance of red and crisis recall in this domain, a system that perfomed particularly well in such an area seems worth exploring. Unfortunately, this red recall rate did not carry over to the test data, so it seems likely that our model simply overfit to the red data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "An examination of Table 4 suggests that it may be difficult to find features that are significantly more effective for this task than bag of words features. In particular, all of the systems listed that outperformed bag of words overall (whether on Macro-Averaged F-Score or Macro-Averaged F-Score over the amber, red, and crisis classes) seem to have done so only minimally. Interestingly, many of the feature sets did outperform bag of words on F-Score for the red class in development data, but this result does not seem to replicate in the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "In this paper we have summarized our contribution to the CLPSych 2016 shared task on triage of mental health forum posts. Our approach used classweighted multi-class SVM classifiers with a linear kernel, and we found binary bag of words features to be reasonably effective for this task. Though topic models and context-and emotion-sensitive vector representations did not perform well independently on this task, when used to supplement bag of words features they did lead to some improvement in test data prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "4"
},
{
"text": "In future work, one direction for potential improvement is the exploration of more complex topic models. In particular, our work utilized \"vanilla\" Latent Dirichlet Allocation, but Resnik et al. (2015) found some success in applying supervised topic modelling techniques to this domain. Furthemore, it would be interesting to introduce domain expertise into the models, whether by interactive topic modelling (Hu et al., 2014) or by providing informed priors, and seeing how that affects performance.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "Resnik et al. (2015)",
"ref_id": "BIBREF5"
},
{
"start": 409,
"end": 426,
"text": "(Hu et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "4"
},
{
"text": "Another interesting direction we hope to explore is tracking changes amongst a user's posts over time. While we only used the four class labels, available sublables included \"followupOk\" for some amber posts and \"followupWorse\" for some red posts. Tracking how a user's language has changed both since the start of their time on the forum and from the start of a given thread seems likely to be able to provide useful features for classification of such cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "4"
},
{
"text": "Finally, the labeled data available for this task was rather limited, and while we used the unlabeled data in the creation of the topic models, our system in general focused on the labeled data. Future work might explore application of semi-supervised models, integrating both the unlabeled ReachOut data and mental health posts from other forums.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "4"
},
{
"text": "We also manually verified the nearest neighbor words to ensure that they correctly represent their corresponding categories, and remove words that appear in at least two categories with opposite sentiment orientation.5 scikit-learn.org/stable/modules/generated/sklearn.svm. SVC.html 6 One-vs-one beat one-vs-all in preliminary experimentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reducing the dimensionality of data with neural networks",
"authors": [
{
"first": "E",
"middle": [],
"last": "Geoffrey",
"suffix": ""
},
{
"first": "Ruslan R",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2006,
"venue": "Science",
"volume": "313",
"issue": "5786",
"pages": "504--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural net- works. Science, 313(5786):504-507.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Interactive topic modeling. Machine learning",
"authors": [
{
"first": "Yuening",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Brianna",
"middle": [],
"last": "Satinoff",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "95",
"issue": "",
"pages": "423--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuening Hu, Jordan Boyd-Graber, Brianna Satinoff, and Alison Smith. 2014. Interactive topic modeling. Ma- chine learning, 95(3):423-469.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Assessing the economic costs of serious mental illness",
"authors": [
{
"first": "",
"middle": [],
"last": "Thomas R Insel",
"suffix": ""
}
],
"year": 2008,
"venue": "American Journal of Psychiatry",
"volume": "165",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas R Insel. 2008. Assessing the economic costs of serious mental illness. American Journal of Psychia- try, 165(6).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duches- nay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825- 2830.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The University of Maryland CLPsych 2015 shared task system. NAACL HLT",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Armstrong",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Claudino",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik, William Armstrong, Leonardo Claudino, and Thang Nguyen. 2015. The University of Mary- land CLPsych 2015 shared task system. NAACL HLT 2015, page 54.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "System Features and Runs",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"text": "F-scores on development data. (Official Score is",
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"5\">Macro-Averaged F-Score over crisis, red, and amber.)</td><td/></tr><tr><td>F-Score\\System</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Green</td><td colspan=\"5\">0.83 0.87 0.84 0.83 0.85</td></tr><tr><td>Amber</td><td colspan=\"5\">0.41 0.5 0.33 0.43 0.48</td></tr><tr><td>Red</td><td colspan=\"5\">0.47 0.44 0.4 0.48 0.44</td></tr><tr><td>Crisis</td><td colspan=\"5\">0.00 0.00 0.00 0.00 0.00</td></tr><tr><td colspan=\"6\">Macro-Averaged 0.43 0.45 0.39 0.44 0.44</td></tr><tr><td>Official Score</td><td colspan=\"5\">0.29 0.31 0.24 0.30 0.31</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"text": "Multi-class F-scores of different feature combinations on development data. (Official Score is Macro-Averaged F-Score over crisis, red, and amber.)",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}