| { |
| "paper_id": "W16-0324", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:54:01.807363Z" |
| }, |
| "title": "Classification of mental health forum posts", |
| "authors": [ |
| { |
| "first": "Glen", |
| "middle": [], |
| "last": "Pink", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "glen.pink@sydney.edu.au" |
| }, |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "wradford@hugo.ai" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Hachey", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ben.hachey@sydney.edu.au" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We detail our approach to the CLPsych 2016 triage of mental health forum posts shared task. We experiment with a number of features in a logistic regression classification approach. Our baseline approach with lexical features from a post and previous posts in the reply chain gives our best performance of 0.33, which is roughly the median for the task.", |
| "pdf_parse": { |
| "paper_id": "W16-0324", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We detail our approach to the CLPsych 2016 triage of mental health forum posts shared task. We experiment with a number of features in a logistic regression classification approach. Our baseline approach with lexical features from a post and previous posts in the reply chain gives our best performance of 0.33, which is roughly the median for the task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The CLPsych 2016 shared task requires the triage of forum posts from the ReachOut.com forums, a support forum for youth mental health issues. The triage task centres on directing forum moderators to posts which required the most immediate attention (Calvo et al., 2016) . For this task, a set of posts from the forum are each annotated with one of the labels crisis, red, amber or green, which indicate decreasing degrees of urgency of moderator addition. All unlabelled posts are made available for systems.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 269, |
| "text": "(Calvo et al., 2016)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This task follows other studies of social media discourse as it relates to clinical psychology (Thompson et al., 2014; Schwartz et al., 2014; Coppersmith et al., 2015; Schrading et al., 2015) . Analysis of ReachOut.com posts is interesting as posts are made by young individuals who have originally come to the forum seeking some kind of help, but over time may participate in several different capacities. Typically most users will initially need support, but this need may substantially increase or decrease over time; users may also support each other or use the forums for activity unrelated to mental health. Our approach to this task was primarily focussed on implementing a straightforward baseline and experimenting with a few ideas derived from experience looking at the data in detail. While the data itself is definitely sequenced, we choose not to model this as a sequence problem, primarily because we expect the meaningful sequences to be fairly short: typically users either create new posts that are generally relevant to the original post in a thread, or reply to a specific post.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 118, |
| "text": "(Thompson et al., 2014;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 119, |
| "end": 141, |
| "text": "Schwartz et al., 2014;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 142, |
| "end": 167, |
| "text": "Coppersmith et al., 2015;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 168, |
| "end": 191, |
| "text": "Schrading et al., 2015)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We further motivate this local post comparison by considering the annotation flowchart distributed with the data. Many labelling decisions are affected by whether the user's state is considered to be the same, or if their condition has gotten worse. Key to this task is capturing change in author language, and identifying how this reflects a change in their stateof-mind and change of condition.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We implement a feature set based on basic post features and author history and thread context, using the sequence of replies that lead to a post as the context for that post. We experiment with a number of additional features, but our baseline approach provides our best result of 0.33, which puts our performance at the median overall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We make use of post lexical features, author history and thread history for classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Prior to extracting features, we perform some basic preprocessing on post text. We unescape HTML entities, remove images and replace emoticons with the name of the emoticon to simplify processing. We remove blockquotes entirely, as we want extracted features to be from the content of the current post. We tokenise using the NLTK TweetTokenizer, as we expect the web forum text to be fairly casual and similiar to the Twitter domain for the purposes of tokenisation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We extract unigrams and bigrams as post features, and continue to use this feature space for the below contexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical features", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Instead of using the sequence of posts in a thread as context, we make use of the chain of replies to a post as the context for that post. We make use of two posts in that context: the most recent post before the current post that has the same author as the current post, and the most recent post to the current post. We retrieve unigrams and bigrams for these posts. We then extract three different types of features: the intersection of unigrams and bigrams with the current post; those that occur in the current post but not the previous post; and those that occur in the previous post but not the current. Note that there are separate feature spaces for author posts and non-author posts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reply chain features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We experimented with a number of features which did not improve results. These include use of ngramfeatures from the first post in thread of the post; use of lemmas instead of words; cosine similarity between post bag-of-words; and thread type. We manually identify these thread types for threads which have a substantially different structure to others, such as the Turning Negatives Into Positives and TwittRO. We identify 1 post as game, 2 as media (e.g. image threads), 5 as semi-structured and 5 as short (e.g. TwittRO).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unused features", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "The released training corpus contains 65,024 posts, 947 of which are annotated with triage labels. For development, we split this into a train set of 797 posts and a development set of 250 posts. We use a scikit-learn logistic regression classifier, using a grid search over a regularization hyperparameters ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and training", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Precision Recall F-score macro-avg 0.42 0.41 0.42 crisis 0.00 (0/0) 0.00 (0/13) 0.00 red 0.58 (14/24) 0.61 (14/23) 0.60 amber 0.68 (40/59) 0.62 (40/64) 0.65 over 10-fold cross validation over the train set. Results on development data in Table 1 . Figure 1 shows the confusion matrix, including green classifications. We note that a large number of confusions happen between amber and green, largely due to their larger representation in the data. For the full task we use the full 947 posts for training. The test set adds an additional 731 posts.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 238, |
| "end": 245, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 248, |
| "end": 256, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "We experimented with using a cascaded classification approach, classifying crisis v. non-crisis, red v. non-red and amber v. non-amber in sequence, however this approach did not perform well. We also experimented with treating the task as a regression task, mapping crisis to a value of 1.0, red to 0.66, amber to 0.33, and green to 0.0. The idea is that we expect there to be a gradient to post severity rather than a distinct underlying set of 4 labels, and this gradient may be better modelled via a regression approach. Our implementation has lower results than our approach using discrete labels, but we consider this to be a possible direction for future approaches to this task. run score accuracy ngvg ngvg accuracy 1 0.33 0.78 0.73 0.85 2 0.32 0.76 0.72 0.83 Table 2 : Official results. ngvg is non-green vs green.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 768, |
| "end": 775, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label", |
| "sec_num": null |
| }, |
| { |
| "text": "Recall F-score crisis 0.00 (0/0) 0.00 (0/1) 0.00 red 0.61 (11/18) 0.41 (11/27) 0.49 amber 0.50 23/46) 0.49 (23/470.49 Table 3 : Run 1 per-label scores.", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 78, |
| "text": "(11/27)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Label Precision", |
| "sec_num": null |
| }, |
| { |
| "text": "We submit two runs, for both L2 (run 1, with regularisation parameter C = 1) and L1 (run 2, with regularisation parameter C = 100) regularisation. Our official results are in Table 2 , with per-label breakdowns of each run in Tables 3 and 4 . While other labellings fall outside the official metric for the shared task, we are interested in the performance of a system trained on only non-green vs green as opposed to all 4 triage labels. We run this configuration with the same settings as run 1. This configuration has an F-score of 0.80 on our development data, and a score of 0.82, which above our multiple label F-score of 0.73. This may be a useful setup for a two-stage classification or an actual implementation for ReachOut.com moderators.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 182, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 226, |
| "end": 240, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Run 1 performs at the median, and may be an informative baseline. Interestingly, many of the features that we explored decreased or did not significantly improve performance. This is possibly due to feature sparsity: the amount of training data is relatively small, and most of these features likely are not informative. We note that L2 regularisation gives our best performance, the data set is small, and L2 keeping more features from the training data helps compensate for feature sparsity better than L1 regularisation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Notably, both of our runs returned very few crisis label Precision Recall F-score crisis 0.00 (0/0) 0.00 (0/1) 0.00 red 0.52 (11/21) 0.41 (11/27) 0.46 amber 0.50 (23/46) 0.49 (23/47) 0.49 labellings: both returned 1 labelling which was incorrect. This is somewhat surprising, particularly as a label F-score of 0% is particularly penalised with a macro-averaged metric, however given the lack of instances for training this is not unreasonable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We participated in the CLPsych 2016 shared task, providing a baseline approach using a small feature set that gave a near-median performance of 0.33. We look forward to continuing to work on this task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Augmenting Online Mental Health Support Services. Integrating Technology in Positive Psychology Practice", |
| "authors": [ |
| { |
| "first": "Rafael", |
| "middle": [ |
| "A" |
| ], |
| "last": "Calvo", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "Sazzad" |
| ], |
| "last": "Hussain", |
| "suffix": "" |
| }, |
| { |
| "first": "Kjartan", |
| "middle": [], |
| "last": "Nordbo", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Hickie", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Danckwerts", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafael A Calvo, M Sazzad Hussain, Kjartan Nordbo, Ian Hickie, David Milne, and P Danckwerts. 2016. Aug- menting Online Mental Health Support Services. In- tegrating Technology in Positive Psychology Practice, page 82.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "CLPsych 2015 Shared Task: Depression and PTSD on Twitter", |
| "authors": [ |
| { |
| "first": "Glen", |
| "middle": [], |
| "last": "Coppersmith", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Harman", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristy", |
| "middle": [], |
| "last": "Hollingshead", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality", |
| "volume": "", |
| "issue": "", |
| "pages": "31--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015. CLPsych 2015 Shared Task: Depression and PTSD on Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 31-39, Denver, Col- orado, June 5. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "An Analysis of Domestic Abuse Discourse on Reddit", |
| "authors": [ |
| { |
| "first": "Nicolas", |
| "middle": [], |
| "last": "Schrading", |
| "suffix": "" |
| }, |
| { |
| "first": "Cecilia", |
| "middle": [], |
| "last": "Ovesdotter Alm", |
| "suffix": "" |
| }, |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Ptucha", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Homan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2577--2583", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicolas Schrading, Cecilia Ovesdotter Alm, Ray Ptucha, and Christopher Homan. 2015. An Analysis of Do- mestic Abuse Discourse on Reddit. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2577-2583, Lisbon, Por- tugal, September. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Towards Assessing Changes in Degree of Depression through Facebook", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Eichstaedt", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [ |
| "L" |
| ], |
| "last": "Kern", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Sap", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Stillwell", |
| "suffix": "" |
| }, |
| { |
| "first": "Michal", |
| "middle": [], |
| "last": "Kosinski", |
| "suffix": "" |
| }, |
| { |
| "first": "Lyle", |
| "middle": [], |
| "last": "Ungar", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality", |
| "volume": "", |
| "issue": "", |
| "pages": "118--125", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Andrew Schwartz, Johannes Eichstaedt, Margaret L. Kern, Gregory Park, Maarten Sap, David Stillwell, Michal Kosinski, and Lyle Ungar. 2014. Towards Assessing Changes in Degree of Depression through Facebook. In Proceedings of the Workshop on Com- putational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 118-125, Baltimore, Maryland, USA, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Predicting military and veteran suicide risk: Cultural aspects", |
| "authors": [ |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Thompson", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Bryan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Poulin", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul Thompson, Craig Bryan, and Chris Poulin. 2014. Predicting military and veteran suicide risk: Cultural aspects. In Proceedings of the Workshop on Compu- tational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 1-6, Balti- more, Maryland, USA, June. Association for Compu- tational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Confusion matrix on the development data.", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "text": "Final scores for run 1 settings on development data.", |
| "html": null, |
| "content": "<table/>" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "text": "Run 2 per-label scores.", |
| "html": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |