{
"paper_id": "W16-0318",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:47:35.447799Z"
},
"title": "Text Analysis and Automatic Triage of Posts in a Mental Health Forum",
"authors": [
{
"first": "Ehsaneddin",
"middle": [],
"last": "Asgari",
"suffix": "",
"affiliation": {},
"email": "asgari@ischool.berkeley.edu"
},
{
"first": "Soroush",
"middle": [],
"last": "Nasiriany",
"suffix": "",
"affiliation": {},
"email": "snasiriany@berkeley.edu"
},
{
"first": "Mohammad",
"middle": [
"R K"
],
"last": "Mofrad",
"suffix": "",
"affiliation": {},
"email": "mofrad@berkeley.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an approach for automatic triage of message posts in ReachOut.com mental health forum, which was a shared task in the 2016 Computational Linguistics and Clinical Psychology (CLPsych). This effort is aimed at providing the trained moderators of Rea-chOut.com with a systematic triage of forum posts, enabling them to more efficiently support the young users aged 14-25 communicating with each other about their issues. We use different features and classifiers to predict the users' mental health states, marked as green, amber, red, and crisis. Our results show that random forests have significant success over our baseline mutli-class SVM classifier. In addition, we perform feature importance analysis to characterize key features in identification of the critical posts.",
"pdf_parse": {
"paper_id": "W16-0318",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an approach for automatic triage of message posts in ReachOut.com mental health forum, which was a shared task in the 2016 Computational Linguistics and Clinical Psychology (CLPsych). This effort is aimed at providing the trained moderators of Rea-chOut.com with a systematic triage of forum posts, enabling them to more efficiently support the young users aged 14-25 communicating with each other about their issues. We use different features and classifiers to predict the users' mental health states, marked as green, amber, red, and crisis. Our results show that random forests have significant success over our baseline mutli-class SVM classifier. In addition, we perform feature importance analysis to characterize key features in identification of the critical posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Mental health issues profoundly impact the wellbeing of those afflicted and the safety of society as a whole (\u00dcst\u00fcn et al., 2004) . Major effort is still needed to identify and aid those who are suffering from mental illness but doing so in a case by case basis is not practical and expensive (Mark et al., 2005) . These limitations inspired us to develop an automated mechanism that can robustly classify the mental state of a person. The abundance of publicly available data allows us to access each person's record of comments and message posts online in an effor to predict and evaluate their mental health.",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(\u00dcst\u00fcn et al., 2004)",
"ref_id": null
},
{
"start": 293,
"end": 312,
"text": "(Mark et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The CLPsych 2016 Task accumulates a selection of 65,514 posts from ReachOut.com, dedicated to providing a means for members aged 14-25 to express their thoughts in an anonymous environment. These posts have all been selected from the years 2012 through 2015. Of these posts, 947 have been carefully analyzed, and each assigned a label: green (the user shows no sign of mental health issues), amber (the user's posts should be reviewed further to identify any issues), red (there is a very high likelihood that the user has mental health issues), and crisis (the user needs immediate attention). These 947 postslabel pairs represent our train data. We then use the train data to produce a model that assigns a label to any generic post. A separate selection of 241 posts are dedicated as the test data, to be used to evaluate the accuracy of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task Description",
"sec_num": "1.1"
},
{
"text": "Our approach for automatic triage of posts in the mental health forum, much like any other classification pipeline, is composed of three phases: feature extraction, selection of learning algorithm, and validation and parameter tuning in a cross validation framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Feature extraction is one of the key steps in any machine learning task, which can significantly influence the performance of learning algorithms (Bengio et al., 2013) . In the feature extraction phase we extracted the following information from the given XML files of forum posts: author, the authors rank-ing in the forum, time of submission and editing, number of likes and views, the body of the post, the subject, the thread associated to the post, and changeability of the text. For the representation of textual data (subject and body) we use both tfidf and the word embedding representation of the data (Mikolov et al., 2013b; Mikolov et al., 2013a; Zhang et al., 2011) . Skip-gram word embedding which is trained in the course of language modeling is shown to capture syntactic and semantic regularities in the data (Mikolov et al., 2013c; Mikolov et al., 2013a) . For the purpose of training the word embeddings we use skip-gram neural networks (Mikolov et al., 2013a) on the collection of all the textual data (subject/text) of 65,514 posts provided in the shared task. In our word embedding training, we use the word2vec implementation of skip-gram (Mikolov et al., 2013b) . We set the dimension of word vectors to 100, and the window size to 10 and we sub-sample the frequent words by the ratio 1 10 3 . Subsequently, to encode a body/subject of a post we use tf-idf weighted sum of word-vectors in that post (Le and Mikolov, 2014) . The features are summarized in Table 1. To ensure being inclusive in finding important features, stop words are not removed.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Bengio et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 611,
"end": 634,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF8"
},
{
"start": 635,
"end": 657,
"text": "Mikolov et al., 2013a;",
"ref_id": "BIBREF7"
},
{
"start": 658,
"end": 677,
"text": "Zhang et al., 2011)",
"ref_id": "BIBREF14"
},
{
"start": 825,
"end": 848,
"text": "(Mikolov et al., 2013c;",
"ref_id": "BIBREF9"
},
{
"start": 849,
"end": 871,
"text": "Mikolov et al., 2013a)",
"ref_id": "BIBREF7"
},
{
"start": 955,
"end": 978,
"text": "(Mikolov et al., 2013a)",
"ref_id": "BIBREF7"
},
{
"start": 1161,
"end": 1184,
"text": "(Mikolov et al., 2013b)",
"ref_id": "BIBREF8"
},
{
"start": 1430,
"end": 1444,
"text": "Mikolov, 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature extraction",
"sec_num": "2.1"
},
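The post-encoding step described above (a tf-idf weighted sum of word vectors per post) can be sketched as follows; the word vectors, vocabulary, and idf values below are toy stand-ins, not the trained 100-dimensional word2vec embeddings from the paper.

```python
from collections import Counter

def tfidf_weighted_doc_vector(tokens, word_vecs, idf, dim=4):
    """Encode a post as the tf-idf weighted sum of its word vectors."""
    tf = Counter(tokens)               # raw term frequencies in the post
    vec = [0.0] * dim
    for word, count in tf.items():
        if word not in word_vecs:
            continue                   # skip out-of-vocabulary tokens
        weight = count * idf.get(word, 0.0)   # tf-idf weight of this word
        for i, x in enumerate(word_vecs[word]):
            vec[i] += weight * x
    return vec

# Toy 4-dimensional embeddings and idf values (illustrative only).
word_vecs = {"feel": [1.0, 0.0, 0.0, 0.0],
             "alone": [0.0, 1.0, 0.0, 0.0],
             "today": [0.0, 0.0, 1.0, 0.0]}
idf = {"feel": 2.0, "alone": 3.0, "today": 1.0}

print(tfidf_weighted_doc_vector(["feel", "feel", "alone"], word_vecs, idf))
# -> [4.0, 3.0, 0.0, 0.0]
```

In the paper's setting, `word_vecs` would hold the skip-gram embeddings trained on all 65,514 posts and `idf` the inverse document frequencies computed over the same collection.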
{
"text": "The Random Forest (RF) classifier (Breiman, 2001) is employed to predict the users mental health states (green, red, amber, and crisis) from the posts in the ReachOut forum. A random forest is an ensemble method based on use of multiple decision trees (Breiman, 2001) . Random forest classifiers have several advantages, including estimation of important features in the classification, efficiency when a large proportion of the data is missing, and efficiency when dealing with a large number of features (Cutler et al., 2012) ; therefore random forests fit our problem very well. The validation step is conducted over 947 labeled instances, in a 10xFold cross validation process. Different parameters of random forests, including the number of trees, the measure of split quality, the number of features in splits, and the maximum depth are tuned using cross-validation. In this work, we use Scikit implementation of Random Forests (Pedregosa et al., 2011) .",
"cite_spans": [
{
"start": 34,
"end": 49,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF2"
},
{
"start": 252,
"end": 267,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF2"
},
{
"start": 506,
"end": 527,
"text": "(Cutler et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 934,
"end": 958,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Triage",
"sec_num": "2.2"
},
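The parameter tuning described above amounts to scoring every combination of the named random-forest parameters under 10-fold cross-validation. A minimal sketch of the grid enumeration is below; the specific values are hypothetical, as the paper does not report the grid it searched.

```python
from itertools import product

# Hypothetical parameter grid mirroring the tuned settings named in the
# text: number of trees, split criterion, features per split, max depth.
grid = {"n_trees": [100, 500],
        "criterion": ["gini", "entropy"],
        "max_features": ["sqrt", "log2"],
        "max_depth": [None, 10]}

def all_configs(grid):
    """Enumerate every parameter combination for cross-validated tuning."""
    keys = list(grid)
    return [dict(zip(keys, values))
            for values in product(*(grid[k] for k in keys))]

configs = all_configs(grid)
print(len(configs))  # -> 16 combinations to score with 10-fold CV
```

In practice each configuration would be fit and scored on the 947 labeled posts with 10-fold cross-validation, keeping the configuration with the best mean accuracy.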
{
"text": "Our results on the training set show that incorpo-ration of unlabeled data in the training using label propagation by means of nearest-neighbor search does not increase the classification accuracy. Therefore, the unlabeled data is not incorporated in the training. For the comparison phase, we consider multiclass Support Vector Machine classifier (SVM) with radial basis function kernel as a baseline method (Cortes and Vapnik, 1995; Weston and Watkins, 1998) .",
"cite_spans": [
{
"start": 409,
"end": 434,
"text": "(Cortes and Vapnik, 1995;",
"ref_id": "BIBREF3"
},
{
"start": 435,
"end": 460,
"text": "Weston and Watkins, 1998)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Triage",
"sec_num": "2.2"
},
{
"text": "Our results show that random forests have significant success over SVM classifiers. The 4-ways classification accuracies are summarized in Table 3 . The evaluations on the test set for the random forest approach are summarized in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 230,
"end": 237,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Random Forests can easily provide us with the most relevant features in the classification (Cutler et al., 2012; Breiman, 2001 ). Random Forest consists of a number of decision trees. In the training procedure, it can be calculated how much a feature decreases the weighted impurity in a tree. The impurity decrease for each feature can be averaged and normalized over all trees of the ensemble and the features can be ranked according to this measure (Breiman et al., 1984; Breiman, 2001) . We extracted the most discriminative features in the automatic triage of the posts using mean decrease impurity for the best Random Forest we obtained in the cross-validation (Breiman et al., 1984) .",
"cite_spans": [
{
"start": 91,
"end": 112,
"text": "(Cutler et al., 2012;",
"ref_id": "BIBREF4"
},
{
"start": 113,
"end": 126,
"text": "Breiman, 2001",
"ref_id": "BIBREF2"
},
{
"start": 452,
"end": 474,
"text": "(Breiman et al., 1984;",
"ref_id": "BIBREF1"
},
{
"start": 475,
"end": 489,
"text": "Breiman, 2001)",
"ref_id": "BIBREF2"
},
{
"start": 667,
"end": 689,
"text": "(Breiman et al., 1984)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Important Features",
"sec_num": "3.1"
},
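The averaging-and-normalizing step described above can be sketched in a few lines; this is a toy illustration of mean decrease impurity ranking, not scikit-learn's internal implementation.

```python
def rank_features_by_mdi(per_tree_importances):
    """Average per-tree impurity decreases over the ensemble, normalize
    them to sum to 1, and return feature indices ranked from most to
    least important (mean decrease impurity)."""
    n_trees = len(per_tree_importances)
    n_feats = len(per_tree_importances[0])
    mean_imp = [sum(tree[f] for tree in per_tree_importances) / n_trees
                for f in range(n_feats)]
    total = sum(mean_imp) or 1.0          # guard against all-zero importances
    norm = [x / total for x in mean_imp]  # normalized importances
    ranked = sorted(range(n_feats), key=lambda f: norm[f], reverse=True)
    return ranked, norm

# Two toy trees over three features: feature 2 decreases impurity most.
trees = [[0.1, 0.2, 0.7], [0.2, 0.1, 0.7]]
ranked, norm = rank_features_by_mdi(trees)
print(ranked)  # -> [2, 0, 1] (ties broken by feature index)
```

With the real forest, the top of this ranking yields the discriminative features reported in Table 4.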
{
"text": "Our results shows that from the top 100 features, 88 100 were related to the frequency of particular words in the body of the post, 4 100 were related to the posting/editing time (00:00 to 23:00) and the day in the month (1 st to 31 th ), 4 100 were indication of the author and author ranking, 2 100 were related to the frequency of words in the subject, 1 100 was the number of views, and 1 100 was the number of likes a post gets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Important Features",
"sec_num": "3.1"
},
{
"text": "The top 50 discriminative features, their importance, and their average values for each class are provided in Table 3 .1. We have also presented the inverse document frequency (IDF) to identify how much information each word has encoded within the collection of posts (Robertson, 2004) . Many interesting patterns can be observed in the word usage of each class. For example, the word 'feel' significantly more often occurs in the red and crisis posts. Surprisingly, there were some stop-words among the most important features. For instance, words 'to' and 'not', on average occur in green posts 1 2 of times of non-green posts. Another example is the usage of the word 'me', which occurs more frequently in non-green posts. Furthermore, the posts with more 'likes' are less likely to be non-green.",
"cite_spans": [
{
"start": 268,
"end": 285,
"text": "(Robertson, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Important Features",
"sec_num": "3.1"
},
{
"text": "Subject: As indicated in Table 3 .1 posts which have word 're' in their subjects are more likely to belong to the green class.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Important Features",
"sec_num": "3.1"
},
{
"text": "Time: As shown in Figure 1 and Table 3 .1 the red posts on average are submitted on a day closer to the end of the month. In addition, the portion of red and crisis message posts in the interval of 5 A.M. to 7 A.M. was much higher than the green and amber posts.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 31,
"end": 38,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Important Features",
"sec_num": "3.1"
},
{
"text": "In this work, we explored the automatic triage of message posts in a mental health forum. Using Random Forest classifiers we obtain a higher triage accuracy in comparison with our baseline method, i.e. a mutli-class support vector machine. Our results showed that incorporation of unlabeled data did not increase the classification accuracy of Random Forest, which could be due to the fact that Random Forests themselves are efficient enough in dealing with missing data points (Cutler et al., 2012) . Furthermore, our results suggest that employing full vocabularies would be more discriminative than using sentence embedding. This could be interpreted as the importance of occurrence of particular words rather than particular concepts. In addition, taking advantage of the capability of Random Forest in the estimation of important features in classification, we explored the most relevant features contributing in the automatic triage. Table 4 : The 50 most discriminative features of posts and their mean values for each class of green, amber, red, and crisis, which are ranked according to their feature importance. For the words we have also provided their IDF. ",
"cite_spans": [
{
"start": 478,
"end": 499,
"text": "(Cutler et al., 2012)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 940,
"end": 947,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "Fruitful discussions with Meshkat Ahmadi, Mohsen Mahdavi, and Mohammad Soheilypour are gratefully acknowledged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Vincent",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on",
"volume": "35",
"issue": "8",
"pages": "1798--1828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Aaron Courville, and Pierre Vincent. 2013. Representation learning: A review and new per- spectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798-1828.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Classification and regression trees",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Olshen",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman, Jerome Friedman, Charles J Stone, and Richard A Olshen. 1984. Classification and regres- sion trees. CRC press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Random forests. Machine learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine learning, 45(1):5-32.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine learning",
"volume": "20",
"issue": "3",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Random forests",
"authors": [
{
"first": "Adele",
"middle": [],
"last": "Cutler",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cutler",
"suffix": ""
},
{
"first": "John R",
"middle": [],
"last": "Stevens",
"suffix": ""
}
],
"year": 2012,
"venue": "Ensemble Machine Learning",
"volume": "",
"issue": "",
"pages": "157--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adele Cutler, D Richard Cutler, and John R Stevens. 2012. Random forests. In Ensemble Machine Learn- ing, pages 157-175. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1405.4053"
]
},
"num": null,
"urls": [],
"raw_text": "Quoc V Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. arXiv preprint arXiv:1405.4053.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Us spending for mental health and substance abuse treatment",
"authors": [
{
"first": "Tami",
"middle": [
"L"
],
"last": "Mark",
"suffix": ""
},
{
"first": "Rosanna",
"middle": [
"M"
],
"last": "Coffey",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Vandivort-Warren",
"suffix": ""
},
{
"first": "Hendrick",
"middle": [
"J"
],
"last": "Harwood",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tami L Mark, Rosanna M Coffey, Rita Vandivort-Warren, Hendrick J Harwood, et al. 2005. Us spending for mental health and substance abuse treatment, 1991- 2001. Health Affairs, 24:W5.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746- 751.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. The Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Understanding inverse document frequency: on theoretical arguments for idf",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of documentation",
"volume": "60",
"issue": "5",
"pages": "503--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson. 2004. Understanding inverse doc- ument frequency: on theoretical arguments for idf. Journal of documentation, 60(5):503-520.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Global burden of depressive disorders in the year 2000",
"authors": [
{
"first": "T",
"middle": [
"B"
],
"last": "\u00dcst\u00fcn",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Ayuso-Mateos",
"suffix": ""
},
{
"first": "Somnath",
"middle": [],
"last": "Chatterji",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Mathers",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"JL"
],
"last": "Murray",
"suffix": ""
}
],
"year": 2004,
"venue": "The British journal of psychiatry",
"volume": "184",
"issue": "5",
"pages": "386--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TB\u00dcst\u00fcn, Joseph L Ayuso-Mateos, Somnath Chatterji, Colin Mathers, and Christopher JL Murray. 2004. Global burden of depressive disorders in the year 2000. The British journal of psychiatry, 184(5):386-392.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-class support vector machines",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston and Chris Watkins. 1998. Multi-class sup- port vector machines. Technical report, Citeseer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A comparative study of tf* idf, lsi and multi-words for text classification",
"authors": [
{
"first": "Wen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Taketoshi",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Xijin",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2011,
"venue": "Expert Systems with Applications",
"volume": "38",
"issue": "3",
"pages": "2758--2765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen Zhang, Taketoshi Yoshida, and Xijin Tang. 2011. A comparative study of tf* idf, lsi and multi-words for text classification. Expert Systems with Applications, 38(3):2758-2765.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Histogram of message posting time distribution for each mental health state (crisis, red, amber, and green). The left plots show distribution of posts in days of the month (1-31) and the right plots show the distribution of the hours of the day.",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "Features Extracted from ReachOut forum posts Feature Description Length Author One hot representation of unique authors in 65755 posts. 1605 Ranking of the author One hot representation of the author category. 25 Submission timeSeparated numerical representations of year, day, month, and the hour that a post is submitted to the forum.4Edit time Separated numerical representations of year, day, month, and the hour that a post is edited in the forum.",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>4</td></tr><tr><td>Likes</td><td>The number of likes a post gets.</td><td>1</td></tr><tr><td>Views</td><td>The number of times a post is viewed by the forum users.</td><td>1</td></tr><tr><td>Body</td><td>Tf-idf representation of the text in the body of the post.</td><td>55758</td></tr><tr><td>Subject</td><td>Tf-idf representation of the text in the subject of the post.</td><td>3690</td></tr><tr><td>Embedded-Body</td><td>Embedding representation of the text in the body of the</td><td>100</td></tr><tr><td/><td>post.</td><td/></tr><tr><td>Embedded-Subject</td><td>Embedding representation of the text in the subject of the</td><td>100</td></tr><tr><td/><td>post.</td><td/></tr><tr><td>Thread</td><td>One hot representation of the thread of the post.</td><td>3910</td></tr><tr><td>Read only</td><td>If the post is readonly.</td><td>1</td></tr></table>"
},
"TABREF1": {
"html": null,
"text": "List of features that have been used in the automatic triage of ReachOut forum posts",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Features a a a a a a a a a a a a a a Classifiers Tf-idf features</td><td colspan=\"2\">Random Forest Classifier SVM Classifier 71.28% \u00b1 2.9% 42.2% \u00b1 3.1%</td></tr><tr><td>Embedding features</td><td>71.26% \u00b1 4.0%</td><td>42.2% \u00b1 4.0%</td></tr></table>"
},
"TABREF2": {
"html": null,
"text": "The average 4-ways classification accuracies in 10xFold cross-validation for the random forest and support vector machine classifiers tuned for the best parameters on two different sets of features. Embedding features refer to use of embeddings for the body and the subject instead of tf-idf representations.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Methods</td><td colspan=\"2\">Accuracy Non-green vs . green accuracy</td></tr><tr><td>Random Forest &amp; tf-idf features</td><td>79%</td><td>86%</td></tr><tr><td>Random Forest &amp; embedding features</td><td>78%</td><td>86%</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "The results of evaluation over 241 test data points.",
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}