{
"paper_id": "E17-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:51:26.716011Z"
},
"title": "Multi-Task Learning for Mental Health using Social Media Text",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Benton",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "adrian@cs.jhu.edu"
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": "mmitchellai@google.com"
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework. By modeling multiple conditions, the system learns to make predictions about suicide risk and mental health at a low false positive rate. Conditions are modeled as tasks in a multitask learning (MTL) framework, with gender prediction as an additional auxiliary task. We demonstrate the effectiveness of multi-task learning by comparison to a well-tuned single-task baseline with the same number of parameters. Our best MTL model predicts potential suicide attempt, as well as the presence of atypical mental health, with AUC > 0.8. We also find additional large improvements using multi-task learning on mental health tasks with limited training data.",
"pdf_parse": {
"paper_id": "E17-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce initial groundwork for estimating suicide risk and mental health in a deep learning framework. By modeling multiple conditions, the system learns to make predictions about suicide risk and mental health at a low false positive rate. Conditions are modeled as tasks in a multitask learning (MTL) framework, with gender prediction as an additional auxiliary task. We demonstrate the effectiveness of multi-task learning by comparison to a well-tuned single-task baseline with the same number of parameters. Our best MTL model predicts potential suicide attempt, as well as the presence of atypical mental health, with AUC > 0.8. We also find additional large improvements using multi-task learning on mental health tasks with limited training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Suicide is one of the leading causes of death worldwide, and over 90% of individuals who die by suicide experience mental health conditions. 1 However, detecting the risk of suicide, as well as monitoring the effects of related mental health conditions, is challenging. Traditional methods rely on both self-reports and impressions formed during short sessions with a clinical expert, but it is often unclear when suicide is a risk in particular. 2 Consequently, conditions leading to preventable suicides are often not adequately addressed.",
"cite_spans": [
{
"start": 447,
"end": 448,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automated monitoring and risk assessment of patients' language has the potential to complement traditional assessment methods, providing objective measurements to motivate further care and additional support for people with difficulties related to mental health. This paves the way towards verifying the need for additional care with insurance coverage, for example, as well as offering direct benefits to clinicians and patients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We explore some of the possibilities in the deep learning and mental health space using written social media text that people with different mental health conditions are already producing. Uncovering methods that work with such text provides the opportunity to help people with different mental health conditions by leveraging a task they are already participating in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social media text carries implicit information about the author, which has been modeled in natural language processing (NLP) to predict author characteristics such as age (Goswami et al., 2009; Rosenthal and McKeown, 2011; Nguyen et al., 2014) , gender (Sarawgi et al., 2011; Ciot et al., 2013; Liu and Ruths, 2013; Volkova et al., 2015; Hovy, 2015) , personality (Schwartz et al., 2013; Volkova et al., 2014; Plank and Hovy, 2015; Preo\u0163iuc-Pietro et al., 2015) , and occupation (Preotiuc-Pietro et al., 2015) . Similar text signals have been effectively used to predict mental health conditions such as depression (De Choudhury et al., 2013; Coppersmith et al., 2015b; Schwartz et al., 2014) , suicidal ideation (Coppersmith et al., 2016; Huang et al., 2015) , schizophrenia (Mitchell et al., 2015) or post-traumatic stress disorder (PTSD) (Pedersen, 2015) .",
"cite_spans": [
{
"start": 171,
"end": 193,
"text": "(Goswami et al., 2009;",
"ref_id": "BIBREF13"
},
{
"start": 194,
"end": 222,
"text": "Rosenthal and McKeown, 2011;",
"ref_id": "BIBREF28"
},
{
"start": 223,
"end": 243,
"text": "Nguyen et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 253,
"end": 275,
"text": "(Sarawgi et al., 2011;",
"ref_id": "BIBREF30"
},
{
"start": 276,
"end": 294,
"text": "Ciot et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 295,
"end": 315,
"text": "Liu and Ruths, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 316,
"end": 337,
"text": "Volkova et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 338,
"end": 349,
"text": "Hovy, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 387,
"text": "(Schwartz et al., 2013;",
"ref_id": "BIBREF31"
},
{
"start": 388,
"end": 409,
"text": "Volkova et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 410,
"end": 431,
"text": "Plank and Hovy, 2015;",
"ref_id": "BIBREF25"
},
{
"start": 432,
"end": 461,
"text": "Preo\u0163iuc-Pietro et al., 2015)",
"ref_id": "BIBREF26"
},
{
"start": 479,
"end": 509,
"text": "(Preotiuc-Pietro et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 615,
"end": 642,
"text": "(De Choudhury et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 643,
"end": 669,
"text": "Coppersmith et al., 2015b;",
"ref_id": "BIBREF7"
},
{
"start": 670,
"end": 692,
"text": "Schwartz et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 713,
"end": 739,
"text": "(Coppersmith et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 740,
"end": 759,
"text": "Huang et al., 2015)",
"ref_id": "BIBREF17"
},
{
"start": 776,
"end": 799,
"text": "(Mitchell et al., 2015)",
"ref_id": "BIBREF21"
},
{
"start": 841,
"end": 857,
"text": "(Pedersen, 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, these studies typically model each condition in isolation, which misses the opportunity to model coinciding influence factors. Tasks with underlying commonalities (e.g., part-of-speech tagging, parsing, and NER) have been shown to benefit from multi-task learning (MTL), as the learning implicitly leverages interactions between them (Caruana, 1993; Sutton et al., 2007; Rush et al., 2010; Collobert et al., 2011; S\u00f8gaard and Goldberg, 2016) . Suicide risk and related mental health conditions are therefore good candidates for modeling in a multi-task framework.",
"cite_spans": [
{
"start": 343,
"end": 358,
"text": "(Caruana, 1993;",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 379,
"text": "Sutton et al., 2007;",
"ref_id": "BIBREF34"
},
{
"start": 380,
"end": 398,
"text": "Rush et al., 2010;",
"ref_id": "BIBREF29"
},
{
"start": 399,
"end": 422,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 423,
"end": 450,
"text": "S\u00f8gaard and Goldberg, 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose multi-task learning for detecting suicide risk and mental health conditions. The tasks of our model include neuroatypicality (i.e., atypical mental health) and suicide attempt, as well as the related mental health conditions of anxiety, depression, eating disorder, panic attacks, schizophrenia, bipolar disorder, and posttraumatic stress disorder (PTSD), and we explore the effect of task selection on model performance. We additionally include the effect of modeling gender, which has been shown to improve accuracy in tasks using social media text (Volkova et al., 2013; Hovy, 2015) .",
"cite_spans": [
{
"start": 577,
"end": 599,
"text": "(Volkova et al., 2013;",
"ref_id": "BIBREF35"
},
{
"start": 600,
"end": 611,
"text": "Hovy, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Predicting suicide risk and several mental health conditions jointly opens the possibility for the model to leverage a shared representation for conditions that frequently occur together, a phenomenon known as comorbidity. Further including gender reflects the fact that gender differences are found in the patterns of mental health (WHO, 2016), which may help to sharpen the model. The MTL framework we propose allows such shared information across predictions and enables the inclusion of several loss functions with a common shared underlying representation. This approach is flexible enough to extend to factors other than the ones shown here, provided suitable data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We find that choosing tasks that are prerequisites or related to the main task is critical for learning a strong model, similar to Caruana (1996) . We further find that modeling gender improves accuracy across a variety of conditions, including suicide risk. The best-performing model from our experiments demonstrates that multi-task learning is a promising new direction in automated assessment of mental health and suicide risk, with possible application to the clinical domain.",
"cite_spans": [
{
"start": 131,
"end": 145,
"text": "Caruana (1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We demonstrate the utility of MTL in predicting mental health conditions from social user text -a notoriously difficult task (Coppersmith et al., 2015a; Coppersmith et al., 2015b ) -with potential application to detecting suicide risk. 2. We explore the influence of task selection on prediction performance, including the effect of gender. 3. We show how to model tasks with a large number of positive examples to improve the prediction accuracy of tasks with a small number of positive examples. 4. We compare the MTL model against a singletask model with the same number of parameters, which directly evaluates the multi-task learning approach.",
"cite_spans": [
{
"start": 128,
"end": 155,
"text": "(Coppersmith et al., 2015a;",
"ref_id": "BIBREF6"
},
{
"start": 156,
"end": 181,
"text": "Coppersmith et al., 2015b",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our contributions",
"sec_num": null
},
{
"text": "Positive Rate at 10% false alarms by up to 9.7% absolute (for anxiety), a result with direct impact for clinical applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed MTL model increases the True",
"sec_num": "5."
},
{
"text": "As with any author-attribute detection, there is the danger of abusing the model to single out people (overgeneralization, see Hovy and Spruit (2016) ). We are aware of this danger, and sought to minimize the risk. For this reason, we don't provide a selection of features or representative examples. The experiments in this paper were performed with a clinical application in mind, and use carefully matched (but anonymized) data, so the distribution is not representative of the population as a whole. The results of this paper should therefore not be interpreted as a means to assess mental health conditions in social media in general, but as a test for the applicability of MTL in a well-defined clinical setting.",
"cite_spans": [
{
"start": 127,
"end": 149,
"text": "Hovy and Spruit (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disclaimer",
"sec_num": "2"
},
{
"text": "A neural multi-task architecture opens the possibility of leveraging commonalities and differences between mental conditions. Previous work (Collobert et al., 2011; Caruana, 1996; Caruana, 1993) has indicated that such an architecture allows for sharing parameters across tasks, and can be beneficial when there is varying degrees of annotation across tasks. 3 This makes MTL particularly compelling in light of mental health comorbidity, and given that different conditions have different amounts of associated data. Previous MTL approaches have shown considerable improvements over single task models, and",
"cite_spans": [
{
"start": 140,
"end": 164,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF5"
},
{
"start": 165,
"end": 179,
"text": "Caruana, 1996;",
"ref_id": "BIBREF3"
},
{
"start": 180,
"end": 194,
"text": "Caruana, 1993)",
"ref_id": "BIBREF2"
},
{
"start": 359,
"end": 360,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
{
"text": "Figure 1: STL model in plate notation (left): weights trained independently for each task t (e.g., anxiety, depression) of the T tasks. MTL model (right): shared weights trained jointly for all tasks, with task-specific hidden layers. Curves in ovals represent the type of activation used at each layer (rectified linear unit or sigmoid). Hidden layers are shaded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
{
"text": "the arguments are convincing: Predicting multiple related tasks should allow us to exploit any correlations between the predictions. However, in much of this work, an MTL model is only one possible explanation for improved accuracy. Another more salient factor has frequently been overlooked: The difference in the expressivity of the model class, i.e., neural architectures vs. discriminative or generative models, and critically, differences in the number of parameters for comparable models. Some comparisons might therefore have inadvertently compared apples to oranges. In the interest of examining the effect of multitask learning specifically, we compare the multitask predictions to models with equal expressivity. We evaluate the performance of a standard logistic regression model (a standard approach to text-classification problems), a multilayer perceptron single-task learning (STL) model, and a neural MTL model, the latter two with equal numbers of parameters. This ensures a fair comparison by isolating the unique properties of MTL from the dimensionality-reduction aspects of deep architectures in general.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
{
"text": "The neural models we evaluate come in two forms. The first, depicted in plate notation on the left in Figure 1 , are the STL models. These are feedforward networks with two hidden layers, trained independently to predict each task. On the right in Figure 1 is the MTL model, where the first hidden layer from the bottom is shared between all tasks. An additional per-task hidden layer is used to give the model flexibility to map from the task-agnostic representation to a task-specific one. Each hidden layer uses a rectified linear unit as non-linearity. The output layer uses a logistic non-linearity, since all tasks are binary predictions. The MTL model can easily be extended to a stack of shared hidden layers, allowing for a more complicated mapping from input to shared space. 4 As noted in Collobert et al. 2011, MTL benefits from mini-batch training, which both allows optimization to jump out of poor local optima, and more stochastic gradient steps in a fixed amount of time (Bottou, 2012) . We create mini-batches by sampling from the users in our data, where each user has some subset of the conditions we are trying to predict, and may or may not be annotated with gender. At each mini-batch gradient step, we update weights for all tasks. This not only allows for randomization and faster convergence, it also provides a speed-up over the individual selection process reported in earlier work (Collobert et al., 2011) .",
"cite_spans": [
{
"start": 786,
"end": 787,
"text": "4",
"ref_id": null
},
{
"start": 988,
"end": 1002,
"text": "(Bottou, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 1410,
"end": 1434,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": null
},
{
"start": 248,
"end": 256,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
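The shared-plus-task-specific architecture described above can be sketched as a forward pass. This is a minimal NumPy illustration, not the authors' Keras implementation; the layer sizes and task names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MTLNet:
    """One shared ReLU layer feeding a per-task ReLU layer and sigmoid output.
    Layer sizes and task names are illustrative, not the paper's settings."""

    def __init__(self, n_features, n_hidden, tasks):
        self.W_shared = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.heads = {
            t: (rng.normal(0.0, 0.1, (n_hidden, n_hidden)),  # task-specific hidden layer
                rng.normal(0.0, 0.1, (n_hidden, 1)))         # task-specific output layer
            for t in tasks
        }

    def predict(self, X):
        h = relu(X @ self.W_shared)  # representation shared across all tasks
        return {t: sigmoid(relu(h @ W_t) @ w_out).ravel()
                for t, (W_t, w_out) in self.heads.items()}

net = MTLNet(n_features=5000, n_hidden=16,
             tasks=["anxiety", "depression", "suicide_attempt"])
X = rng.random((4, 5000))  # 4 users, char n-gram relative frequencies
probs = net.predict(X)     # one probability per user per task
```

Dropping the shared layer and training each head on its own input layer recovers the STL variant with a comparable parameter count.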
{
"text": "Another advantage of this setup is that we do not need complete information for every instance: Learning can proceed with asynchronous updates, dependent on what the data in each batch has been annotated for, while sharing representations throughout. This effectively learns a joint model with a common representation for several different tasks, allowing the use of several \"disjoint\" data sets, some with limited annotated instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
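One way to realize these asynchronous updates is a masked per-task loss, where unannotated (user, task) pairs contribute nothing to the gradient. The sketch below is our reading of the setup, not code from the paper:

```python
import numpy as np

def masked_bce(y_true, y_pred, mask):
    """Binary cross-entropy averaged only over labeled (user, task) pairs.
    Tasks a user is not annotated for have mask 0 and contribute no loss."""
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    losses = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return (losses * mask).sum() / mask.sum()

# Toy batch: 3 users x 2 tasks; user 2's second task is unannotated.
y    = np.array([[1., 0.], [0., 1.], [1., 0.]])
mask = np.array([[1., 1.], [1., 1.], [1., 0.]])
pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.5]])
loss = masked_bce(y, pred, mask)
```

Because the masked cell is ignored, changing the model's prediction for an unannotated pair leaves the loss (and hence the gradient) unchanged.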
{
"text": "Optimization and Model Selection Even in a relatively simple neural model, there are a number of hyperparameters that can (and have to) be tuned to achieve good performance. We perform a line search for every model we use, sweeping over L 2 regularization and hidden layer width. We select the best model based on the development loss. Figure 4 shows the performance on the corresponding test sets (plot smoothed by rolling mean of 10 for visibility).",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
{
"text": "In our experiments, we sweep over the L2 regularization constant applied to all weights in {10 \u22124 , 10 \u22123 , 10 \u22122 , 0.1, 0.5, 1.0, 5.0, 10.0}, and hidden layer width (same for all layers in the network) in {16, 32, 64, 128, 256, 512, 1024, 2048}. We fix the mini-batch size to 256, and 0.05 dropout on the input layer. Choosing a small minibatch size and the model with lowest development loss helps to account for overfitting. We train each model for 5,000 iterations, jointly updating all weights in our models. After this initial joint training, we select each task separately, and only update the task-specific layers of weights independently for another 1,000 iterations (selecting the set of weights achieving lowest development loss for each task individually). Weights are updated using mini-batch Adagrad (Duchi et al., 2011 ) -this converges more quickly than other optimization schemes we experimented with. We evaluate the tuning loss every 10 epochs, and select the model with the lowest tuning loss.",
"cite_spans": [
{
"start": 206,
"end": 246,
"text": "{16, 32, 64, 128, 256, 512, 1024, 2048}.",
"ref_id": null
},
{
"start": 814,
"end": 833,
"text": "(Duchi et al., 2011",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "3"
},
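The sweep amounts to picking the configuration with the lowest development loss over the two grids. A minimal sketch, using a full grid rather than a per-axis line search, with a hypothetical stand-in for the actual training run:

```python
from itertools import product

# Sweep grids taken from the setup above.
L2_GRID = [1e-4, 1e-3, 1e-2, 0.1, 0.5, 1.0, 5.0, 10.0]
WIDTH_GRID = [16, 32, 64, 128, 256, 512, 1024, 2048]

def select_model(train_and_eval):
    """train_and_eval(l2, width) -> development loss.
    Returns the configuration with the lowest development loss."""
    best = min(product(L2_GRID, WIDTH_GRID),
               key=lambda cfg: train_and_eval(*cfg))
    return {"l2": best[0], "hidden_width": best[1]}

# Hypothetical loss surface standing in for an actual training run.
toy_loss = lambda l2, w: abs(l2 - 0.1) + abs(w - 256) / 1000
best = select_model(toy_loss)
```

In practice `train_and_eval` would train the network for the fixed iteration budget and return the tuning-fold loss.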
{
"text": "We train our models on a union of multiple Twitter user datasets: 1) users identified as having anxiety, bipolar disorder, depression, panic disorder, eating disorder, PTSD, or schizophrenia (Coppersmith et al., 2015a) , 2) those who had attempted suicide (Coppersmith et al., 2015c) , and 3) those identified as having either depression or PTSD from the 2015 Computational Linguistics and Clinical Psychology Workshop shared task (Coppersmith et al., 2015b) , along with neurotypical gendermatched controls (Twitter users not identified as having a mental condition). Users were identified as having one of these conditions if they stated explicitly they were diagnosed with this condition on Twitter (verified by a human annotator), and the data was pre-processed to remove direction indications of the condition. For a subset of 1,101 users, we also manually-annotate gender. The final dataset contains 9,611 users in total, with an average of 3521 tweets per user. The number of users with each condition is included in Table 1 . Users in this joined dataset may be tagged with multiple conditions, thus the counts in this table do not sum to the total number of users.",
"cite_spans": [
{
"start": 191,
"end": 218,
"text": "(Coppersmith et al., 2015a)",
"ref_id": "BIBREF6"
},
{
"start": 256,
"end": 283,
"text": "(Coppersmith et al., 2015c)",
"ref_id": "BIBREF8"
},
{
"start": 431,
"end": 458,
"text": "(Coppersmith et al., 2015b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1024,
"end": 1031,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
{
"text": "We use the entire Twitter history of each user as input to the model, and split it into character 1-to-5-grams, which have been shown to capture more information than words for many Twitter text classification tasks (Mcnamee and Mayfield, 2004; Coppersmith et al., 2015a) . We compute the relative frequency of the 5,000 most frequent n-gram features for n \u2208 {1, 2, 3, 4, 5} in our data, and then feed this as input to all models. This input representation is common to all models, allowing for fair comparison.",
"cite_spans": [
{
"start": 216,
"end": 244,
"text": "(Mcnamee and Mayfield, 2004;",
"ref_id": "BIBREF20"
},
{
"start": 245,
"end": 271,
"text": "Coppersmith et al., 2015a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4"
},
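The input representation above can be sketched as follows. The function and parameter names are ours; in the paper the vocabulary is the 5,000 most frequent n-grams over the whole corpus, not per user:

```python
from collections import Counter

def char_ngram_features(text, n_values=(1, 2, 3, 4, 5), vocab=None, top_k=5000):
    """Relative frequencies of character 1-to-5-grams.
    If no vocabulary is given, fall back to this text's top_k n-grams."""
    counts = Counter(
        text[i:i + n]
        for n in n_values
        for i in range(len(text) - n + 1)
    )
    if vocab is None:  # in practice: the 5,000 most frequent n-grams corpus-wide
        vocab = [g for g, _ in counts.most_common(top_k)]
    total = sum(counts.values())
    return [counts[g] / total for g in vocab], vocab

feats, vocab = char_ngram_features("feeling anxious today")
```

Each user's entire history is concatenated before extraction, and the resulting fixed-length vector is the common input to LR, STL, and MTL models.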
{
"text": "Our task is to predict suicide attempt and mental conditions for each of the users in these data. We evaluate three classes of models: baseline logistic regression over character n-gram features (LR), feed-forward multilayer perceptrons trained to predict each task separately (STL), and feedforward multi-task models trained to predict a set of conditions simultaneously (MTL). We experiment with a feed-forward network against independent logistic regression models as a way to directly test the hypothesis that MTL may work well in this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We also perform ablation experiments to see which subsets of tasks help us learn an MTL model that predicts a particular mental condition best. For all experiments, data were divided into five equal-sized folds, three for training, one for tuning, and one for testing (we report the performance on this).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "All our models are implemented in Keras 5 with Theano backend and GPU support. We train the models for a total of up to 15,000 epochs, using mini-batches of 256 instances. Training time on all five training folds ranged from one to eight hours on a machine with Tesla K40M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We compare the accuracy of each model at predicting each task separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "In clinical settings, we are interested in minimizing the number of false positives, i.e., incorrect diagnoses, which can cause undue stress to the patient. We are thus interested in bounding this quantity. To evaluate the performance, we plot the false positive rate (FPR) against the true positive rate (TPR). This gives us a receiver operating characteristics (ROC) curve, allowing us to inspect the performance of each model on a specific task at any level of FPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
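Reading the true positive rate at a bounded false positive rate off the ROC can be done with a plain threshold scan. This sketch is ours, not the authors' evaluation code:

```python
def tpr_at_fpr(y_true, scores, max_fpr=0.1):
    """True positive rate at a bounded false positive rate, via a plain
    threshold scan (an illustrative sketch, not optimized ROC code)."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    best_tpr = 0.0
    for thresh in sorted(set(scores), reverse=True):
        fpr = sum(s >= thresh for s in neg) / len(neg)
        if fpr <= max_fpr:  # threshold keeps false alarms within the bound
            best_tpr = max(best_tpr, sum(s >= thresh for s in pos) / len(pos))
    return best_tpr

# Toy example: 4 diagnosed users, 6 controls (scores are made up).
y = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.2, 0.6, 0.3, 0.3, 0.2, 0.1, 0.1]
```

Sweeping `thresh` over all scores and recording (FPR, TPR) pairs traces out the full ROC curve.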
{
"text": "While the ROC gives us a sense of how well a model performs at a fixed true positive rate, it makes it difficult to compare the individual tasks at a low false positive rate, which is also important for clinical application. We therefore report two more measures: the area under the ROC curve (AUC) and TPR performance at FPR=0.1 (TPR@FPR=0.1). We do not compare our models to a majority baseline model, since this model would achieve an expected AUC of 0.5 for all tasks, and F-score and TPR@FPR=0.1 of 0 for 6 Results Figure 2 shows the AUC-score of each model for each task separately, and Figure 3 the true positive rate at a low false positive rate of 0.1. Precisionrecall curves for model/task are in Figure 5 . STL is a multilayer perceptron with two hidden layers (with a similar number of parameters as the proposed MTL model). The MTL +gender and MTL models predict all tasks simultaneously, but are only evaluated on the main respective task.",
"cite_spans": [],
"ref_spans": [
{
"start": 520,
"end": 528,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 593,
"end": 601,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 707,
"end": 715,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "N E U R O T Y P I C A L A N X I E T Y D E P R E S S I O N S U I C I D E A T T E M P T E A T I N G S C H I Z O P H R E N I A P A N I C P T S D B I P O L A R L A B E L E D M A L E L A B E L E D F E M A L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "Both AUC and TPR (at FPR=0.1) demonstrate that single-task models models do not perform nearly as well as multi-task models or logistic regression. This is likely because the neural networks learned by STL cannot be guided by the inductive bias provided by MTL training. Note, however, that STL and MTL are often times comparable in terms of F1-score, where false positives and false negatives are equally weighted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "As shown Figure 2 , multi-task suicide predictions reach an AUC of 0.848, and predictions for anxiety and schizophrenia are not far behind. Interestingly however, schizophrenia stands out as being the only condition to be best predicted with a single-task model. MTL models show improvements over STL and LR models for predicting suicide, neuroatypicality, depression, anxiety, panic, bipolar disorder, and PTSD. The inclusion of gender in the MTL models leads to direct gains over an LR baseline in predicting anxiety disorders: anxiety, panic, and PTSD.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "In Figure 3 , we illustrate the true positive rate -that is, how many cases of mental health conditions that we correctly predict -given a low false positive rate -that is, a low rate of predicting people have mental health conditions when they do not. This is particularly useful in clinical settings, where clinicians seek to minimize overdiagnosing. In this setting, MTL leads to the best performance across the board, for all tasks under consideration: Neuroatypicality, suicide, depression, anxiety, eating, panic, schizophrenia, bipolar disorder, and PTSD. Including gender in MTL further improves performance for neuroatypicality, suicide, anxiety, schizophrenia, bipolar disorder, and PTSD. nificantly improved by having the model also predict comorbid conditions with substantially more data: depression and anxiety. We are able to increase the AUC for predicting PTSD to 0.786 by MTL, from 0.770 by LR, whereas STL fails to perform as well with an AUC of 0.667. Similarly for predicting bipolar disorder (MTL:0.723, LR:0.752, STL:0.552) and panic attack (MTL:0.724, LR:0.713, STL:0.631).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
{
"text": "These differences in AUC are significant at p = 0.05 according to bootstrap sampling tests with 5000 samples. The wide difference between MTL and STL can be explained in part by the increased feature set size -MTL training may, in this case, provide a form of regularization that STL cannot exploit. Further, modeling the common mental health conditions with the most data (depression, anxiety) helps in pulling out more rare conditions comorbid with these common health conditions. This provides evidence that an MTL model can help in predicting elusive conditions by using large data for common conditions, and a small amount of data for more rare conditions. Figures 2 and 3 both suggest that adding gender as an auxiliary task leads to more predictive models, even though the difference is not statistically significant for most tasks. This is consistent with the findings in previous work (Volkova et al., 2013; Hovy, 2015) . Interestingly, though, the MTL model is worse at predicting gender itself. While this could be a direct result of data sparsity (recall that we have only a small subset annotated for gender), which could be remedied by annotating additional users for gender, this appears unlikely given the other findings of our experiments, where MTL helped in specifically these sparse scenarios.",
"cite_spans": [
{
"start": 895,
"end": 917,
"text": "(Volkova et al., 2013;",
"ref_id": "BIBREF35"
},
{
"start": 918,
"end": 929,
"text": "Hovy, 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 662,
"end": 678,
"text": "Figures 2 and 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": null
},
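A bootstrap test of an AUC difference, as used for the significance claims above, can be sketched as a paired resampling over users. The resampling details here are our assumptions, not the paper's exact procedure:

```python
import random

def auc(y, s):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = [x for x, t in zip(s, y) if t == 1]
    neg = [x for x, t in zip(s, y) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_diff(y, s_a, s_b, n_samples=5000, seed=0):
    """Paired bootstrap over users: fraction of resamples in which model A's
    AUC does not beat model B's (a one-sided p-value sketch)."""
    rng = random.Random(seed)
    idx = range(len(y))
    worse = 0
    for _ in range(n_samples):
        sample = [rng.choice(idx) for _ in idx]
        ys = [y[i] for i in sample]
        if len(set(ys)) < 2:  # resample lacks one of the classes; skip it
            continue
        if auc(ys, [s_a[i] for i in sample]) <= auc(ys, [s_b[i] for i in sample]):
            worse += 1
    return worse / n_samples

# Toy comparison: a perfectly separating model A vs. an uninformative model B.
y = [1] * 10 + [0] * 10
p = bootstrap_auc_diff(y, [0.9] * 10 + [0.1] * 10, [0.5] * 20, n_samples=200)
```

With the paper's 5,000 resamples, a fraction below 0.05 corresponds to significance at p = 0.05.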
{
"text": "However, it has been pointed out by Caruana (1996) that not all tasks benefit from a MTL setting in the same way, and that some tasks serve purely auxiliary functions. Here, gender prediction does not benefit from including mental conditions, but helps vice versa. In other words, predicting gender is qualitatively different from predicting mental health conditions: it seems likely that the signals for anxiety ares much more similar to the ones for depression than for, say, being male, and can therefore add to detecting depression. However, the distinction between certain conditions does not add information for the distinction of gender. The effect may also be due to the fact that these data were constructed with inferred gender (used to match controls), so there might be a degree of noise in the data.",
"cite_spans": [
{
"start": 36,
"end": 50,
"text": "Caruana (1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "Choosing Tasks Although MTL tends to dominate STL in our experiments, it is not clear whether modeling several tasks provide a beneficial bias in MTL models in general, or if there exists specific subsets of auxiliary tasks that are most beneficial for predicting suicide risk and related mental health conditions. We perform ablation experiments by training MTL models on a subset of auxiliary tasks, and prediction for a single main task. We focus on four conditions to predict well: suicide attempt, anxiety, depression, and bipolar disorder. For each main task, we vary the auxiliary tasks we train the MTL model with. Since considering all possible subsets of tasks is combinatorily unfeasible, we choose the following task subsets as auxiliary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "\u2022 all: all mental conditions along with gender \u2022 all conds: all mental conditions, no gender \u2022 neuro: only neurotypicality \u2022 neuro+mood: neurotypicality, depression, and bipolar disorder (mood disorders) \u2022 neuro+anx: neurotypicality, anxiety, and panic attack (anxiety conditions) \u2022 neuro+targets: neurotypicality, anxiety, depression, suicide attempt, bipolar disorder \u2022 none: no auxiliary tasks, equivalent to STL Table 2 shows AUC for the four prediction tasks with different subsets of auxiliary tasks. Statistically significant improvements over the respective LR baselines are denoted by superscript. Restricting the auxiliary tasks to a small subset tends to Figure 4 : ROC curves for predicting each condition. The precision (diagnosed, correctly labeled) is on the y-axis, while the proportion of false alarms (control users mislabeled as diagnosed) is on the x-axis. Chance performance is indicated by the dotted diagonal line. hurt performance for most tasks, with exception to bipolar, which benefits from the prediction of depression and suicide attempt. All main tasks achieve their best performance using the full set of additional tasks as auxiliary. This suggests that the biases induced by predicting different kinds of mental conditions are mutually beneficial -e.g., multi-task models that predict suicide attempt may also be good at predicting anxiety.",
"cite_spans": [],
"ref_spans": [
{
"start": 416,
"end": 423,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 666,
"end": 674,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "Based on these results, we find it useful to think of MTL as a framework to leverage auxiliary tasks as regularization to effectively combat data paucity and less-than-trustworthy labels. As we have demonstrated, this may be particularly useful when predicting mental health conditions and suicide risk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "7 Discussion: Multi-task Learning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "Our results indicate that an MTL framework can lead to significant gains over single-task models for predicting suicide risk and several mental health conditions. We find benefit from predict-ing related mental conditions and demographic attributes simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "We experimented with all the optimizers that Keras provides, and found that Adagrad seems to converge fastest to a good optimum, although all the adaptive learning rate optimizers (such as Adam, etc.) tend to converge quickly. This indicates that the gradient is significantly steeper along certain parameters than others. Default stochastic gradient descent (SGD) was not able to converge as quickly, since it is not able to adaptively scale the learning rate for each parameter in the modeltaking too small steps in directions where the gradient is shallow, and too large steps where the gradient is steep. We further note an interesting behavior: all of the adaptive learning rate optimizers yield a strange \"step-wise\" training loss learning curve, which hits a plateau, but then drops after about 900 iterations, only to hit another plateau, and so on. Obviously, we would prefer to have a smooth training loss curve. We can indeed achieve this using SGD, but it takes much longer to con- verge than, for example, Adagrad. This suggests that a well-tuned SGD would be the best optimizer for this problem, a step that would require some more experimentation and is left for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "We also found that feature counts have a pronounced effect on the loss curves: Relative feature frequencies yield models that are much easier to train than raw feature counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "Feature representations are therefore another area of optimization, e.g., different ranges of character n-grams (e.g., n > 5) and unigrams. We used character 1-to-5-grams, since we believe that these features generalize better to a new domain (e.g., Facebook) than word unigrams. However, there is no fundamental reason not to choose longer character n-grams, other than time constraints in regenerating the data, and accounting for overfitting with proper regularization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "Initialization is a decisive factor in neural models, and Goldberg (2015) recommends repeated restarts with differing initializations to find the optimal model. In an earlier experiment, we tried initializing a MTL model (without task-specific hidden layers) with pretrained word2vec embeddings of unigrams trained on the Google News n-gram corpus. However, we did not notice an improvement in F-score. This could be due to the other factors, though, such as feature sparsity. Table 3 shows parameters sweeps with hidden layer width 256, training the MTL model on the social media data with character trigrams as input features. The sweet spots in this table may be good starting points for training models in future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 477,
"end": 484,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Utility of Authorship Attributes",
"sec_num": null
},
{
"text": "MTL was introduced by Caruana (1993) , based on the observation that humans rarely learn things in isolation, and that it is the similarity between related tasks that helps us get better.",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "Caruana (1993)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Some of the first works on MTL were motivated by medical risk prediction , and it is now being rediscovered for this purpose (Lipton et al., 2016) . The latter use a long shortterm memory (LSTM) structure to provide several medical diagnoses from health care features (yet no textual or demographic information), and find small, but probably not significant improvements over a structure similar to the STL we use here. 5.1 10 \u22123 2.8 32 3.0 5 * 10 \u22124 2.9 5 * 10 \u22123 2.8 64 3.0 10 \u22123 2.9 10 \u22122 2.9 128 2.9 5 * 10 \u22123 2.4 5 * 10 \u22122 3.1 256 2.9 10 \u22122 2.3 0.1 3.4 512 3.0 5 * 10 \u22122 2.2 0.5 4.6 1024 3.0 0.1 20.2 1.0 4.9 Table 3 : Average dev loss over epochs 990-1000 of joint training on all tasks as a function of different learning parameters. Optimized using Adagrad with hidden layer width 256.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Lipton et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 614,
"end": 621,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "The target in previous work was medical conditions as detected in patient records, not mental health conditions in social text. The focus in this work has been on the possibility of predicting suicide attempt and other mental health conditions using social media text that a patient may already be writing, without requiring full diagnoses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "The framework proposed by Collobert et al. (2011) allows for predicting any number of NLP tasks from a convolutional neural network (CNN) representation of the input text. The model we present is much simpler: A feed-forward network with n-gram input layer, and we demonstrate how to constrain n-gram embeddings for clinical application. Comparing with further models is possible, but distracts from the question of whether MTL training can help in this domain. As we have shown, it can.",
"cite_spans": [
{
"start": 26,
"end": 49,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In this paper, we develop neural MTL models for 10 prediction tasks (suicide, seven mental health conditions, neurotypicality, and gender). We compare their performance with STL models trained to predict each task independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Our results show that an MTL model with all task predictions performs significantly better than other models, reaching 0.846 TPR for neuroatypicality where FPR=0.1, and AUC of 0.848, TPR of 0.559 for suicide. Due to the nature of MTL, we find additional contributions that were not the original goal of this work: Pronounced gains in detecting anxiety, PTSD, and bipolar disorder. MTL predictions for anxiety, for example, reduce the error rate from a single-task model by up to 11.9%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "We also investigate the influence of model depth, comparing to progressively deeper STL feed-forward networks with the same number of parameters. We find: (1) Most of the modeling power stems from the expressivity conveyed by deep architectures. (2) Choosing the right set of auxiliary tasks for a given mental condition can yield a significantly better model. 3The MTL model dramatically improves for conditions with the smallest amount of data. (4) Gender prediction does not follow the two previous points, but improves performance as an auxiliary task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Accuracy of the MTL approach is not yet ready to be used in isolation in the clinical setting. However, our experiments suggest this is a promising direction moving forward. There are strong gains to be made in using multi-task learning to aid clinicians in their evaluations, and with further partnerships between the clinical and machine learning community, we foresee improved suicide prevention efforts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Communication with clinicians at the 2016 JSALT workshop(Hollingshead, 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with a graphical model architecture, but found that it did not scale as well and provided less traction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We tried training a 4-shared-layer MTL model to predict targets on a separate dataset, but did not see any gains over the standard 1-shared-layer MTL model in our application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://keras.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to Kristy Hollingshead Seitz, Glen Coppersmith, and H. Andrew Schwartz, as well as the organizers of the Johns Hopkins Jelinek Summer School 2016. We are also grateful for the invaluable feedback on MTL from Yoav Goldberg, Stephan Gouws, Ed Greffenstette, Karl Moritz Hermann, and Anders S\u00f8gaard. The work reported here was started at JSALT 2016, and was supported by JHU via grants from DARPA (LORELEI), Microsoft, Amazon, Google and Facebook.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Stochastic gradient tricks. Neural Networks, Tricks of the Trade, Reloaded",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "430--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 2012. Stochastic gradient tricks. Neural Networks, Tricks of the Trade, Reloaded, pages 430- 445.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using the future to \"sort out\" the present: Rankprop and multitask learning for medical risk evaluation",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Shumeet",
"middle": [],
"last": "Baluja",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1996,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "959--965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana, Shumeet Baluja, Tom Mitchell, et al. 1996. Using the future to \"sort out\" the present: Rankprop and multitask learning for medical risk evaluation. Advances in neural information process- ing systems, pages 959-965.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multitask learning: A knowledge-based source of inductive bias",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Tenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Pro- ceedings of the Tenth International Conference on Machine Learning, pages 41-48.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Algorithms and applications for multitask learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1996,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "87--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1996. Algorithms and applications for multitask learning. In ICML, pages 87-95.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Gender Inference of Twitter Users in Non-English Contexts",
"authors": [
{
"first": "Morgane",
"middle": [],
"last": "Ciot",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Sonderegger",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "18--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender Inference of Twitter Users in Non- English Contexts. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18-21.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From ADHD to SAD: Analyzing the language of mental health on Twitter through self-reported diagnoses",
"authors": [
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glen Coppersmith, Mark Dredze, Craig Harman, and Kristy Hollingshead. 2015a. From ADHD to SAD: Analyzing the language of mental health on Twit- ter through self-reported diagnoses. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 1-10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Clpsych 2015 shared task: Depression and ptsd on twitter",
"authors": [
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015b. Clpsych 2015 shared task: Depression and ptsd on twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 31-39. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Quantifying suicidal ideation via language usage on social media",
"authors": [
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Leary",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Whyne",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Wood",
"suffix": ""
}
],
"year": 2015,
"venue": "Joint Statistics Meetings Proceedings, Statistical Computing Section",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glen Coppersmith, Ryan Leary, Eric Whyne, and Tony Wood. 2015c. Quantifying suicidal ideation via language usage on social media. In Joint Statistics Meetings Proceedings, Statistical Computing Sec- tion, JSM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploratory analysis of social media prior to a suicide attempt",
"authors": [
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Ngo",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Leary",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Wood",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on Computational Lingusitics and Clinical Psychology",
"volume": "",
"issue": "",
"pages": "106--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glen Coppersmith, Kim Ngo, Ryan Leary, and An- thony Wood. 2016. Exploratory analysis of social media prior to a suicide attempt. In Proceedings of the Third Workshop on Computational Lingusitics and Clinical Psychology, pages 106-117. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Predicting depression via social media",
"authors": [
{
"first": "Munmun",
"middle": [],
"last": "De Choudhury",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Counts",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depres- sion via social media.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A primer on neural network models for natural language processing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.00726"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg. 2015. A primer on neural net- work models for natural language processing. arXiv preprint arXiv:1510.00726.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stylometric analysis of bloggers' age and gender",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Sudeshna",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Mayur",
"middle": [],
"last": "Rustagi",
"suffix": ""
}
],
"year": 2009,
"venue": "Third International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Goswami, Sudeshna Sarkar, and Mayur Rustagi. 2009. Stylometric analysis of bloggers' age and gender. In Third International AAAI Conference on Weblogs and Social Media.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting risk and protective factors of mental health using social media linked with electronic health records",
"authors": [
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2016,
"venue": "JSALT 2016 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristy Hollingshead. 2016. Detecting risk and pro- tective factors of mental health using social media linked with electronic health records. In JSALT 2016 Workshop. Johns Hopkins University.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The social impact of natural language processing",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Shannon",
"middle": [
"L"
],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "The 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The so- cial impact of natural language processing. In The 54th Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Demographic factors improve classification performance",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "752--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy. 2015. Demographic factors improve clas- sification performance. In Proceedings of ACL, pages 752-762.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Topic model for identifying suicidal ideation in chinese microblog",
"authors": [
{
"first": "Xiaolei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tianli",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Tingshao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "553--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaolei Huang, Xin Li, Tianli Liu, David Chiu, Ting- shao Zhu, and Lei Zhang. 2015. Topic model for identifying suicidal ideation in chinese microblog. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, pages 553-562.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to diagnose with lstm recurrent neural networks",
"authors": [
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
},
{
"first": "David",
"middle": [
"C"
],
"last": "Kale",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
},
{
"first": "Randall",
"middle": [],
"last": "Wetzell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzell. 2016. Learning to diagnose with lstm recurrent neural networks. In Proceedings of ICLR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "What's in a name? Using first names as features for gender inference in Twitter",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2013,
"venue": "Analyzing Microtext: 2013 AAAI Spring Symposium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wendy Liu and Derek Ruths. 2013. What's in a name? Using first names as features for gender inference in Twitter. In Analyzing Microtext: 2013 AAAI Spring Symposium.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Character n-gram tokenization for european language text retrieval",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
}
],
"year": 2004,
"venue": "Information retrieval",
"volume": "7",
"issue": "1-2",
"pages": "73--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Mcnamee and James Mayfield. 2004. Charac- ter n-gram tokenization for european language text retrieval. Information retrieval, 7(1-2):73-97.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Quantifying the language of schizophrenia in social media",
"authors": [
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margaret Mitchell, Kristy Hollingshead, and Glen Coppersmith. 2015. Quantifying the language of schizophrenia in social media. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 11-20, Denver, Colorado, June 5. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Predicting Author Gender and Age from Tweets: Sociolinguistic Theories and Crowd Wisdom",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Dolf",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Seza Dogru\u00f6z",
"suffix": ""
},
{
"first": "Rilana",
"middle": [],
"last": "Gravel",
"suffix": ""
},
{
"first": "Mariet",
"middle": [],
"last": "Theune",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Meder",
"suffix": ""
},
{
"first": "Franciska",
"middle": [
"De"
],
"last": "Jong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Nguyen, Dolf Trieschnigg, A. Seza Dogru\u00f6z, Ri- lana Gravel, Mariet Theune, Theo Meder, and Fran- ciska De Jong. 2014. Predicting Author Gender and Age from Tweets: Sociolinguistic Theories and Crowd Wisdom. In Proceedings of COLING 2014.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic personality assessment through social media language",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Johannes",
"middle": [
"C"
],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "Margaret",
"middle": [
"L"
],
"last": "Kern",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Stillwell",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Kosinski",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lyle",
"suffix": ""
},
{
"first": "Martin",
"middle": [
"Ep"
],
"last": "Ungar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Seligman",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Personality and Social Psychology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Park, H Andrew Schwartz, Johannes C Eich- staedt, Margaret L Kern, David J Stillwell, Michal Kosinski, Lyle H Ungar, and Martin EP Seligman. 2015. Automatic personality assessment through social media language. Journal of Personality and Social Psychology.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Screening twitter users for depression and ptsd with lexical decision lists",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "46--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen. 2015. Screening twitter users for de- pression and ptsd with lexical decision lists. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguis- tic Signal to Clinical Reality, pages 46-53. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Personality traits on twitter-or-how to get 1,500 personality tests in a week",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "92--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank and Dirk Hovy. 2015. Personality traits on twitter-or-how to get 1,500 personality tests in a week. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sen- timent and Social Media Analysis, pages 92-98.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The role of personality, age and gender in tweeting about mental illnesses",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preo\u0163iuc-Pietro",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Victoria",
"middle": [],
"last": "Tobolsky",
"suffix": ""
},
{
"first": "Hansen",
"middle": [
"Andrew"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Lyle",
"middle": [
"H"
],
"last": "Ungar",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Preo\u0163iuc-Pietro, Johannes Eichstaedt, Gregory Park, Maarten Sap, Laura Smith, Victoria Tobolsky, Hansen Andrew Schwartz, and Lyle H Ungar. 2015. The role of personality, age and gender in tweet- ing about mental illnesses. In Proceedings of the Workshop on Computational Linguistics and Clini- cal Psychology: From Linguistic Signal to Clinical Reality, NAACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "An analysis of the user occupational class through twitter content",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preotiuc-Pietro",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Lampos",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Preotiuc-Pietro, Vasileios Lampos, and Niko- laos Aletras. 2015. An analysis of the user occu- pational class through twitter content. In ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Age prediction in blogs: A study of style, content, and online behavior in pre-and post-social media generations",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "763--772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre-and post-social media genera- tions. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies-Volume 1, pages 763- 772. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "On dual decomposition and linear programming relaxations for natural language processing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Alexander",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural lan- guage processing. In Proceedings of the 2010 Con- ference on Empirical Methods in Natural Language Processing, pages 1-11. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Gender attribution: tracing stylometric evidence beyond topic and genre",
"authors": [
{
"first": "Ruchita",
"middle": [],
"last": "Sarawgi",
"suffix": ""
},
{
"first": "Kailash",
"middle": [],
"last": "Gajulapalli",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. Gender attribution: tracing stylometric evi- dence beyond topic and genre. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 78-86. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Toward personality insights from language exploration in social media",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Johannes",
"middle": [
"C"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "Margaret",
"middle": [
"L"
],
"last": "Dziurzynski",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Kern",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kosinski",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stillwell",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Lyle",
"middle": [
"H"
],
"last": "Seligman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2013,
"venue": "AAAI Spring Symposium: Analyzing Microtext",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansen Andrew Schwartz, Johannes C Eichstaedt, Lukasz Dziurzynski, Margaret L Kern, Eduardo Blanco, Michal Kosinski, David Stillwell, Martin EP Seligman, and Lyle H Ungar. 2013. Toward per- sonality insights from language exploration in social media. In AAAI Spring Symposium: Analyzing Mi- crotext.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Towards assessing changes in degree of depression through Facebook",
"authors": [
{
"first": "Andrew",
"middle": [
"H"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "L",
"middle": [
"Margaret"
],
"last": "Kern",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Stillwell",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Kosinski",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "118--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew H. Schwartz, Johannes Eichstaedt, L. Mar- garet Kern, Gregory Park, Maarten Sap, David Still- well, Michal Kosinski, and Lyle Ungar. 2014. To- wards assessing changes in degree of depression through Facebook. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychol- ogy: From Linguistic Signal to Clinical Reality, pages 118-125. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Deep multi-task learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "The 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In The 54th Annual Meeting of the Association for Computational Linguistics, page 231. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Khashayar",
"middle": [],
"last": "Rohanimanesh",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Machine Learning Research",
"volume": "8",
"issue": "",
"pages": "693--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. Journal of Machine Learning Research, 8(Mar):693-723.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Exploring demographic language variations to improve multilingual sentiment analysis in social media",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1815--1827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment anal- ysis in social media. In Proceedings of EMNLP, pages 1815-1827.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Inferring user political preferences from streaming communications",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd annual meeting of the ACL",
"volume": "",
"issue": "",
"pages": "186--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political prefer- ences from streaming communications. In Proceed- ings of the 52nd annual meeting of the ACL, pages 186-196.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Inferring latent user properties from texts published in social media (demo)",
"authors": [
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Bachrach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Armstrong",
"suffix": ""
},
{
"first": "Vijay",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svitlana Volkova, Yoram Bachrach, Michael Arm- strong, and Vijay Sharma. 2015. Inferring latent user properties from texts published in social media (demo). In Proceedings of the Twenty-Ninth Confer- ence on Artificial Intelligence (AAAI), Austin, TX, January.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Gender and Women's Mental Health",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "World Health Organization WHO. 2016. Gender and Women's Mental Health.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "AUC for different main tasks MTL Leveraging Comorbid Conditions Improves Prediction Accuracy We find that the prediction of the conditions with the least amount of data -bipolar disorder and PTSD -are sig-"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "TPR at 0.10 FPR for different main tasks"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Precision-recall curves for predicting each condition."
},
"TABREF1": {
"content": "<table><tr><td>all mental conditions -users exhibiting a condi-</td></tr><tr><td>tion are the minority, meaning a majority baseline</td></tr><tr><td>classifier would achieve zero recall.</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Frequency and comorbidity across mental health conditions."
},
"TABREF2": {
"content": "<table><tr><td>all conds</td><td>0.786</td><td>0.743 \u2020</td><td colspan=\"2\">0.772 \u2020 0.833 * \u2020</td></tr><tr><td>neuro</td><td>0.763</td><td>0.740 \u2020</td><td>0.759</td><td>0.797</td></tr><tr><td>neuro+mood</td><td>0.756</td><td>0.742 \u2020</td><td>0.761</td><td>0.804</td></tr><tr><td>neuro+anx</td><td>0.770</td><td>0.744 \u2020</td><td>0.746</td><td>0.792</td></tr><tr><td>neuro+targets</td><td>0.750</td><td>0.747 \u2020</td><td>0.764</td><td>0.817</td></tr><tr><td>none (STL)</td><td>0.777</td><td>0.552</td><td>0.749</td><td>0.810</td></tr><tr><td>LR</td><td>0.791</td><td>0.723 \u2020</td><td>0.763</td><td>0.817</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "813 * \u2020 0.752 * \u2020 0.769 \u2020 0.835 * \u2020"
},
"TABREF3": {
"content": "<table><tr><td>Learning Loss</td><td>L2 Loss Hidden Loss</td></tr><tr><td>Rate</td><td>Width</td></tr><tr><td>10 \u22124</td><td/></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Test AUC when predicting Main Task after training to predict a subset of auxiliary tasks. Significant improvement over LR baseline at p = 0.05 is denoted by * , and over no auxiliary tasks (STL) by \u2020 ."
}
}
}
}