{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:31:05.009562Z"
},
"title": "Detecting Suicidality with a Contextual Graph Neural Network",
"authors": [
{
"first": "Daeun",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sungkyunkwan University",
"location": {}
},
"email": ""
},
{
"first": "Migyeong",
"middle": [],
"last": "Kang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sungkyunkwan University",
"location": {}
},
"email": ""
},
{
"first": "Minji",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sungkyunkwan University",
"location": {}
},
"email": ""
},
{
"first": "Jinyoung",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sungkyunkwan University",
"location": {}
},
"email": "jinyounghan@skku.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Discovering individuals' suicidality on social media has become increasingly important. Many researchers have studied to detect suicidality by using a suicide dictionary. However, while prior work focused on matching a word in a post with a suicide dictionary without considering contexts, little attention has been paid to how the word can be associated with the suicide-related context. To address this problem, we propose a suicidality detection model based on a graph neural network to grasp the dynamic semantic information of the suicide vocabulary by learning the relations between a given post and words. The extensive evaluation demonstrates that the proposed model achieves higher performance than the state-of-the-art methods. We believe the proposed model has great utility in identifying the suicidality of individuals and hence preventing individuals from potential suicide risks at an early stage.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Discovering individuals' suicidality on social media has become increasingly important. Many researchers have studied to detect suicidality by using a suicide dictionary. However, while prior work focused on matching a word in a post with a suicide dictionary without considering contexts, little attention has been paid to how the word can be associated with the suicide-related context. To address this problem, we propose a suicidality detection model based on a graph neural network to grasp the dynamic semantic information of the suicide vocabulary by learning the relations between a given post and words. The extensive evaluation demonstrates that the proposed model achieves higher performance than the state-of-the-art methods. We believe the proposed model has great utility in identifying the suicidality of individuals and hence preventing individuals from potential suicide risks at an early stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Suicide has become a serious problem in society. The OECD (Organization for Economic Cooperation and Development) reported that the suicide rate of South Korea and the USA was 23.0 and 14.5 deaths per 100,000 population in 2017, which ranked 1st and 8th, respectively 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The awareness of the severity of suicide has led researchers to develop suicidality detection models using a deluge of user activity data on social media, which can help capture latent warning signs of suicide in an early stage (Sawhney et al., 2020; Lee et al., 2020; Shing et al., 2020) . For example, the prior work showed that linguistic characteristics revealed in social media posts could be linked to suicide risks (De Choudhury et al., 2016 Figure 1 : An example of how a word in a suicide dictionary can be misleading in prior work. et al., 2018) . Specifically, applying the lexiconbased methods using suicide dictionaries made by domain experts has been reported as effective in capturing linguistic characteristics to detect suicidality. (Gaur et al., 2019; Lv et al., 2015) .",
"cite_spans": [
{
"start": 228,
"end": 250,
"text": "(Sawhney et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 251,
"end": 268,
"text": "Lee et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 269,
"end": 288,
"text": "Shing et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 422,
"end": 448,
"text": "(De Choudhury et al., 2016",
"ref_id": "BIBREF5"
},
{
"start": 542,
"end": 555,
"text": "et al., 2018)",
"ref_id": null
},
{
"start": 750,
"end": 769,
"text": "(Gaur et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 770,
"end": 786,
"text": "Lv et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 449,
"end": 457,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While applying the lexicon-based method has been known to be explainable and easy to implement (Kotelnikova et al., 2021; Razova et al., 2021) , it may have a limitation: only focusing on whether each word in a post is matched with the suicide lexicon, not considering the context. For example, as illustrated in Figure 1 , there are two sentences: \"I cut my wrist\" and \"I have my hair cut\". Assuming that the word 'cut' belongs to the suicide dictionary, only the former sentence should be evaluated as having suicidality. However, the latter sentence could also appear to have suicidality if the methods of prior work (Lv et al., 2015; Gaur et al., 2019) are applied. In other words, if the context is incorrectly captured, a model using a suicide lexicon created by experts may not be able to accurately assess the risk of suicidality (Limsopatham and Collier, 2016) .",
"cite_spans": [
{
"start": 95,
"end": 121,
"text": "(Kotelnikova et al., 2021;",
"ref_id": "BIBREF19"
},
{
"start": 122,
"end": 142,
"text": "Razova et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 620,
"end": 637,
"text": "(Lv et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 638,
"end": 656,
"text": "Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 838,
"end": 869,
"text": "(Limsopatham and Collier, 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this problem, we propose to model the dynamic semantic knowledge between posts and multiple suicide-related words in a suicide dictionary. Capturing the posts' document-word association and word co-occurrence is crucial to un-derstanding the contextualized suicidality revealed in social media posts. To this end, we apply a graph neural network to jointly learn word and document embeddings over a contextual graph representing the relations between posts and multiple suicide-related words in the dictionary. We build a heterogeneous network describing the relations (i) between social media posts and multiple words in a suicide dictionary and (ii) between suicide words based on the co-occurrence. As node information in the given graph, a post node includes the contextual representation obtained from pretrained BERT (Devlin et al., 2018) , and a word node contains the suicide risk level information and contextual representation obtained from the fine-tuned Word2Vec (Mikolov et al., 2013) . We learn the proposed heterogeneous graph using the modified GraphSAGE (Hamilton et al., 2017) , Contextual GraphSAGE (C-GraphSAGE), to derive a contextualized graph representation.",
"cite_spans": [
{
"start": 834,
"end": 855,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 986,
"end": 1008,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF23"
},
{
"start": 1072,
"end": 1105,
"text": "GraphSAGE (Hamilton et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Instead of using existing suicide dictionaries, we create a word-level suicide dictionary based on social media data using a computational method (Section 3). Since the existing suicide-related lexicon mostly consists of clinical terms (e.g., 'Suicide by self-administered drug') validated by domain experts (Gaur et al., 2019) , it may result in a discrepancy with the language used in social media. The created suicide dictionary consists of 279 words and four categories of suicidality levels.",
"cite_spans": [
{
"start": 308,
"end": 327,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We summarize our contributions as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a contextualized suicidality detection model Contextual GraphSAGE (C-GraphSAGE) using a graph neural network, which can effectively utilize a suicide dictionary. Our evaluation of the real-world dataset demonstrates that the proposed model outperforms the state-of-the-art methods for detecting suicide risk levels using a suicide dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We make a word-level English suicide dictionary based on social media data publicly available 2 . We believe the created dictionary can be useful for researchers who want to assess suicidal ideation on social media to prevent potential suicide risks at an early stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Researchers have investigated that user activity data on social media can provide a cue for analyzing individual suicidality (De Choudhury et al., 2016; Shing et al., 2018) . Specifically, prior research showed that linguistic characteristics revealed in social media posts (Sawhney et al., 2020 (Sawhney et al., , 2021a could be linked to suicidal ideation. In particular, utilizing suicide dictionaries made by domain experts has been demonstrated as effective (Lv et al., 2015; Cao et al., 2019; Gaur et al., 2019; Lee et al., 2020) , and such lexicon-based methods are known to be fast, explainable, and easy to implement (Kotelnikova et al., 2021; Razova et al., 2021) . For example, Lv et al. (2015) developed and validated that a Chinese suicide dictionary made by domain experts helps predict suicidality. Similarly, Gaur et al. (2019) demonstrated the predictive power of suicide dictionaries with domain knowledge. With the recent advancement of deep learning technologies, high-performing deep learning models have been proposed for accurately assessing suicidality (Sawhney et al., 2021a,b; Cao et al., 2020) . In this way, incorporating a suicide dictionary into a deep learning model has received great attention (Cao et al., 2019; Lee et al., 2020) . For example, Cao et al. (2019) built suicide-oriented word embeddings to intensify the sensibility of suiciderelated lexicons and employed a two-layered attention mechanism. Lee et al. (2020) proposed a deep learning method to utilize existing suicide dictionaries for the low-resource language where a knowledge-based suicide dictionary has not yet been developed. However, the prior work focused on how each word in a post is associated with the words/phrases in a suicide dictionary, e.g., via lexical matching (Lv et al., 2015; Gaur et al., 2019) or fixed word embeddings (Cao et al., 2019; Lee et al., 2020) , which may fail to capture the semantic information of suicide lexicons in the suicide-related context.",
"cite_spans": [
{
"start": 125,
"end": 152,
"text": "(De Choudhury et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 153,
"end": 172,
"text": "Shing et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 274,
"end": 295,
"text": "(Sawhney et al., 2020",
"ref_id": "BIBREF28"
},
{
"start": 296,
"end": 320,
"text": "(Sawhney et al., , 2021a",
"ref_id": "BIBREF29"
},
{
"start": 463,
"end": 480,
"text": "(Lv et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 481,
"end": 498,
"text": "Cao et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 499,
"end": 517,
"text": "Gaur et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 518,
"end": 535,
"text": "Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 626,
"end": 652,
"text": "(Kotelnikova et al., 2021;",
"ref_id": "BIBREF19"
},
{
"start": 653,
"end": 673,
"text": "Razova et al., 2021)",
"ref_id": "BIBREF26"
},
{
"start": 689,
"end": 705,
"text": "Lv et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 825,
"end": 843,
"text": "Gaur et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 1077,
"end": 1102,
"text": "(Sawhney et al., 2021a,b;",
"ref_id": null
},
{
"start": 1103,
"end": 1120,
"text": "Cao et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1227,
"end": 1245,
"text": "(Cao et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 1246,
"end": 1263,
"text": "Lee et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 1279,
"end": 1296,
"text": "Cao et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 1440,
"end": 1457,
"text": "Lee et al. (2020)",
"ref_id": "BIBREF20"
},
{
"start": 1780,
"end": 1797,
"text": "(Lv et al., 2015;",
"ref_id": "BIBREF22"
},
{
"start": 1798,
"end": 1816,
"text": "Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 1842,
"end": 1860,
"text": "(Cao et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 1861,
"end": 1878,
"text": "Lee et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Assessment with Suicide Lexicon",
"sec_num": "2.1"
},
{
"text": "Among the recent deep learning technologies, graph neural networks (GNNs) have received growing attention in the suicidality assessment task. In particular, GNNs were adopted to extract social information from a user's neighborhood in a social network formed between different users posting about suicidality (Sinha et al., 2019; Sawhney et al., 2021b) . Furthermore, Cao et al. (2020) built personal knowledge graphs on Sina Weibo to utilize rich social interaction data in suicidal ideation detection. Since capturing the posts' documentword association and word co-occurrence is crucial to understanding the contextualized suicide intent revealed in social media posts using the suicide dictionary, we apply a GNN to jointly learn word and document embeddings over a textual graph representing the relations between posts and multiple suicide-related words in the dictionary. Note that GNN has been explored to be useful in jointly learning word and document embeddings over a textual graph representation from the perspective of using lexicon for many NLP tasks (Yao et al., 2019; Tang et al., 2020) .",
"cite_spans": [
{
"start": 309,
"end": 329,
"text": "(Sinha et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 330,
"end": 352,
"text": "Sawhney et al., 2021b)",
"ref_id": "BIBREF30"
},
{
"start": 368,
"end": 385,
"text": "Cao et al. (2020)",
"ref_id": "BIBREF2"
},
{
"start": 1066,
"end": 1084,
"text": "(Yao et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 1085,
"end": 1103,
"text": "Tang et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Assessment with Graph Neural Networks",
"sec_num": "2.2"
},
{
"text": "A suicide-related word list can help build a simple detector that automatically responds with helpline links to suicidal content. However, the existing English suicide-related lexicon 3 mainly was made of clinical terms validated by domain experts (Gaur et al., 2019) , which results in the discrepancy with the language used in social media. Hence, the authors (Gaur et al., 2019) just used the suicide lexicon as a criterion for checking the presence of a concept in the user's posts. Instead of using the existing English suicide lexicon mostly consisting of clinical terms, we propose to create a word-level English suicide dictionary based on social media data. The proposed computational method can be easily applied to other languages that do not have their own suicide lexicons.",
"cite_spans": [
{
"start": 248,
"end": 267,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 362,
"end": 381,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicide Dictionary",
"sec_num": "3"
},
{
"text": "Creating a Suicide Dictionary. We create a word-level English suicide dictionary in a computational way using the UMD Reddit Suicidality Dataset (Shing et al., 2018; Zirikly et al., 2019) . The dataset contains 79,569 posts uploaded to 37,083 subreddits of 866 Reddit users posted on the r/SuicideWatch subreddit from 2008 to 2015. In addition, each post is labeled the suicidality severity conducted by crowdsourcing and domain experts (i.e., No risk, Low risk, Moderate risk, and Severe risk). We only use the posts uploaded to the r/SuicideWatch and 15 mental-health-related 3 https://github.com/AmanuelF/ Suicide-Risk-Assessment-using-Reddit subreddits (e.g., r/depression, r/anxiety, r/selfharm, etc.) (Gaur et al., 2018) as a target group and use the posts of users who had not posted on either r/SuicideWatch or mental-health related subreddits as a control group. Before constructing a dictionary, we anonymize the dataset by removing personally identifiable information such as names, email addresses, and URLs. After removing stopwords and lemmatizing the text using spaCy (Honnibal and Montani, 2017) , we extract keywords for each post using KeyBERT (Grootendorst, 2020) , and then apply the sparse additive generative model (SAGE) (Eisenstein et al., 2011) to determine the words specialized for each label compared to the entire lexicon. Finally, the constructed dictionary includes 297 suicide-related words. Note that the words belonging to the control group are excluded from the corpus set of each label.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 166,
"end": 187,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 707,
"end": 726,
"text": "(Gaur et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1083,
"end": 1111,
"text": "(Honnibal and Montani, 2017)",
"ref_id": null
},
{
"start": 1162,
"end": 1182,
"text": "(Grootendorst, 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicide Dictionary",
"sec_num": "3"
},
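The label-specific word selection above can be sketched with a simple smoothed log-ratio scorer. This is an editor's toy stand-in for SAGE, not the paper's implementation (which uses KeyBERT keywords and the actual SAGE model); all names and the toy corpora are ours:

```python
import math
from collections import Counter

def distinctive_words(label_docs, background_docs, top_k=5, smoothing=1.0):
    """Toy stand-in for SAGE: rank words by the smoothed log-ratio of
    their relative frequency in the label corpus vs. a background corpus."""
    label_counts = Counter(w for doc in label_docs for w in doc)
    bg_counts = Counter(w for doc in background_docs for w in doc)
    n_label, n_bg = sum(label_counts.values()), sum(bg_counts.values())
    vocab = set(label_counts) | set(bg_counts)

    def rel_freq(counts, n, w):
        # Additive smoothing avoids log(0) for words absent from a corpus.
        return (counts[w] + smoothing) / (n + smoothing * len(vocab))

    scores = {w: math.log(rel_freq(label_counts, n_label, w))
                 - math.log(rel_freq(bg_counts, n_bg, w))
              for w in vocab}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Words frequent in a risk-labeled corpus but rare in the control corpus score highest, mirroring how label-specialized words are selected against the entire lexicon.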
{
"text": "Validation and Correction. We recruited two clinical psychotherapists and a psychiatrist to validate and correct the computationally generated suicide dictionary. All annotators verify how well each label of the suicide word complies with the existing sharing task guideline (Shing et al., 2018; Zirikly et al., 2019) , and correct it if it does not meet the criteria. Each annotator performs the validation process independently. The final risk label of each suicide word is set to the label agreed by more than or equal to two annotators. As a result of removing 18 differently validated words from all three annotators, there are 279 words in the final dictionary. Table 1 describes the example of words for each class in the generated suicide dictionary.",
"cite_spans": [
{
"start": 275,
"end": 295,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 296,
"end": 317,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Suicide Dictionary",
"sec_num": "3"
},
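The consolidation rule above (keep a word's label if at least two of the three annotators agree, drop it otherwise) can be sketched as follows; the function name and toy data are ours:

```python
from collections import Counter

def consolidate_labels(annotations):
    """Majority-vote consolidation of annotator labels (our sketch).

    annotations: {word: [label_a, label_b, label_c]} from 3 annotators.
    A word keeps the label agreed upon by >= 2 annotators; words on
    which all three disagree are dropped (18 such words in the paper).
    """
    final = {}
    for word, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= 2:
            final[word] = label
    return final
```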
{
"text": "[CLS] lustrates the overall architecture of the proposed model. The model first takes a heterogeneous network that includes posts and suicide words as input. We then apply GraphSAGE (Hamilton et al., 2017) to the given graph to learn the informative representation of suicide-related context by capturing (i) post-words associations and (ii) relations between suicide-related words. Finally, the extracted node presentation from the network is fed into the classification layer. The given post is classified into one of five risk categories: Support (SU), Indicator (IN), Ideation (ID), Behavior (BR), and Attempt (AT).",
"cite_spans": [
{
"start": 172,
"end": 205,
"text": "GraphSAGE (Hamilton et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Suicide Dictionary",
"sec_num": null
},
{
"text": "We build a heterogeneous graph G = (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "V P \u222a V W , E P W \u222a E W W )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "to represent the relations between social media posts {p i } m i=1 \u2208 P and multiple words in a suicide dictionary {w i } n i=1 \u2208 W , where m and n indicate the number of posts and suicide words, respectively. A graph G consists of two types of nodes, post V P and suicide word V W nodes, and two types of edges, post-word E P W and word-word E W W edges. An edge in E P W is linked between a post and its corresponding word if a post contains a specific word in the dictionary. Note that no weight is attached on E P W . An edge in E W W is linked if two words in the suicide dictionary occur together in a post in the UMD Reddit Suicidality Dataset (Shing et al., 2018; Zirikly et al., 2019) , which is utilized in constructing a suicide dictionary (in Section 3). A weight on an edge in E W W can be computed by the positive Point-wise Mutual Information (PMI) score that can capture collocations and relations between two terms (Yao et al., 2019; Tang et al., 2020) as follows:",
"cite_spans": [
{
"start": 650,
"end": 670,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 671,
"end": 692,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 931,
"end": 949,
"text": "(Yao et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 950,
"end": 968,
"text": "Tang et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P M I(w i , w j ) = log p(w i , w j ) p(w i )p(w j )",
"eq_num": "(1)"
}
],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "Note that we only attach the edge weight on a suicide word pair with the positive PMI value, which indicates a high semantic correlation of two words in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
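The PPMI edge weighting of Eq. (1) can be sketched as follows, estimating p(w) and p(w_i, w_j) from document-level occurrence counts. This is an editor's illustration; the function name and toy data are ours, not from the paper:

```python
import math
from collections import Counter
from itertools import combinations

def ppmi_edges(posts, vocab):
    """Compute positive-PMI weights for word-word edges (Eq. 1).

    posts: list of token lists; vocab: suicide-dictionary words.
    Only pairs with PMI > 0 (high semantic correlation) keep an edge.
    """
    n_docs = len(posts)
    occur = Counter()    # word -> number of posts containing it
    cooccur = Counter()  # (w_i, w_j) -> number of posts containing both
    for tokens in posts:
        present = sorted(set(tokens) & set(vocab))
        occur.update(present)
        cooccur.update(combinations(present, 2))
    edges = {}
    for (wi, wj), n_ij in cooccur.items():
        pmi = math.log((n_ij / n_docs) /
                       ((occur[wi] / n_docs) * (occur[wj] / n_docs)))
        if pmi > 0:  # keep only positive PMI, as in the paper
            edges[(wi, wj)] = pmi
    return edges
```

Pairs that co-occur no more often than chance get PMI <= 0 and are left unconnected.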
{
"text": "Contextualized Node Features. In order to generate node features of posts X P and suicide words X W , we employ the pre-trained BERT for posts and pre-trained Word2Vec for suicide words, respectively, to capture the contextual representation of text features. Specifically, to obtain X P , a post p is fed into the BERT model and obtain the [CLS] token as a sentence-level representation of the claim as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "X p i = BERT (p i ) \u2208 IR 1\u00d7d cls (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "where d cls is the dimension size of a contextualized embedding of [CLS] and p i is i th post. For representing each suicide word w i , we apply the word-embedding from the pre-processed texts using the Word2Vec model, Gensim (Rehurek and Sojka, 2010) . The word vectors are pre-trained with the Skip-Gram representation model using the UMD Reddit Suicidality Dataset (Shing et al., 2018; Zirikly et al., 2019) , while the size of the window and the dimension are set to 5 and 200, respectively. Finally, X W is (i) the suicide risk level (i.e., 0, 1, 2, 3) of each word RL W and (ii) word embeddings W V W from pre-trained Word2Vec as follows:",
"cite_spans": [
{
"start": 226,
"end": 251,
"text": "(Rehurek and Sojka, 2010)",
"ref_id": "BIBREF27"
},
{
"start": 368,
"end": 388,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 389,
"end": 410,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "RL w i = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 3, Severe Risk 2, Moderate Risk 1, Low Risk 0, No Risk (3) W V w i = W ord2V ec(w i ) \u2208 IR 1\u00d7dwv (4) X w i = RL w i \u2295 W V w i \u2208 IR 1\u00d7(dwv+1) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
{
"text": "where d wv is the dimension size of a Word2Vec and w i is i th word in the suicide dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heterogeneous Network",
"sec_num": "4.1"
},
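The word-node feature of Eq. (5) is simply the risk level concatenated with the Word2Vec vector. A minimal sketch, with a random stand-in for the fine-tuned Word2Vec table (the dictionary entries and risk labels shown are hypothetical examples, not the paper's data):

```python
import numpy as np

D_WV = 200  # Word2Vec dimension used in the paper (window size 5)

# Hypothetical stand-ins for the fine-tuned Word2Vec table and the
# dictionary's risk labels; real values come from the trained models.
rng = np.random.default_rng(0)
word_vectors = {"cut": rng.standard_normal(D_WV)}
risk_level = {"cut": 3}  # 0 = No, 1 = Low, 2 = Moderate, 3 = Severe risk

def word_node_feature(w):
    """X_{w_i} = RL_{w_i} (+) WV_{w_i}, a vector in R^(d_wv + 1) (Eq. 5)."""
    return np.concatenate(([risk_level[w]], word_vectors[w]))
```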
{
"text": "To generate node embedding from the given heterogeneous graph model, we apply the Graph-SAGE (Hamilton et al., 2017), a well-known model for a graph neural network (GNN) that supports batch-training without updating states over the whole graph and has shown experimental success compared to other graph representation learning models (Tang et al., 2020) . The model first recursively updates embedding for each node ",
"cite_spans": [
{
"start": 334,
"end": 353,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "v from V P and V W by aggregating information from node v's immediate neighbors N (v), u \u2208 N (v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "N (v) = aggregate k {h k\u22121 u , \u2200u \u2208 N (v)} (6) h (k) v = \u03c3 W k \u2022 concat(h k\u22121 v , h k N (v) ) ) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "As shown in Figure 3 , we propose to use an aggregation function (Eq. 6) based on a convolutional neural network (CNN) instead of existing aggregators such as pool, LSTM, and mean, used in Hamilton et al. (2017). A CNN is proven to be effective in detecting local patterns (Minaee et al., 2021) , hence it generates a feature map over the neighbor node embeddings that can explicitly capture relations of words in the suicide dictionary. Given the target node v's neighboring nodes",
"cite_spans": [
{
"start": 273,
"end": 294,
"text": "(Minaee et al., 2021)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "{u i } j i=1 \u2208 N (v)' embedding h k\u22121 u 1 , h k\u22121 u 2 , \u2022 \u2022 \u2022 , h k\u22121 u j \u2208 IR j\u00d7d ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "where d is the dimension of node feature, a convolution operation involving a filter q \u2208 IR l * d generates a feature c i from a window of nodes u i:i+l\u22121 as follows. where b is a bias term and ReLu (Nair and Hinton, 2010) is adopted as the non-linear function \u03c3. The filter is employed to each possible window of neighboring nodes to produce a feature map as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "c i = \u03c3(q \u2022 u i:i+l\u22121 + b) (8) \u22ee h k\u22121 u 1 h k\u22121 u 2 h k\u22121 u j h (k) (v) j \u00d7 d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = [c 1 , c 2 , \u2022 \u2022 \u2022 , c j\u2212l+1 ] \u2208 IR j\u2212l+1",
"eq_num": "(9)"
}
],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "To capture the diverse local structure, we adopt multiple filters with different sizes. For example, the set of kernel sizes used in this paper is [1, 2, 3] . In this way, the filter can create up to 3 neighbor nodes' combinations. We then apply a max-pooling operation (Collobert et al., 2011) over the feature map and take the maximum value\u0109 = max {c} as the feature corresponding to the filter. Finally, we derive a node v's neighbor nodes' representation as follows.",
"cite_spans": [
{
"start": 270,
"end": 294,
"text": "(Collobert et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (k) N (v) = F c (\u0109) \u2208 IR 1\u00d7d",
"eq_num": "(10)"
}
],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
{
"text": "Note that, if node v has neighbors with different node types, we sum representations of neighbor nodes. Since we predict the suicidality level of the post, we only consider the node V p 's representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Graph Convolutional Encoder",
"sec_num": "4.2"
},
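The CNN aggregator of Eqs. (8)-(10) can be sketched in a few lines; this is an editor's reading of the equations (function name and toy filters are ours), with the final fully connected layer of Eq. (10) left out:

```python
import numpy as np

def cnn_aggregate(H, filters, biases):
    """Sketch of the CNN neighbor aggregator (Eqs. 8-9 plus max-pooling).

    H: (j, d) matrix stacking neighbor embeddings h^{(k-1)}_{u_1..u_j}.
    filters: list of (l, d) kernels q with different window sizes l
    (the paper uses l in {1, 2, 3}); biases: one scalar b per filter.
    Each filter slides over windows of l consecutive neighbors to build
    a feature map c (Eq. 9); max-pooling keeps c_hat = max(c) per filter.
    Assumes j >= max window size l (otherwise pad the neighbor list).
    """
    j, _ = H.shape
    pooled = []
    for q, b in zip(filters, biases):
        l = q.shape[0]
        fmap = [max(0.0, float(np.sum(q * H[i:i + l]) + b))  # c_i = ReLU(q . u_{i:i+l-1} + b)
                for i in range(j - l + 1)]
        pooled.append(max(fmap))
    # In the paper, this pooled vector is passed through a fully
    # connected layer (Eq. 10) to produce h^{(k)}_{N(v)}; omitted here.
    return np.array(pooled)
```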
{
"text": "To predict the suicidality level of a post, the proposed decoder identifies suicidal severity for each node by learning the graph representation as follows.\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= F c (h (k) v )",
"eq_num": "(11)"
}
],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "Like Sawhney et al. (2021a) , we adopt the ordinal regression loss (Diaz and Marathe, 2019) as an objective function. Instead of using an onehot vector representation of the true labels, they used a soft encoded vector representation by considering the ordinal nature between suicidality levels. While ground truth labels are denoted as Y = {SU = 0, IN = 1, ID = 2, BR = 3, AT = 4} = r i 4 i=0 , soft labels as probability distributions of ground truth labels is denoted by y = [y 0 , y 1 , y 2 , y 3 , y 4 ]. The probability y i of each risklevel r i is",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "Sawhney et al. (2021a)",
"ref_id": "BIBREF29"
},
{
"start": 67,
"end": 91,
"text": "(Diaz and Marathe, 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "y i = e \u2212\u03c6(rt,r i ) \u03bb k=1 e \u2212\u03c6(rt,r i ) \u2200r i \u2208 Y (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "where e \u2212\u03c6(rt,r i ) is a cost function that penalizes how far the true risk-level r t is from a risk-level r i \u2208 Y, which is formulated as e \u2212\u03c6(rt,r i ) = \u03b1 |r t \u2212 r i |, where \u03b1 is a penalty parameter for incorrect prediction. Finally, the cross-entropy loss is calculated using the probability distribution y and classification score\u0177 obtained in Eq( 11) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "L = \u2212 1 n n j=1 \u03bb i=1 y ij log\u0177 ij (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
{
"text": "where n is the batch size and \u03bb is the number of risk-levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Suicidality Detection Decoder",
"sec_num": "4.3"
},
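Eqs. (12)-(13) can be made concrete with a short sketch (illustrative only; `soft_labels` and `ordinal_ce_loss` are hypothetical names, and `alpha = 1.5` is an arbitrary choice for the penalty parameter):

```python
import numpy as np

def soft_labels(r_t, alpha=1.5, lam=5):
    # Eq. (12): y_i = exp(-phi(r_t, r_i)) / sum_k exp(-phi(r_t, r_k)),
    # with the cost phi(r_t, r_i) = alpha * |r_t - r_i|.
    levels = np.arange(lam)
    exp = np.exp(-alpha * np.abs(r_t - levels))
    return exp / exp.sum()

def ordinal_ce_loss(y_soft, y_pred):
    # Eq. (13): mean cross-entropy between the soft labels and the
    # predicted probability distributions over the lambda risk levels.
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.sum(y_soft * np.log(y_pred + eps), axis=1))

y = soft_labels(r_t=2)  # true level: Ideation (ID = 2)
```

The soft label remains peaked at the true level but assigns non-zero mass to nearby levels, so near-misses are penalized less than distant ones.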
{
"text": "We evaluate the our proposed model by answering the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 RQ1: Is the proposed suicide dictionary made by a computational method effective in detecting suicidality risk?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 RQ2: Can using the suicide dictionary help improve the model performance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 RQ3: Is the C-GraphSAGE efficient in utilizing the suicide dictionary?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To learn our proposed model, we utilize The Golden Standard Dataset introduced by (Gaur et al., 2019) , which consists of Reddit posts collected from the 9 suicide-related subreddits (e.g., r/SuicideWatch and r/depression). The dataset is within the time frame from 2005 to 2016 and annotated with 5 suicidality levels (i.e., Supportive, Indicator, Ideation, Behavior, and Attempt) by mental health experts 4 . While the dataset contains both user-level and post-level data, we utilize the postlevel data in this paper since our model aims to detect suicidality levels for a given social media post, and a post-level prediction can be useful for immediate or early intervention on suicidality risks. Finally, the dataset includes 1346, 420, 337, 77, and 49 posts for the Supportive, Indicator, Ideation, Behavior, and Attempt levels, respectively. In addition, we implement a stratified 60:20:20 split such that the train, validation, and test sets consist of 1,427, 356, and 446 posts, respectively.",
"cite_spans": [
{
"start": 82,
"end": 101,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
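A stratified 60:20:20 split of this kind can be sketched in a few lines (a hedged illustration; `stratified_split` is a hypothetical helper, and the exact per-set sizes produced here may differ slightly from the paper's reported 1,427/356/446):

```python
import random
from collections import defaultdict

def stratified_split(labels, fracs=(0.6, 0.2, 0.2), seed=42):
    # Shuffle the indices of each class, then cut every class by the
    # given fractions so each split keeps the class proportions.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    splits = ([], [], [])
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n = len(idxs)
        a, b = int(n * fracs[0]), int(n * (fracs[0] + fracs[1]))
        splits[0].extend(idxs[:a])   # train
        splits[1].extend(idxs[a:b])  # validation
        splits[2].extend(idxs[b:])   # test
    return splits

# class counts from the dataset: SU, IN, ID, BR, AT (2,229 posts total)
labels = [0] * 1346 + [1] * 420 + [2] * 337 + [3] * 77 + [4] * 49
train, val, test = stratified_split(labels)
```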
{
"text": "To consider the ordinal nature of suicidality risk levels, we adopt the modified definitions of False Positive (F P ), False Negative (F N ) (Gaur et al., 2019) in our experiments as follows.",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F P = N T i=1 I(\u0177 i >y i ) N T (14) F N = N T i=1 I(y i >\u0177 i ) N T",
"eq_num": "(15)"
}
],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "where\u0177 i is the predicted level, y i is the actual level for i th test data, and N T is the size of the test data. \u2206 (y i ,\u0177 i ) is the difference between y i and\u0177 i . The evaluation metric terms for precision and recall are renamed as graded precision and graded recall, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
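Eqs. (14)-(15) amount to counting over- and under-estimated severities (a minimal sketch; `graded_fp_fn` is a hypothetical name):

```python
def graded_fp_fn(y_true, y_pred):
    # Eq. (14)-(15): ordinal-aware error rates over the N_T test posts.
    # FP counts posts whose severity is over-estimated (y_hat > y),
    # FN counts posts whose severity is under-estimated (y > y_hat).
    n_t = len(y_true)
    fp = sum(p > t for t, p in zip(y_true, y_pred)) / n_t
    fn = sum(t > p for t, p in zip(y_true, y_pred)) / n_t
    return fp, fn

# one over-estimate and one under-estimate out of five posts
fp, fn = graded_fp_fn([0, 1, 2, 3, 4], [0, 2, 2, 2, 4])  # -> (0.2, 0.2)
```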
{
"text": "We compare the proposed model against the following three types of models: (1) Lexicon-based approaches; Rule-based (Gaur et al., 2019) , SVM (Lv et al., 2015) , and Random Forest (RF) (Amini et al., 2016) , (2) Deep learning approaches w/o lexicon; Contextual CNN (Gaur et al., 2019) , SISMO (Sawhney et al., 2021a) , and BERT (Devlin et al., 2018) , and (3) Lexicon + deep learning; Cao et al. (2019) and Reformed BERT. Detailed experimental settings for reproducibility are summarized in the Appendix ??. We tune hyperparameters based on the highest FScore obtained from the validation set for all the models. We use the grid search to explore (i) the number of kernel output size in aggregate functio\u00f1 q, (ii) the number of post features in hidden stat\u1ebd H D , (iii) the initial learning rate lr, and (iv) the dropout rate \u03c3. The optimal hyperparameters were Type of Model",
"cite_spans": [
{
"start": 116,
"end": 135,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 142,
"end": 159,
"text": "(Lv et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 185,
"end": 205,
"text": "(Amini et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 265,
"end": 284,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 293,
"end": 316,
"text": "(Sawhney et al., 2021a)",
"ref_id": "BIBREF29"
},
{
"start": 328,
"end": 349,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 385,
"end": 402,
"text": "Cao et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Experiment Settings",
"sec_num": "5.3"
},
{
"text": "Loss G-Precision G-Recall G-F1 Rule-based (Gaur et al., 2019) / 0.33 0.74 0.46 SVM (Lv et al., 2015) Hinge Loss 0.51 0.66 0.58 Suicide lexicon only RF (Amini et al., 2016) Gini Impurity 0.65 0.67 0.66 Contextual CNN (Gaur et al., 2019) Cross Entropy 0.78 0.57 0.66 SISMO (Sawhney et al., 2021a) Soft Label 0.77 0.77 0.77 SDM w/o Lexicon (Cao et al., 2019) Cross Entropy 0.73 0.75 0.74 Deep learning only BERT w/o Lexicon (Devlin et al., 2018) Soft Label 0.81 0.80 0.80 SDM w/ Lexicon (Cao et al., 2019) Cross Entropy 0.75 0.78 0.77 BERT w/ Lexicon (Devlin et al., 2018) Soft Label 0.82 0.79 0.81 Suicide lexicon + Deep learning C-GraphSAGE (Ours) Soft Label 0.85 0.82 0.84 Table 2 : Performance comparisons of the proposed model and baselines.",
"cite_spans": [
{
"start": 42,
"end": 61,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 83,
"end": 100,
"text": "(Lv et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 151,
"end": 171,
"text": "(Amini et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 216,
"end": 235,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 271,
"end": 294,
"text": "(Sawhney et al., 2021a)",
"ref_id": "BIBREF29"
},
{
"start": 337,
"end": 355,
"text": "(Cao et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 421,
"end": 442,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 484,
"end": 502,
"text": "(Cao et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 548,
"end": 569,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 673,
"end": 680,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "found to be:q = 50,H D = 512, lr = 3e \u2212 5, and \u03c3 = 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
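The grid search over the four hyperparameters can be sketched as follows (illustrative only; the candidate grids are hypothetical except for the reported optimal values, and `validation_f1` is a placeholder standing in for training and scoring the model):

```python
from itertools import product

# candidate values per hyperparameter (hypothetical grids; the reported
# optima q=50, H_D=512, lr=3e-5, dropout=0.1 are included)
grid = {
    "q": [25, 50, 100],        # kernel output size of the aggregator
    "H_D": [256, 512, 768],    # number of post features in the hidden state
    "lr": [1e-5, 3e-5, 5e-5],  # initial learning rate
    "dropout": [0.1, 0.3, 0.5],
}

def validation_f1(params):
    # placeholder: train the model with `params` and score on the
    # validation set, returning the G-F1 there
    return 0.0

best_params, best_f1 = None, float("-inf")
for combo in product(*grid.values()):
    params = dict(zip(grid, combo))
    f1 = validation_f1(params)
    if f1 > best_f1:
        best_params, best_f1 = params, f1
```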
{
"text": "In this section, we present our experiment results to answer the three above research questions. Table 2 summarizes the overall performance results of the proposed model (C-GraphSAGE) and the baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "6.1 RQ1: Is the proposed suicide dictionary made by a computational method effective in detecting suicidality risks? Table 3 : Performance Comparisons between the existing suicide dictionary made by domain experts and the proposed computationally created dictionary (Ours).",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "To answer the first question, we evaluate the suicidality detection models (Rule-based (Gaur et al., 2019) and Random Forest (RF)) with two different suicide dictionaries: (1) the domain knowledgebased one made by experts (Gaur et al., 2019) , and (2) a computationally created one (Ours). As shown in Table 3 , the performance with the suicide dictionary created by a computation method (Ours) outperforms the domain knowledge-based lexicon. Furthermore, it indicates that a word-level English suicide dictionary based on social media data is helpful to be mapped with social media posts for detecting suicidality. In other words, the proposed computational method to create a suicide dictionary effectively detects suicidality.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 222,
"end": 241,
"text": "(Gaur et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 302,
"end": 309,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "6.2 RQ2: Can using the suicide dictionary help improve the model performance?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Overall, deep learning models with a suicide dictionary (i.e., C-GraphSAGE, 'SDM w/ lexicon', and 'BERT w/ lexicon') perform better than the models that use only text information such as C-CNN, SISMO, 'SDM w/o lexicon', and 'BERT w/o lexicon'. This shows that a model using a suicide dictionary can present the suicide-related context of posts, resulting in high performance. Note that 'SDM w/ lexicon' uses the fine-tuned word embedding model to capture domain knowledge from a pre-built suicide dictionary (Cao et al., 2019) , whereas 'SDM w/o lexicon' adopts pre-trained FastText embeddings (Bojanowski et al., 2017) for encoding posts. Also, 'the BERT w/ lexicon' adds the suicide words on the BERT-Tokenizer.",
"cite_spans": [
{
"start": 508,
"end": 526,
"text": "(Cao et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 594,
"end": 619,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "6.3 RQ3: Is the C-GraphSAGE efficient in utilizing the suicide dictionary? C-GraphSAGE outperforms the other model using a suicide dictionary, the Reformed BERT, offering an insight that capturing dynamic semantic information from a suicide dictionary is beneficial rather than considering only the presence of suicide words. We attribute this to the strength of the graph neural network model that can learn better representations from the relations between posts and words in the suicide dictionary and the associations between suicide words in the suicide-related context. As a result, C-GraphSAGE is helpful in accurately identifying suicidality levels, which shows outstanding utility in preventing suicide risks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We perform an ablation study to examine the effectiveness of different aggregation functions over the proposed C-GraphSAGE, as shown in Table 4 . We compare the proposed CNN-based aggregation Figure 4 : A qualitative analysis on the two cases shows the C-GraphSAGE can capture the risk levels accurately. function with the three popular aggregation functions \u2208 {LST M, P ool, M ean} (Hamilton et al., 2017) as well as bi \u2212 LST M (Tang et al., 2020) . As shown in Table 4 , the model performance significantly improves when we use the aggregation function based on a CNN than other aggregators. Notably, the CNN aggregator outperforms the biL-STM (Tang et al., 2020) . This is because an RNN works well in capturing long-term dependencies, whereas a CNN can effectively identify structural patterns. In other words, it is crucial to capture local relations between words than the order of words in our case. We believe that the proposed aggregator can effectively capture neighboring node information, thereby enhancing the robustness of the model for unseen data.",
"cite_spans": [
{
"start": 361,
"end": 406,
"text": "{LST M, P ool, M ean} (Hamilton et al., 2017)",
"ref_id": null
},
{
"start": 429,
"end": 448,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 646,
"end": 665,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 192,
"end": 200,
"text": "Figure 4",
"ref_id": null
},
{
"start": 463,
"end": 470,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.4"
},
{
"text": "Aggregation Function G-Precision G-Recall G-F1 C-GraphSAGE +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "6.4"
},
{
"text": "To provide detailed insight and interpretability, we qualitatively analyze two cases where C-GraphSAGE performs better than other models in Figure 4 . We compare how to predict suicidality by each model given the input that contains the same suicide words. Both posts contain high-level suicide words, but the actual suicidality is relatively low. The proposed model C-GraphSAGE predicts the corresponding risk accurately, whereas other models that assess risk only by the presence of suicide words are likely to classify suicidality levels more highly than actual levels.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 148,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.5"
},
{
"text": "This paper proposed a suicidality detection model, C-GraphSAGE, which can capture the context of suicidality by learning the relations between social media posts and suicide-related words. Using a word-level English suicide dictionary validated by domain experts, the proposed model achieved higher performance than the state-of-the-art methods in detecting suicidality levels. We believe the proposed model has great utility in identifying potential suicidality levels of individuals with social media data, preventing individuals from potential suicide risks at an early stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Discussion",
"sec_num": "7"
},
{
"text": "Ethical Concerns. This study is reviewed and approved by the Institutional Review Board (SKKU2020-10-021). All datasets are anonymized. Hence no personal information can be identifiable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Discussion",
"sec_num": "7"
},
{
"text": "Limitation. Assessing suicidality using social media data is subjective (Keilp et al., 2012) , and the analysis of this paper can be interpreted in diverse ways across the researchers. The experiment data may be sensitive to demographic, annotator, and media-specific biases (Hovy and Spruit, 2016) . The analytical patterns learned by C-GraphSAGE may fail to generalize to other social media due to the relatively small data and/or short time window appeared in Reddit. Nevertheless, an interpretable model can help to follow and improve other targets with different statistical patterns and biases (Jacobson et al., 2020) . There is an overlap in data collection periods between the data used to create the suicide dictionary (2008 -2015) and the data used in the experiment (2005 -2016) . Since all the datasets are anonymized, a Jaccard similarity analysis (Jaccard, 1908) is performed in a grid manner to determine a similarity between all post pairs in two datasets. The result shows that the Jaccard coefficient is quite low (max = 0.5 , mean = 0.1, std = 0.05), meaning that both groups are unrelated.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Keilp et al., 2012)",
"ref_id": null
},
{
"start": 275,
"end": 298,
"text": "(Hovy and Spruit, 2016)",
"ref_id": "BIBREF14"
},
{
"start": 600,
"end": 623,
"text": "(Jacobson et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 728,
"end": 740,
"text": "(2008 -2015)",
"ref_id": null
},
{
"start": 777,
"end": 789,
"text": "(2005 -2016)",
"ref_id": null
},
{
"start": 861,
"end": 876,
"text": "(Jaccard, 1908)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Discussion",
"sec_num": "7"
},
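The pairwise Jaccard similarity check described in the limitation can be sketched as follows (a minimal token-level illustration; the authors' exact tokenization is not specified, so whitespace splitting is an assumption):

```python
def jaccard(a: str, b: str) -> float:
    # Jaccard (1908) coefficient between the token sets of two posts:
    # |intersection| / |union|, with 0.0 for two empty posts.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# 3 shared tokens out of 5 distinct tokens -> 0.6
score = jaccard("i feel hopeless today", "i feel fine today")
```

Computing this score over every post pair across the two collections and inspecting the max/mean/std is what supports the claim that the dictionary-building and experiment datasets are largely unrelated.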
{
"text": "Practical Applicability. The proposed suicidality detection model can be used for screening or identifying individuals at risk on social media to prioritize early intervention for clinical support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Discussion",
"sec_num": "7"
},
{
"text": "https://sites.google.com/view/ daeun-lee/dataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ModelWe propose a suicidality detection model C-GraphSAGE that can capture the severity of suicidality of a post on social media.Figure 2il-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/AmanuelF/ Suicide-Risk-Assessment-using-Reddit",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating the high risk groups for suicide: a comparison of logistic regression, support vector machine, decision tree and artificial neural network",
"authors": [
{
"first": "Payam",
"middle": [],
"last": "Amini",
"suffix": ""
},
{
"first": "Hasan",
"middle": [],
"last": "Ahmadinia",
"suffix": ""
},
{
"first": "Jalal",
"middle": [],
"last": "Poorolajal",
"suffix": ""
},
{
"first": "Mohammad Moqaddasi",
"middle": [],
"last": "Amiri",
"suffix": ""
}
],
"year": 2016,
"venue": "Iranian journal of public health",
"volume": "45",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Payam Amini, Hasan Ahmadinia, Jalal Poorolajal, and Mohammad Moqaddasi Amiri. 2016. Evaluating the high risk groups for suicide: a comparison of lo- gistic regression, support vector machine, decision tree and artificial neural network. Iranian journal of public health, 45(9):1179.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building and using personal knowledge graph to improve suicidal ideation detection on social media",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Huijun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Feng",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Multimedia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cao, Huijun Zhang, and Ling Feng. 2020. Build- ing and using personal knowledge graph to improve suicidal ideation detection on social media. IEEE Transactions on Multimedia.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent suicide risk detection on microblog via suicideoriented word embeddings and layered attention",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Huijun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ningyun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaohao",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1718--1728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. La- tent suicide risk detection on microblog via suicide- oriented word embeddings and layered attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1718- 1728.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of machine learning research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(ARTICLE):2493-2537.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discovering shifts to suicidal ideation from mental health content in social media",
"authors": [
{
"first": "Emre",
"middle": [],
"last": "Munmun De Choudhury",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Kiciman",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Mrinal",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 CHI conference on human factors in computing systems",
"volume": "",
"issue": "",
"pages": "2098--2110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Dis- covering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI conference on human factors in com- puting systems, pages 2098-2110.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Soft labels for ordinal regression",
"authors": [
{
"first": "Raul",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Marathe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4738--4747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raul Diaz and Amit Marathe. 2019. Soft labels for or- dinal regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recog- nition, pages 4738-4747.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Sparse additive generative models of text",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th international conference on machine learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "1041--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse additive generative models of text. In Pro- ceedings of the 28th international conference on ma- chine learning (ICML-11), pages 1041-1048. Cite- seer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Knowledge-aware assessment of severity of suicide risk for early intervention",
"authors": [
{
"first": "Manas",
"middle": [],
"last": "Gaur",
"suffix": ""
},
{
"first": "Amanuel",
"middle": [],
"last": "Alambo",
"suffix": ""
},
{
"first": "Joy",
"middle": [],
"last": "Prakash Sain",
"suffix": ""
},
{
"first": "Ugur",
"middle": [],
"last": "Kursuncu",
"suffix": ""
},
{
"first": "Krishnaprasad",
"middle": [],
"last": "Thirunarayan",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Kavuluru",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Sheth",
"suffix": ""
},
{
"first": "Randy",
"middle": [],
"last": "Welton",
"suffix": ""
},
{
"first": "Jyotishman",
"middle": [],
"last": "Pathak",
"suffix": ""
}
],
"year": 2019,
"venue": "proceedings of the 2019 World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "514--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manas Gaur, Amanuel Alambo, Joy Prakash Sain, Ugur Kursuncu, Krishnaprasad Thirunarayan, Ra- makanth Kavuluru, Amit Sheth, Randy Welton, and Jyotishman Pathak. 2019. Knowledge-aware assess- ment of severity of suicide risk for early intervention. In proceedings of the 2019 World Wide Web Confer- ence, pages 514-525.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "let me tell you about your mental health!\" contextualized classification of reddit posts to dsm-5 for web-based intervention",
"authors": [
{
"first": "Manas",
"middle": [],
"last": "Gaur",
"suffix": ""
},
{
"first": "Ugur",
"middle": [],
"last": "Kursuncu",
"suffix": ""
},
{
"first": "Amanuel",
"middle": [],
"last": "Alambo",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Sheth",
"suffix": ""
},
{
"first": "Raminta",
"middle": [],
"last": "Daniulaityte",
"suffix": ""
},
{
"first": "Krishnaprasad",
"middle": [],
"last": "Thirunarayan",
"suffix": ""
},
{
"first": "Jyotishman",
"middle": [],
"last": "Pathak",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "753--762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manas Gaur, Ugur Kursuncu, Amanuel Alambo, Amit Sheth, Raminta Daniulaityte, Krishnaprasad Thirunarayan, and Jyotishman Pathak. 2018. \" let me tell you about your mental health!\" contextu- alized classification of reddit posts to dsm-5 for web-based intervention. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 753-762.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Keybert: Minimal keyword extraction with bert",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Grootendorst",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4461265"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Grootendorst. 2020. Keybert: Minimal key- word extraction with bert.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Inductive representation learning on large graphs",
"authors": [
{
"first": "Rex",
"middle": [],
"last": "William L Hamilton",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William L Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 1025-1035.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Ines",
"middle": [],
"last": "Montani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The social impact of natural language processing",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shannon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 591-598.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Nouvelles recherches sur la distribution florale",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Jaccard",
"suffix": ""
}
],
"year": 1908,
"venue": "Bull. Soc. Vaud. Sci. Nat",
"volume": "44",
"issue": "",
"pages": "223--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Jaccard. 1908. Nouvelles recherches sur la distri- bution florale. Bull. Soc. Vaud. Sci. Nat., 44:223- 270.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Ethical dilemmas posed by mobile health and machine learning in psychiatry research",
"authors": [
{
"first": "Kate",
"middle": [
"H"
],
"last": "Nicholas C Jacobson",
"suffix": ""
},
{
"first": "Ashley",
"middle": [],
"last": "Bentley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Walton",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Shirley",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [
"G"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Fortgang",
"suffix": ""
},
{
"first": "Garth",
"middle": [],
"last": "Millner",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Coombs",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"M"
],
"last": "Rodman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Coppersmith",
"suffix": ""
}
],
"year": 2020,
"venue": "Bulletin of the World Health Organization",
"volume": "98",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas C Jacobson, Kate H Bentley, Ashley Walton, Shirley B Wang, Rebecca G Fortgang, Alexander J Millner, Garth Coombs III, Alexandra M Rodman, and Daniel DL Coppersmith. 2020. Ethical dilem- mas posed by mobile health and machine learning in psychiatry research. Bulletin of the World Health Organization, 98(4):270.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Suicidal ideation and the subjective aspects of depression",
"authors": [],
"year": null,
"venue": "Journal of affective disorders",
"volume": "140",
"issue": "1",
"pages": "75--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suicidal ideation and the subjective aspects of de- pression. Journal of affective disorders, 140(1):75- 81.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Lexiconbased methods vs. bert for text sentiment analysis",
"authors": [
{
"first": "Anastasia",
"middle": [],
"last": "Kotelnikova",
"suffix": ""
},
{
"first": "Danil",
"middle": [],
"last": "Paschenko",
"suffix": ""
},
{
"first": "Klavdiya",
"middle": [],
"last": "Bochenina",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2111.10097"
]
},
"num": null,
"urls": [],
"raw_text": "Anastasia Kotelnikova, Danil Paschenko, Klavdiya Bochenina, and Evgeny Kotelnikov. 2021. Lexicon- based methods vs. bert for text sentiment analysis. arXiv preprint arXiv:2111.10097.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-lingual suicidaloriented word embedding toward suicide prevention",
"authors": [
{
"first": "Daeun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Soyoung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Jiwon",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Daejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jinyoung",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "2208--2217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daeun Lee, Soyoung Park, Jiwon Kang, Daejin Choi, and Jinyoung Han. 2020. Cross-lingual suicidal- oriented word embedding toward suicide prevention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2208-2217.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Normalising medical concepts in social media texts by learning semantic representation",
"authors": [
{
"first": "Nut",
"middle": [],
"last": "Limsopatham",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1014--1023",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nut Limsopatham and Nigel Collier. 2016. Normalis- ing medical concepts in social media texts by learn- ing semantic representation. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1014-1023.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Creating a chinese suicide dictionary for identifying suicide risk on social media",
"authors": [
{
"first": "Meizhen",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Ang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tianli",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tingshao",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "PeerJ",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meizhen Lv, Ang Li, Tianli Liu, and Tingshao Zhu. 2015. Creating a chinese suicide dictionary for iden- tifying suicide risk on social media. PeerJ, 3:e1455.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep learning-based text classification: A comprehensive review",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Minaee",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Narjes",
"middle": [],
"last": "Nikzad",
"suffix": ""
},
{
"first": "Meysam",
"middle": [],
"last": "Chenaghlu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "54",
"issue": "3",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Nar- jes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2021. Deep learning-based text classification: A comprehensive review. ACM Computing Surveys (CSUR), 54(3):1-40.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Rectified linear units improve restricted boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Icml.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Does bert look at sentiment lexicon?",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Razova",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Vychegzhanin",
"suffix": ""
},
{
"first": "Evgeny",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2111.10100"
]
},
"num": null,
"urls": [],
"raw_text": "Elena Razova, Sergey Vychegzhanin, and Evgeny Kotelnikov. 2021. Does bert look at sentiment lex- icon? arXiv preprint arXiv:2111.10100.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "Rehurek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 workshop on new challenges for NLP frameworks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim Rehurek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In In Proceedings of the LREC 2010 workshop on new challenges for NLP frameworks. Citeseer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A time-aware transformer based model for suicide ideation detection on social media",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Harshit",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Saumya",
"middle": [],
"last": "Gandhi",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7685--7697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7685-7697.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Towards ordinal suicide ideation detection on social media",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Harshit",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Saumya",
"middle": [],
"last": "Gandhi",
"suffix": ""
},
{
"first": "Rajiv Ratn",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 14th ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "22--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Ratn Shah. 2021a. Towards ordinal suicide ideation detection on social media. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 22-30.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Suicide ideation detection via social and temporal user representations using hyperbolic learning",
"authors": [
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Harshit",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Lucie",
"middle": [],
"last": "Flek",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2176--2190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramit Sawhney, Harshit Joshi, Rajiv Shah, and Lucie Flek. 2021b. Suicide ideation detection via social and temporal user representations using hyperbolic learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 2176-2190.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Expert, crowdsourced, and machine assessment of suicide risk via online postings",
"authors": [
{
"first": "Han-Chin",
"middle": [],
"last": "Shing",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Meir",
"middle": [],
"last": "Friedenberg",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic",
"volume": "",
"issue": "",
"pages": "25--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Friedenberg, Hal Daum\u00e9 III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assess- ment of suicide risk via online postings. In Proceed- ings of the Fifth Workshop on Computational Lin- guistics and Clinical Psychology: From Keyboard to Clinic, pages 25-36.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A prioritization model for suicidality risk assessment",
"authors": [
{
"first": "Han-Chin",
"middle": [],
"last": "Shing",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"W"
],
"last": "Oard",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8124--8137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han-Chin Shing, Philip Resnik, and Douglas W Oard. 2020. A prioritization model for suicidality risk as- sessment. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 8124-8137.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "# suicidal-a multipronged approach to identify and explore suicidal ideation in twitter",
"authors": [
{
"first": "Pradyumna",
"middle": [
"Prakhar"
],
"last": "Sinha",
"suffix": ""
},
{
"first": "Rohan",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Ramit",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Mahata",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [
"Ratn"
],
"last": "Shah",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "941--950",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradyumna Prakhar Sinha, Rohan Mishra, Ramit Sawh- ney, Debanjan Mahata, Rajiv Ratn Shah, and Huan Liu. 2019. # suicidal-a multipronged approach to identify and explore suicidal ideation in twitter. In Proceedings of the 28th ACM International Confer- ence on Information and Knowledge Management, pages 941-950.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multi-label patent categorization with non-local attention-based graph convolutional network",
"authors": [
{
"first": "Pingjie",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Ning Xia",
"suffix": ""
},
{
"first": "Jed",
"middle": [
"W"
],
"last": "Pitera",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Welser",
"suffix": ""
},
{
"first": "Nitesh V",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9024--9031",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pingjie Tang, Meng Jiang, Bryan Ning Xia, Jed W Pitera, Jeffrey Welser, and Nitesh V Chawla. 2020. Multi-label patent categorization with non-local attention-based graph convolutional network. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 34, pages 9024-9031.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Graph convolutional networks for text classification",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chengsheng",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7370--7377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370-7377.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts",
"authors": [
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the sixth workshop on computational linguistics and clinical psychology",
"volume": "",
"issue": "",
"pages": "24--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Pre- dicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24-33.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The overall architecture of the model."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "), through the aggregation function at each search depth k.After that, h k v , node v's representation at step k, is updated by combining h k(v), which is the representation of v's neighboring nodes at step k. As suggested in Hamilton et al.(2017), the neighboring nodes are uniformly sampled with a fixed-size set for each search depth. The initial output is h 0 v = X v . The series of updating processes is defined as follows.h (k)"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The example of aggregating information from neighborhood of the target node by CNN."
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "Example words of the generated suicide dictionary.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td>Post 1</td><td/><td/><td>Post 1</td><td>Post 2</td></tr><tr><td>die</td><td>die</td><td/><td/><td/><td>C-CNN</td><td>BR (3)</td><td>BR (3)</td></tr><tr><td/><td/><td/><td/><td/><td>SISMO</td><td>BR (3)</td><td>ID (2)</td></tr><tr><td/><td/><td>suicidal</td><td/><td/><td>BERT</td><td>BR (3)</td><td>ID (2)</td></tr><tr><td/><td/><td>Post 2</td><td/><td/><td>R-BERT</td><td>IN (1)</td><td>IN (1)</td></tr><tr><td/><td/><td/><td/><td>schizophrenia</td><td>C-GraphSAGE</td><td>SU (0)</td><td>IN (1)</td></tr><tr><td>mom</td><td/><td>end</td><td/><td/></tr><tr><td>life</td><td/><td/><td/><td/><td>True Risk</td><td>SU (0)</td><td>IN (1)</td></tr><tr><td>SU (0)</td><td>IN (1)</td><td>ID (2)</td><td>BR (3)</td><td>AT (4)</td></tr></table>",
"text": "\"I know the easiest way to die. To die of old age. Giving up is not what you really want to do. You came here for support because there is a part of you that doesn't want this. Think about that part and don't give in to the other side; the suicidal. side.\"\" From the day I was born, it's been a problem. there's no break. My Schizophrenia. , my mom and I went from house to house, we ended up in the ghetto ... and now I don't remember past of my life. I got through it, everything was ok, and now I can't do it all again. \"",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>: An ablation study on different aggregation functions over C-GraphSAGE.</td></tr></table>",
"text": "",
"html": null,
"num": null
}
}
}
}