{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:11:58.393012Z"
},
"title": "NYCU TWD@LT-EDI-ACL2022: Ensemble Models with VADER and Contrastive Learning for Detecting Signs of Depression from Social Media",
"authors": [
{
"first": "Wei-Yao",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Yang Ming Chiao Tung University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Yu-Chien",
"middle": [],
"last": "Tang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Yang Ming Chiao Tung University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Wei-Wei",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Yang Ming Chiao Tung University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Wen-Chih",
"middle": [],
"last": "Peng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Yang Ming Chiao Tung University",
"location": {
"settlement": "Hsinchu",
"country": "Taiwan"
}
},
"email": "wcpeng@nctu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a state-of-the-art solution to the LT-EDI-ACL 2022 Task 4: Detecting Signs of Depression from Social Media Text. The goal of this task is to detect the severity level of depression of people from their social media posts, where people often share their feelings on a daily basis. To detect the signs of depression, we propose a framework with pre-trained language models, which provide rich information without training from scratch; gradient boosting and deep learning models for modeling various aspects; and supervised contrastive learning for better generalization. Moreover, ensemble techniques are employed to exploit the different advantages of each method. Experiments show that our framework achieves the 2nd prize with a macro F1-score of 0.552, demonstrating the effectiveness and robustness of our approach.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a state-of-the-art solution to the LT-EDI-ACL 2022 Task 4: Detecting Signs of Depression from Social Media Text. The goal of this task is to detect the severity level of depression of people from their social media posts, where people often share their feelings on a daily basis. To detect the signs of depression, we propose a framework with pre-trained language models, which provide rich information without training from scratch; gradient boosting and deep learning models for modeling various aspects; and supervised contrastive learning for better generalization. Moreover, ensemble techniques are employed to exploit the different advantages of each method. Experiments show that our framework achieves the 2nd prize with a macro F1-score of 0.552, demonstrating the effectiveness and robustness of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media enable people to communicate and acquire information regardless of distance, owing to the rapid growth of the Internet. Moreover, people can express their emotions about posts, news, and discussions on social media through text and videos, which has attracted researchers interested in analyzing the emotional behavior of user comments. For instance, Saha et al. (2021) introduced a Twitter dataset for speech act classification and presented an attention mechanism to incorporate intra-modal and inter-modal information. AudiBERT, which exploits the multimodal nature of the human voice, was proposed to screen for depression (Toto et al., 2021), and Pirayesh et al. (2021) proposed a social-contagion-based framework built on meta-learning for early detection of depression.",
"cite_spans": [
{
"start": 374,
"end": 392,
"text": "Saha et al. (2021)",
"ref_id": "BIBREF12"
},
{
"start": 641,
"end": 660,
"text": "(Toto et al., 2021)",
"ref_id": "BIBREF15"
},
{
"start": 665,
"end": 691,
"text": "and Pirayesh et al. (2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this challenge hosted by LT-EDI 1 , given social media posts in English, the goal is to detect the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://sites.google.com/view/lt-edi-2022/home signs of depression and classify them into three labels, namely not depression, moderate, and severe. To tackle the shared task, we propose a framework with three methods for modeling the given texts. Specifically, the sentence embedding is produced by pre-trained models, and the VAD scores (positive, neutral, negative, and compound) are generated by VADER (Hutto and Gilbert, 2014). Our first method then feeds the sentence embedding and VAD scores into gradient boosting models, using SMOTE (Chawla et al., 2002) to mitigate the class imbalance issue. The second method uses a multi-layer perceptron (MLP) to fine-tune the pre-trained models.",
"cite_spans": [
{
"start": 539,
"end": 560,
"text": "(Chawla et al., 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The third method further incorporates a VAD embedding with the MLP to classify the signs of depression, and adopts supervised contrastive learning (Gunel et al., 2021) on both the sentence embedding and the VAD embedding to enhance generalization. Afterwards, we use ensemble techniques, which have been shown to substantially improve model performance (Wang et al., 2020; Wang and Peng, 2022), to exploit the advantage of each method and boost the performance. We use the dataset provided by (Sampath et al., 2022) to detect the signs of depression from social media text. The dataset contains 8,891 posts for training, 4,496 posts for validation, and 3,245 posts for evaluation; each sample is composed of three columns: PID, Text, and Label. Table 1 shows some examples from the dataset.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Gunel et al., 2021)",
"ref_id": "BIBREF4"
},
{
"start": 400,
"end": 419,
"text": "(Wang et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 420,
"end": 440,
"text": "Wang and Peng, 2022)",
"ref_id": "BIBREF17"
},
{
"start": 545,
"end": 567,
"text": "(Sampath et al., 2022)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 803,
"end": 810,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our main results and observations are described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a framework with three methods for modeling different aspects: gradient boosting models, fine-tuning pre-trained models, and fine-tuning pre-trained models with supervised contrastive learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The VAD scores provide additional sentiment signals for detecting the signs of depression, and we adopt ensemble techniques to take advantage of each model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our ensemble method achieved competitive performance in the shared task and won the 2nd prize (0.552 macro F1-score) in detecting signs of depression from social media text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social media are among the platforms people use to express their emotions, and can therefore be viewed as an environment for studying user feelings. Recently, there have been several approaches to detecting signs of depression in order to mitigate the negative impact of such emotions. For instance, Toto et al. (2021) introduced a transfer-learning framework over the multi-modality of textual context and audio characteristics of the human voice. Zogan et al. (2021) proposed DepressionNet, which summarizes a user's post history and applies different modalities to infer user behavior; this motivated us to include VAD scores as an additional post feature in this challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Figure 1 illustrates the pipeline of our framework. Given the input text, we first generate sentiment features (i.e., VAD scores) with VADER and sentence embeddings from pre-trained models. Then, we adopt three methods to model various aspects of the text and apply ensemble techniques to integrate their predictions. Specifically, we use an unsupervised sentiment predictor, VADER, to assign sentiment scores to each sentence for measuring the sentiment effect of its words.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We use SentenceTransformers (Reimers and Gurevych, 2019) to generate pre-trained sentence embeddings, and concatenate the sentiment feature embeddings with the pre-trained sentence embeddings to take different perspectives into account. Besides, SMOTE (Chawla et al., 2002) and CondensedNearestNeighbour (Gowda and Krishna, 1979) are used to tackle the imbalanced classification problem. Then, LightGBM (Ke et al., 2017) and XGBoost (Chen and Guestrin, 2016) are applied as classifiers to predict the probability of each category, reducing bias and variance by combining different learners. Cross-entropy is applied as the objective when tuning the hyper-parameters.",
"cite_spans": [
{
"start": 28,
"end": 56,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF11"
},
{
"start": 240,
"end": 267,
"text": "SMOTE (Chawla et al., 2002)",
"ref_id": null
},
{
"start": 310,
"end": 324,
"text": "Krishna, 1979)",
"ref_id": "BIBREF3"
},
{
"start": 400,
"end": 417,
"text": "(Ke et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 430,
"end": 455,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method 1: Gradient Boosting Models",
"sec_num": "3.1"
},
{
"text": "Fine-tuning pre-trained language models has demonstrated success in a wide range of natural language tasks, since such models provide rich information without the effort of training from scratch. To this end, we fine-tune three different pre-trained language models for this task: RoBERTa (Liu et al., 2019), ELECTRA (Clark et al., 2020), and DeBERTa (He et al., 2021). Specifically, for each pre-trained model, each given text is first tokenized, and the model then produces a sentence embedding. The sentence embedding is fed into an MLP to generate the predicted probabilities of depression. To tackle the imbalance issue, we employ torchsampler 2 to rebalance the class distributions. The objective function minimizes the cross-entropy, and the pre-trained models are obtained from (Wolf et al., 2019).",
"cite_spans": [
{
"start": 304,
"end": 322,
"text": "(Liu et al., 2019)",
"ref_id": null
},
{
"start": 333,
"end": 353,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 368,
"end": 385,
"text": "(He et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 815,
"end": 834,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2: Pre-Trained Models",
"sec_num": "3.2"
},
{
"text": "To combine the ideas of the previous two methods, the sentence embedding is generated in the same way as in Sec. 3.2. Moreover, we pass the VAD scores through an embedding layer with the GeLU activation function (Hendrycks and Gimpel, 2016), which has been used in several natural language tasks. Afterwards, we concatenate the sentence embedding and the VAD embedding as the input of an MLP to classify the probabilities of each sign of depression. The imbalance technique of Sec. 3.2 is also used. We jointly train with supervised contrastive learning (Gunel et al., 2021) and cross-entropy to enhance the generalization of our method. Specifically, supervised contrastive learning is applied to the sentence embeddings and the VAD embeddings, respectively, so that similar sentences become closer while irrelevant sentences are pushed farther apart. The VAD embeddings follow the same behavior, since it is reasonable that similar sentiment features should lie closer together than dissimilar ones.",
"cite_spans": [
{
"start": 206,
"end": 234,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF6"
},
{
"start": 543,
"end": 563,
"text": "(Gunel et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method 3: Contrastive Pre-Trained Models",
"sec_num": "3.3"
},
{
"text": "To combine the different advantages of each model, a soft-voting ensemble is used over the three methods. Specifically, the predicted probabilities of Method 1, P_1, are obtained by averaging LightGBM and XGBoost; the predicted probabilities of Method 2, P_2, by averaging RoBERTa, ELECTRA, and DeBERTa; and the predicted probabilities of Method 3, P_3, by a weighted average of RoBERTa, ELECTRA, and DeBERTa with weights of 0.15, 0.5, and 0.35, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Techniques",
"sec_num": "3.4"
},
{
"text": "To boost the performance, the final predicted probabilities P are computed with a power weighted sum as in (Wang and Peng, 2022):",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Wang and Peng, 2022)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Techniques",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = P_1^N \\times w_1 + P_2^N \\times w_2 + P_3^N \\times w_3,",
"eq_num": "(1)"
}
],
"section": "Ensemble Techniques",
"sec_num": "3.4"
},
{
"text": "where w_1, w_2, and w_3 are the weights of the corresponding models, and N is the power exponent. In this paper, we tune these hyper-parameters on the validation set, setting N to 4 and the ensemble weights to 1.00, 0.67, and 0.69, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Techniques",
"sec_num": "3.4"
},
{
"text": "Due to the page limit, we report the selected hyper-parameters of each method and the official code in the appendix 3 . Note that all hyper-parameters are tuned on the validation set by grid search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.1"
},
{
"text": "We first examine the advantages of each model; Table 2 reports the F1-score of each category for each method on the validation set. We observe that each model specializes in detecting different signs of depression. For instance, the gradient boosting models are adept at identifying not depression. As a result, ensemble techniques incorporate different models to improve performance and robustness.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Depression Performance",
"sec_num": "4.2"
},
{
"text": "The results for the testing set are shown in Table 3 in terms of accuracy and macro-F1. Our ensemble model performs best among the methods we introduced and won the 2nd prize among all participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Depression Performance",
"sec_num": "4.2"
},
{
"text": "In this paper, we introduce a framework for the Detecting Signs of Depression from Social Media Text challenge, which incorporates three different methods, namely gradient boosting models, pre-trained models, and contrastive pre-trained models. Furthermore, ensemble techniques are adopted to integrate the strengths of each model. The experimental results demonstrate the effectiveness of our framework and verify the different capabilities of each method. Ensembling the three approaches achieves better performance on both the validation set and the testing set, resulting in a 2nd-place ranking with competitive performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/ufoym/imbalanced-dataset-sampler",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/wywyWang/Depression-Detection-LT-EDI-ACL-2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SMOTE: synthetic minority over-sampling technique",
"authors": [
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Hall",
"suffix": ""
},
{
"first": "W",
"middle": [
"Philip"
],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "J. Artif. Intell. Res",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res., 16:321-357.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Xgboost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In KDD, pages 785- 794. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "ELECTRA: pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than generators. In ICLR. OpenReview.net.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The condensed nearest neighbor rule using the concept of mutual nearest neighborhood (corresp.)",
"authors": [
{
"first": "K",
"middle": [
"Chidananda"
],
"last": "Gowda",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Krishna",
"suffix": ""
}
],
"year": 1979,
"venue": "IEEE Trans. Inf. Theory",
"volume": "25",
"issue": "4",
"pages": "488--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Chidananda Gowda and G. Krishna. 1979. The con- densed nearest neighbor rule using the concept of mutual nearest neighborhood (corresp.). IEEE Trans. Inf. Theory, 25(4):488-490.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Supervised contrastive learning for pre-trained language model fine-tuning",
"authors": [
{
"first": "Beliz",
"middle": [],
"last": "Gunel",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2021,
"venue": "ICLR. OpenReview.net",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In ICLR. OpenReview.net.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deberta: decoding-enhanced bert with disentangled attention",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "ICLR. OpenReview.net",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: decoding-enhanced bert with disentangled attention. In ICLR. OpenRe- view.net.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bridging nonlinearities and stochastic regularizers with gaussian error linear units",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Bridging non- linearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "VADER: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "Clayton",
"middle": [
"J"
],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clayton J. Hutto and Eric Gilbert. 2014. VADER: A par- simonious rule-based model for sentiment analysis of social media text. In ICWSM. The AAAI Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lightgbm: A highly efficient gradient boosting decision tree",
"authors": [
{
"first": "Guolin",
"middle": [],
"last": "Ke",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Finley",
"suffix": ""
},
{
"first": "Taifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weidong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Qiwei",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3146--3154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. Lightgbm: A highly efficient gradient boosting decision tree. In NIPS, pages 3146-3154.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Mentalspot: Effective early screening for depression based on social contagion",
"authors": [
{
"first": "Jahandad",
"middle": [],
"last": "Pirayesh",
"suffix": ""
},
{
"first": "Haiquan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wei-Shinn",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2021,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "1437--1446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jahandad Pirayesh, Haiquan Chen, Xiao Qin, Wei-Shinn Ku, and Da Yan. 2021. Mentalspot: Effective early screening for depression based on social contagion. In CIKM, pages 1437-1446. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3980--3990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In EMNLP/IJCNLP (1), pages 3980-3990. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Towards sentiment and emotion aided multi-modal speech act classification in twitter",
"authors": [
{
"first": "Tulika",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Apoorva",
"middle": [],
"last": "Upadhyaya",
"suffix": ""
},
{
"first": "Sriparna",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2021,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "5727--5737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tulika Saha, Apoorva Upadhyaya, Sriparna Saha, and Pushpak Bhattacharyya. 2021. Towards sentiment and emotion aided multi-modal speech act classifi- cation in twitter. In NAACL-HLT, pages 5727-5737. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Findings of the shared task on Detecting Signs of Depression from Social Media",
"authors": [
{
"first": "Bharathi Raja",
"middle": [],
"last": "Chakravarthi",
"suffix": ""
},
{
"first": "Jerin",
"middle": [],
"last": "Mahibha C",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, and Jerin Mahibha C. 2022. Findings of the shared task on Detecting Signs of Depression from Social Media. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Audibert: A deep transfer learning multimodal classification framework for depression screening",
"authors": [
{
"first": "Ermal",
"middle": [],
"last": "Toto",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Tlachac",
"suffix": ""
},
{
"first": "Elke",
"middle": [
"A"
],
"last": "Rundensteiner",
"suffix": ""
}
],
"year": 2021,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "4145--4154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ermal Toto, M. L. Tlachac, and Elke A. Rundensteiner. 2021. Audibert: A deep transfer learning multimodal classification framework for depression screening. In CIKM, pages 4145-4154. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Emotiongif-yankee: A sentiment classifier with robust model based ensemble methods",
"authors": [
{
"first": "Wei-Yao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Shiang",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yu-Chien",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Yao Wang, Kai-Shiang Chang, and Yu-Chien Tang. 2020. Emotiongif-yankee: A sentiment classifier with robust model based ensemble methods. CoRR, abs/2007.02259.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Team yao at factify 2022: Utilizing pre-trained models and coattention networks for multi-modal fact verification",
"authors": [
{
"first": "Wei-Yao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wen-Chih",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Yao Wang and Wen-Chih Peng. 2022. Team yao at factify 2022: Utilizing pre-trained models and co- attention networks for multi-modal fact verification. CoRR, abs/2201.11664.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Depressionnet: Learning multimodalities with user post summarization for depression detection on social media",
"authors": [
{
"first": "Hamad",
"middle": [],
"last": "Zogan",
"suffix": ""
},
{
"first": "Imran",
"middle": [],
"last": "Razzak",
"suffix": ""
},
{
"first": "Shoaib",
"middle": [],
"last": "Jameel",
"suffix": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamad Zogan, Imran Razzak, Shoaib Jameel, and Guan- dong Xu. 2021. Depressionnet: Learning multi- modalities with user post summarization for depres- sion detection on social media. In SIGIR, pages 133- 142. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "The illustration of our proposed framework.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>PID</td><td>Text</td><td>Label</td></tr><tr><td>train-pid-1</td><td>My life gets worse every year : That's what it feels like anyway...</td><td>moderate</td></tr><tr><td>train-pid-2</td><td>Words can't describe how bad I feel right now : I just want to fall asleep forever.</td><td>severe</td></tr><tr><td>train-pid-3</td><td>Is anybody else hoping the Coronavirus shuts everybody down?</td><td>not depression</td></tr></table>",
"num": null,
"html": null,
"text": "Samples from the depression dataset.",
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td/><td>Gradient boosting models</td><td>Pre-trained models</td><td>Contrastive pre-trained models</td><td>Ensemble model</td></tr><tr><td>Not Depression</td><td>0.638</td><td>0.578</td><td>0.613</td><td>0.630</td></tr><tr><td>Moderate</td><td>0.633</td><td>0.704</td><td>0.667</td><td>0.707</td></tr><tr><td>Severe</td><td>0.416</td><td>0.510</td><td>0.506</td><td>0.532</td></tr><tr><td>Macro-F1</td><td>0.562</td><td>0.597</td><td>0.595</td><td>0.623</td></tr></table>",
"num": null,
"html": null,
"text": "F1-score of each category of each method for the validation set.",
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td/><td>Gradient boosting models</td><td>Pre-trained models</td><td>Contrastive pre-trained models</td><td>Ensemble model</td></tr><tr><td>Accuracy</td><td>0.571</td><td>0.635</td><td>0.597</td><td>0.633</td></tr><tr><td>Macro-F1</td><td>0.496</td><td>0.528</td><td>0.523</td><td>0.552</td></tr></table>",
"num": null,
"html": null,
"text": "Performance of our approach for the testing set.",
"type_str": "table"
}
}
}
}