| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:31:13.161139Z" |
| }, |
| "title": "Learning to Automate Follow-up Question Generation using Process Knowledge for Depression Triage on Reddit Posts", |
| "authors": [ |
| { |
| "first": "Shrey", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "shrey.gupta@students.iiit.ac.in" |
| }, |
| { |
| "first": "Anmol", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "anmol.agarwal@students.iiit.ac.in" |
| }, |
| { |
| "first": "Manas", |
| "middle": [], |
| "last": "Gaur", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of South Carolina", |
| "location": { |
| "region": "SC", |
| "country": "USA" |
| } |
| }, |
| "email": "mgaur@email.sc.edu" |
| }, |
| { |
| "first": "Kaushik", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of South Carolina", |
| "location": { |
| "region": "SC", |
| "country": "USA" |
| } |
| }, |
| "email": "kaushikr@email.sc.edu" |
| }, |
| { |
| "first": "Vignesh", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of South Carolina", |
| "location": { |
| "region": "SC", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ponnurangam", |
| "middle": [], |
| "last": "Kumaraguru", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Sheth", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of South Carolina", |
| "location": { |
| "region": "SC", |
| "country": "USA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "Conversational Agents (CAs) powered with deep language models (DLMs) have shown tremendous promise in the domain of mental health. Prominently, CAs have been used to provide informational or therapeutic services (e.g., cognitive behavioral therapy) to patients. However, the utility of CAs to assist in mental health triaging has not been explored in existing work, as it requires a controlled generation of follow-up questions (FQs), which are often initiated and guided by mental health professionals (MHPs) in clinical settings. In the context of 'depression', our experiments show that DLMs coupled with process knowledge in a mental health questionnaire generate 12.54% and 9.37% better FQs based on similarity and longest common subsequence matches to questions in the PHQ-9 dataset, respectively, when compared with DLMs without process knowledge support. Despite coupling with process knowledge, we find that DLMs are still prone to hallucination, i.e., generating redundant, irrelevant, and unsafe FQs. We demonstrate the challenge of using existing datasets to train a DLM for generating FQs that adhere to clinical process knowledge. To address this limitation, we prepared an extended PHQ-9-based dataset, PRIMATE, in collaboration with MHPs. PRIMATE contains annotations regarding whether a particular question in the PHQ-9 dataset has already been answered in the user's initial description of the mental health condition. We used PRIMATE to train a DLM in a supervised setting to identify which of the PHQ-9 questions can be answered directly from the user's post and which ones would require more information from the user. Using performance analysis based on MCC scores, we show that PRIMATE is appropriate for identifying questions in PHQ-9 that could guide generative DLMs towards controlled FQ generation (with minimal hallucination) suitable for aiding triaging. The", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Conversational agents (CAs) powered by DLMs are software designed to interact with human users for specific tasks. For mental health purposes, particularly depression, CAs have been studied extensively in prior work for helping patients follow generic mental health guidelines, typically by providing reminders to assist patients in adhering to the medication and therapy strategy outlined by a mental health professional (MHP) 12 . However, previous work on depression has not examined the use of CAs for triaging. For the purpose of triaging, CAs should learn to generate controlled and clinical process knowledge-guided discourse that can assist MHPs in diagnosis. Our research suggests a clinically grounded and explainable methodology to develop conversational information-seeking tools, first to learn \"what symptoms the user is suffering from\" and \"what extra information is needed for triaging.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "CAs are susceptible to irrelevant and sometimes harmful questions when generating FQs or responses to a patient suffering from depression (Miner et al., 2016). The primary reason for irrelevant and harmful questions is that CAs cannot incorporate contextual information when generating appropriate follow-up questions (FQs) (see Figure 1). Further, the sensitivity of the conversation and a controlled generation process are essential characteristics of patient-clinician interactions, which are difficult to embed in DLM-based CAs. Therefore, question generation (QG) in mental health is challenging, and research to develop CAs for automating triage remains unexplored. Figure 1: Reddit is a rich source for bringing a crowd perspective to training DLMs over conversational data. On the left is a sample post from r/depression_help, which sees inquisitive interaction from other Reddit users. At the top-right are the FQs asked by the Reddit users in the comments. These FQs are aimed at understanding the severity of the mental health situation of the user and are hence diagnostically relevant. At the bottom-right are the questions generated by DLMs. It can be seen that these are not suitable FQs.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 158, |
| "text": "(Miner et al., 2016)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 328, |
| "end": 338, |
| "text": "Figure 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Procedures for generating semantically related and logically ordered questions in the mental health domain are a form of process knowledge manifested in various clinical instruments for mental health triage. For example, the severity of depression is measured using the Patient Health Questionnaire (PHQ-9). Enforcing DLMs to follow process knowledge, as in PHQ-9, would make CAs generate FQs the way an MHP would when seeking information from the patient (Karasz et al., 2012). Unfortunately, datasets that meet this criterion are currently unavailable. Though clinical diagnostic interviews exist, they are not sufficiently rich, dense, and varied to train DLMs (Manas et al., 2021; Gratch et al., 2014). Further, we require dataset(s) that include support-seeking queries and natural questions that show help-providing behavior. For this purpose, anonymized user-generated conversational data in mental health support communities on Reddit provides a rich source of fine-grained, contextual, and diverse information suitable for fine-tuning DLMs. Specific to depression, we explored posts and comments in r/depression_help.", |
| "cite_spans": [ |
| { |
| "start": 460, |
| "end": 481, |
| "text": "(Karasz et al., 2012)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 669, |
| "end": 689, |
| "text": "(Manas et al., 2021;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 690, |
| "end": 710, |
| "text": "Gratch et al., 2014)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the current research, we emphasize the limitations of T5, a state-of-the-art DLM (current DLMs are either variants of T5 or built from T5), in generating process knowledge-like FQs using the data from r/depression_help (Raffel et al., 2019). We filtered the dataset by retaining only posts with at least one comment that seeks additional information from the user seeking support. Further filtering of comments was performed using PHQ-9 to assist T5 in generating relevant FQs (see Figure 2). We found that the outcome is substantial for a single-turn question answering model; however, it is not suitable for mental health triage, which is a discourse. We conducted a series of experiments, keeping our focus on 'depression', and leveraged its associated process knowledge for mental health triage: the PHQ-9 (Kroenke et al., 2001). To the best of our knowledge, FQ generation relating to depression has never been studied using PHQ-9 for discourse modeling and generation.", |
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 242, |
| "text": "(Raffel et al., 2019)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 808, |
| "end": 829, |
| "text": "(Kroenke et al., 2001", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 485, |
| "end": 493, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We make the following key contributions: (a) Extending PHQ-9: PHQ-9 questions are limited in scope for common NLP tasks like fine-tuning. In collaboration with MHPs, we prepared a list of 134 sub-questions for the nine PHQ-9 questions for better fine-tuning of T5. (b) We analyzed the performance of three variants of T5 using BLEURT (Sellam et al., 2020) and ROUGE-L scores, which measure the semantic relatedness and exact-match similarity of generated questions to sub-questions of PHQ-9. (c) PRIMATE Dataset: Lessons learned during our experiments suggested that T5 must be trained in a supervised setting to capture 'what the user has already mentioned about his/her depression condition in the post-text' and then generate FQs. Along with MHPs, we constructed a novel PRIMATE (PRocess knowledge Integrated Mental heAlth daTasEt) dataset that would train DLMs to capture PHQ-9-answerable information from user text. In this research, we restrict our experiments and discussion to whether PRIMATE can help capture context from the user post relevant to some PHQ-9 questions and point out which other PHQ-9 questions would form candidates to direct FQ generation. Our approach and insights have applications to Anxiety (GAD-7), Suicide (C-SSRS), and other mental health disorders as well.", |
| "cite_spans": [ |
| { |
| "start": 329, |
| "end": 350, |
| "text": "(Sellam et al., 2020)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, DLMs have attracted much attention for question answering, thanks to their successes in NLP applications (Thoppilan et al., 2022; Borgeaud et al., 2021). Research on question generation has focused on improving the legibility and relevance of questions. This is because DLMs continue to hallucinate while generating questions in general-purpose domains, which can lead to factually incorrect responses. This can have severe consequences in the mental health domain (Thoppilan et al., 2022). Recently, inappropriate and toxic behaviors of language models have been extensively studied and reported in the literature (Dinan et al., 2021; Weidinger et al., 2021). Solutions around fine-tuning, augmenting a neural retriever to support generation, and rules on generation quality have been proposed as possible remedies (Manas et al., 2021). These have been effective in the general-purpose domain; however, the research surrounding DLMs is yet to unfold in mental health. ELIZA (Weizenbaum, 1983) could transform users' statements into questions but employs labor-intensive templates to generate safe and relevant questions. Models like RAG and REALM were developed to include external knowledge to support question generation (Lewis et al., 2020; Guu et al., 2020). However, these models are still susceptible to incoherent and irrelevant FQ generation. Further, their end-to-end learning approach is too rigid to support the process-guided question generation and discourse often followed in a clinical setting for triage (Gaur et al., 2021).", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 139, |
| "text": "(Thoppilan et al., 2022;", |
| "ref_id": null |
| }, |
| { |
| "start": 140, |
| "end": 162, |
| "text": "Borgeaud et al., 2021)", |
| "ref_id": null |
| }, |
| { |
| "start": 476, |
| "end": 500, |
| "text": "(Thoppilan et al., 2022)", |
| "ref_id": null |
| }, |
| { |
| "start": 627, |
| "end": 647, |
| "text": "(Dinan et al., 2021;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 648, |
| "end": 671, |
| "text": "Weidinger et al., 2021)", |
| "ref_id": null |
| }, |
| { |
| "start": 828, |
| "end": 848, |
| "text": "(Manas et al., 2021)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 989, |
| "end": 1007, |
| "text": "(Weizenbaum, 1983)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1238, |
| "end": 1258, |
| "text": "(Lewis et al., 2020;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1259, |
| "end": 1276, |
| "text": "Guu et al., 2020)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1530, |
| "end": 1549, |
| "text": "(Gaur et al., 2021)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In theory, DLMs should be capable of extracting pieces of information from the user's description that portray the user's understanding, and of leveraging them to generate the next FQ. For such a task, supervised training of DLMs with process knowledge, coupled with information retrieval over domain-specific mental health knowledge, is a viable solution. This is because mental health knowledge sources (e.g., SCID (Structured Clinical Interviews for DSM-5)) have structured/semi-structured information on how interviews are performed (Brodey et al., 2018). Our research substantiates that DLMs (e.g., T5) generate low-quality follow-up questions in the context of depression for triage, and that granting external knowledge through PHQ-9 reduces the rate at which models generate meaningless FQs (Thoppilan et al., 2022; Komeili et al., 2021). In the current research, we define an approach for supervised training of DLMs on a specific dataset that would yield a probability distribution over PHQ-9 (with support from Extended PHQ-9). These probabilities will confirm whether the DLM can identify cues from user text that can inform a set of PHQ-9 questions. The remaining PHQ-9 questions are potential FQs.", |
| "cite_spans": [ |
| { |
| "start": 544, |
| "end": 565, |
| "text": "(Brodey et al., 2018)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 802, |
| "end": 826, |
| "text": "(Thoppilan et al., 2022;", |
| "ref_id": null |
| }, |
| { |
| "start": 827, |
| "end": 847, |
| "text": "Komeili et al., 2021", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Datasets: Prior datasets such as Counsel Chat (CounselChat), Counseling Conversations (Huang, 2015), Role Play (Demasi et al., 2019), Crisis Text Line (Althoff et al., 2016), and Reddit C-SSRS (Gaur et al., 2019) have been created to train CAs for mental health counseling. Trained CAs can engage in single-turn question answering; however, conducting a conversation requires capturing user context and leveraging clinical instruments to guide the generation of FQs.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 99, |
| "text": "(Huang, 2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 112, |
| "end": 133, |
| "text": "(Demasi et al., 2019)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 153, |
| "end": 175, |
| "text": "(Althoff et al., 2016)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 194, |
| "end": 213, |
| "text": "(Gaur et al., 2019)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Dataset for QG: Our approach to data collection involves scraping posts and comments from r/depression_help, a subreddit meant to provide advice and support to individuals suffering from depression. The posts on this subreddit carry flair tags such as SEEKING HELP, SEEKING ADVICE, and REQUESTING SUPPORT. We filter the data curated from this subreddit on the flair-tag attribute to retain only advice-, help-, or support-seeking posts and their comments. After filtering, our dataset had approximately 21,000 posts. Each post contains a title, description, and comments. On average, each post has 5 comments. Figure 2 : An illustration of our pipeline for developing Model 2 and Model 3 using T5 as the deep language model. Starting with posts (including comments) from r/depression_help, we filter out comments that are neither interrogative nor information-seeking in nature to yield a posts-questions dataset for fine-tuning T5. This dataset was further filtered using extended PHQ-9 before using it to fine-tune T5 (Model 3). Table 1 : Examples of questions generated by T5 when tasked to generate FQs when the user query for the post in Figure 1 was provided as input. Model 1, which is a pre-trained T5 (Raffel et al., 2019), often generates questions which are irrelevant, unsafe, incoherent, and redundant. Model 2, which is T5 fine-tuned on r/depression_help, seems to be relatively coherent and inquisitive compared to Model 1. However, both models generate questions about the topic that the user has already discussed in their query. As a result, we see that pre-trained and fine-tuned DLMs fail to generate FQs. By enforcing FQ generation using a dataset curated with extended PHQ-9, the generated questions are mostly inquisitive. This is shown by Model 3. Still, a lot of generations are around the problem the user mentioned. Next, we chunked the main text of each post into smaller groups of sentences (chunks) of less than 512 tokens while making sure", |
| "cite_spans": [ |
| { |
| "start": 1374, |
| "end": 1395, |
| "text": "(Raffel et al., 2019)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 773, |
| "end": 781, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1195, |
| "end": 1202, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1307, |
| "end": 1315, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "no sentence is segmented in between. The motivation for chunking is to ensure no context is lost from the post due to T5's limitation of processing at most 512 input tokens (DLMs in general suffer from such representation limits). We also appended the post title to each chunk to ensure that the main idea of each post was captured in its chunks. This curated dataset tests T5's capability to generate FQs similar to any of the questions in the extended PHQ-9 questionnaire.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |
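| { |
| "text": "The chunking step described above can be sketched as follows. This is a minimal illustration in which the helper name chunk_post and the whitespace-based token count are our assumptions; the actual pipeline would count tokens with the T5 tokenizer.

```python
import re

def chunk_post(title, body, max_tokens=512, n_tokens=lambda s: len(s.split())):
    """Greedily pack whole sentences into chunks of at most max_tokens,
    prepending the post title to every chunk so the main idea is kept."""
    sentences = re.split(r"(?<=[.!?])\s+", body.strip())
    budget = max_tokens - n_tokens(title)
    chunks, current = [], []
    for sent in sentences:
        if current and sum(map(n_tokens, current)) + n_tokens(sent) > budget:
            chunks.append(title + " " + " ".join(current))
            current = []
        current.append(sent)  # a sentence is never split across chunks
    if current:
        chunks.append(title + " " + " ".join(current))
    return chunks
```

Note that, as a simplification, a single sentence longer than the budget still becomes its own (over-budget) chunk; a production version would need to handle that case explicitly.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |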
| { |
| "text": "Extending PHQ-9 to support FQ generation: PHQ-9 questions are subject to different interpretations depending on patient-MHP interaction. Table 2: In this example, the generated questions from both Model 2 and Model 3 seem to be relevant FQs, but they do not assess the severity of the mental health condition, despite Model 3 being fine-tuned on a dataset filtered by PHQ-9 questions. In comparison to the qualitative outcome in Table 1 , this showcases the inability of T5 to support mental health triage.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 385, |
| "end": 392, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Additionally, nine questions are limited in scope for use in tasks like fine-tuning and similarity-based performance evaluations. Therefore, to increase the coverage of PHQ-9, we collaborated with MHPs to create sub-questions for each question in PHQ-9. First, we used Google SERP API 4 and Microsoft Bing Search API 5 to retrieve \"People-Also-Ask\" questions. For each PHQ-9 question, we retrieved 40 questions by manually searching and assessing their relevance. Next, we provided the set of 360 questions to three MHPs for assessment. MHPs evaluated the questions on two grounds: (a) whether they would ask such a question of a patient (relevance); (b) if yes, when such a question should be asked (rank). Based on their ratings, we created a final set of 134 sub-questions for the nine questions in PHQ-9 6 , resulting in a total of 143 questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |
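| { |
| "text": "As an illustration of the MHP assessment step, the sketch below keeps a candidate sub-question when a majority of the three MHPs marks it relevant, and orders the survivors by their mean assigned rank. The helper name curate_subquestions and the exact aggregation rule are our assumptions, not the paper's published procedure.

```python
from statistics import mean

def curate_subquestions(candidates):
    """candidates: list of (question, relevance votes from 3 MHPs, ranks).
    Keep majority-relevant questions, ordered by mean assigned rank."""
    kept = [(q, mean(ranks)) for q, votes, ranks in candidates if sum(votes) >= 2]
    return [q for q, _ in sorted(kept, key=lambda item: item[1])]
```
",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Generation (QG)", |
| "sec_num": "3" |
| }, |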
| { |
| "text": "We used an off-the-bench T5-base QG model that was fine-tuned on the SQuAD 2.0 question generation dataset (Rajpurkar et al., 2018) [Model 1]. Next, we fine-tuned Model 1 on r/depression_help posts and comments. To align with our task of making T5 generate relevant FQs, we filtered out comments which were non-interrogative. We kept only the interrogative statements asked by Reddit users in the comments [Model 2] . Not all interrogative comments by Reddit users are diagnostically relevant FQs (e.g., \"Can you use MS Excel?\", \"Were your interactions on FaceTime?\"). To remove such questions, we further filtered the dataset by calculating the maximum BLEURT score between each question (present in the comments) and the questions in extended PHQ-9. We applied a threshold of 0.60 to this score 7 . This removed harmful and diagnostically irrelevant questions while preserving contextual, semantically relevant, and legible questions [Model 3]. See Figure 1 for examples. Table 3 : Experimental results comparing different models in generating questions that match the sub-questions in PHQ-9. Q_c is the set of questions generated for each chunk; the performance is recorded over all the generated questions (Q). \u03b4 was used as the threshold on the similarity between a generated question and PHQ-9 sub-questions while calculating hit rate. BLEURT records semantic similarity, whereas ROUGE-L records the longest-common-subsequence exact match between a generated question and PHQ-9 sub-questions. The highest performance on semantic and string similarity is bolded. The acceptable performance of Model 3, achieved using PHQ-9, motivated us to prepare PRIMATE. Figure 3 : A post in PRIMATE annotated with PHQ-9. The questions marked \"YES\" are answerable by DLMs using the mental health-specific cues in the user text. The questions marked \"NO\" are the questions a DLM should consider asking as FQs. Sentences within [] were taken as signals that the \"YES\"-marked questions had already been answered in the post.", |
| "cite_spans": [ |
| { |
| "start": 405, |
| "end": 414, |
| "text": "[Model 2]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 942, |
| "end": 967, |
| "text": "See Fig 1 for examples of", |
| "ref_id": null |
| }, |
| { |
| "start": 968, |
| "end": 975, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1642, |
| "end": 1650, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Models for FQ Generation:", |
| "sec_num": null |
| }, |
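| { |
| "text": "The comment-filtering step for Model 3 can be sketched as below. We use a toy Jaccard token overlap purely as a runnable stand-in for BLEURT, and the helper names filter_questions and jaccard are our own; the paper's pipeline applies the 0.60 threshold to the maximum BLEURT score against extended PHQ-9.

```python
def jaccard(a, b):
    # toy token-overlap similarity; the paper uses BLEURT here
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def filter_questions(candidates, phq9_extended, score_fn=jaccard, threshold=0.60):
    """Keep a candidate FQ only if its best similarity against any
    extended-PHQ-9 question exceeds the threshold (0.60 in the paper)."""
    return [q for q in candidates
            if max(score_fn(q, ref) for ref in phq9_extended) > threshold]
```

Taking the maximum over all 143 reference questions means a candidate survives if it resembles any clinically grounded sub-question, which is what makes the filter permissive rather than question-specific.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models for FQ Generation:", |
| "sec_num": null |
| }, |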
| { |
| "text": "Out of the approximately 21,000 posts, the performance of Models 1, 2, and 3 was examined on the 2003 posts that had at least one interrogative comment. Each of the three models was made to generate FQs in sets of 5, 10, and 15 through nucleus sampling (Holtzman et al., 2019). For a generated question, the BLEURT score was computed with each question in Extended PHQ-9, and the maximum among those scores was taken as the score for the generated question. A clear distinction between Models 1, 2, and 3 is the nature of the questions asked. Model 1 generated closed-book questions, whereas Models 2 and 3 show some inquisitive nature and seem more focused on the mental health domain, which can be attributed to the effect of fine-tuning on Reddit (see Tables 1 and 2). We captured the performance of the models quantitatively using 'hit rate' as a metric. For a generated question (q), we define:", |
| "cite_spans": [ |
| { |
| "start": 239, |
| "end": 262, |
| "text": "(Holtzman et al., 2019)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 749, |
| "end": 756, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "score(q) = max(bleurt_score(q, q_1), bleurt_score(q, q_2), ..., bleurt_score(q, q_143)),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "where q_1, q_2, ..., q_143 \u2208 Extended PHQ-9. Across all 2003 posts, we had C = 2575 chunks 8 . Let the total number of questions generated by a model be |Q| and let |Q_c| denote the number of questions generated by the model for a given chunk. For experimentation, we set |Q_c| to have values {5, 10, 15}. Thus, |Q| = |Q_c| * C. Then the Hit Rate for a model was computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "Hit Rate(model, |Q_c|) = (1 / |Q|) \u2211_{q \u2208 Q} I(score(q) > \u03b4),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "where \u03b4 is the threshold on the similarity between a generated question in a chunk and the sub-questions in PHQ-9, and I[\u03c6] is the indicator function, taking value 1 when the predicate \u03c6 holds and 0 otherwise (Table 3 has the scores).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 184, |
| "end": 191, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
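| { |
| "text": "The two definitions above translate directly into code. This is a sketch in which sim stands in for bleurt_score; the function names mirror the formulas rather than any released implementation.

```python
def score(q, extended_phq9, sim):
    # best match of a generated question against the 143 reference questions
    return max(sim(q, ref) for ref in extended_phq9)

def hit_rate(generated, extended_phq9, sim, delta):
    # fraction of all generated questions whose best-match score clears delta
    hits = sum(1 for q in generated if score(q, extended_phq9, sim) > delta)
    return hits / len(generated)
```
",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |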
| { |
| "text": "Inference: (1) Regardless of fine-tuning and filtering based on PHQ-9 questions, T5 does not inherently capture the meaning and usage of words in the mental health context. Moreover, T5 fails to generate FQs as legible, relevant, and safe as PHQ-9 questions. Therefore, we scrutinize the generated FQs by mapping them to the most similar questions in extended PHQ-9. Examples of irrelevant generations that T5 considered relevant are: (a) \"Wtf?\" (generated FQ) was found most similar to \"Do you have hope?\" (PHQ-9); (b) \"What did Boyfriend suffocate me with during his break up a week after I got a diagnosis?\" (generated FQ) was found most similar to \"What do you think makes you a failure?\" (PHQ-9). The latter generated question is also redundant, as its answer was already present in the original post.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Many generated questions contain extreme language due to the informal nature of the Reddit platform, which is a very sensitive issue, especially in the mental health domain. An example: \"Did you f***ing realize that f***ing people are f***ing too?\" (generated FQ) was found to be most similar to \"What do you think makes you a failure?\". Thus, T5 and its variants need to capture \"what the user knows and has already mentioned in their post\" by checking which PHQ-9 questions are already answerable from the user's post before generating the next probable FQs, in order to avoid redundancy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis of Models for Question Generation:", |
| "sec_num": null |
| }, |
| { |
| "text": "We conceptualize our approach on the duality of data and the process knowledge contained in PHQ-9 (see Figure 4). First, a BERT Answerability Evaluator identifies which questions in PHQ-9 are already answerable (using the user's initial description of their condition in the post) and which ones need more information to be answerable. The latter questions form candidates for training a generative DLM for FQ generation. We present PRIMATE, a dataset consisting of Reddit posts containing user situations describing their health conditions, along with whether the questions in PHQ-9 are answerable using the content in the posts. Each question is attributed with a binary \"yes\" or \"no\" label stating whether the user's description already contains the answer to that question (see Table 4). PRIMATE was created through a month-long annotation-evaluation cycle between MHPs and crowd workers. A total of five crowd workers performed this task, achieving an initial inter-annotator agreement of 67% (Fleiss' kappa). Subsequently, the MHPs assessed the quality of annotations and provided suggestions for improvement, leading to an acceptable agreement score of 85%. A sample annotated post in PRIMATE is shown in Figure 3.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 103, |
| "end": 112, |
| "text": "Figure 4)", |
| "ref_id": null |
| }, |
| { |
| "start": 785, |
| "end": 793, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1216, |
| "end": 1224, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "PRIMATE for FQ Generation", |
| "sec_num": "4" |
| }, |
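| { |
| "text": "Conceptually, each PRIMATE record pairs a post with nine binary answerability labels. The shape below is a hypothetical illustration (the field names and the helper follow_up_candidates are ours, not the released schema), showing how the \"no\" labels yield FQ candidates.

```python
# Hypothetical shape of one PRIMATE record: a post plus a yes/no
# answerability label for each of the nine PHQ-9 questions.
record = {
    "post": "I can't sleep and I feel worthless ...",
    "annotations": {  # True = already answered in the post, False = FQ candidate
        "Q1": False, "Q2": True, "Q3": True, "Q4": False, "Q5": False,
        "Q6": True, "Q7": False, "Q8": False, "Q9": False,
    },
}

def follow_up_candidates(record):
    # PHQ-9 questions NOT answerable from the post are candidates for FQs
    return [q for q, answered in sorted(record["annotations"].items()) if not answered]
```
",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PRIMATE for FQ Generation", |
| "sec_num": "4" |
| }, |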
| { |
| "text": "BERT as Answerability Evaluator: While Model 3 shows respectable performance (Table 3) , even the FQs generated by Model 3 may not capture the PHQ-9 related questions efficiently, as evident from the low hit rate at a higher threshold (\u03b4). MHPs would likely follow a more streamlined, focused questioning strategy. For efficient collaboration between MHPs and AI, we propose to guide the questioning more systematically by predicting whether the user post already contains answers to the PHQ-9 questions. This is first posed as a binary classification problem over the nine PHQ-9 questions. Thereafter, the approach is to generate questions similar to the PHQ-9 questions that are not answered in the post. Thus, we train BERT 9 (a transformer-based DLM) as a classifier on the PRIMATE dataset. We plan to further use the classification outcome from the BERT model to drive the direction of further questioning with the patient in a more controlled manner. This process can lead to efficient completion of mental health triage in as few questions as possible. Figure 4 : 1. Answerability evaluator: a BERT model is trained in a supervised setting on PRIMATE to evaluate whether a PHQ-9 question can be answered from a given user post (binary); for nine PHQ-9 questions, we require nine such evaluators. 2. Follow-up questions: PHQ-9 questions that are not already answerable from the user post form the candidates for follow-up. 3. SCID: corresponding to each PHQ-9 question, the SCID describes a clinician-approved sub-sequence of questions to obtain the answer to the follow-up question. 4. Existing PHQ-9 and DSM-5 lexicons (Yazdavar et al., 2017) are used to filter the questions to be generated. 5. FQs are generated using T5 fine-tuned on external domain-specific knowledge and the large-scale depression support conversation dataset created from Reddit and PRIMATE. Table 4 : Distribution of 2003 posts in PRIMATE according to whether the text in the post answers a particular PHQ-9 question; Q1-Q9 are described in Figure 3 . Through this imbalance, PRIMATE demonstrates its importance in training DLM(s) to identify potential FQs in PHQ-9 that would guide a generative DLM to conduct a discourse with a patient, with a view to assisting MHPs in triage. Table 5 (cf. Figure 4) : The MCC score for all nine questions across different thresholds is in the range 0 to +1 (low to high positive relationship). The MCC for some configurations runs into a divide-by-zero error, and we replace this value with 0.0. W: the model is unable to learn cues to determine answerability in a post. M: the model is uncertain whether a particular PHQ-9 question is answerable. S: answerability can be determined by the model with high reliability. Class-Type: Classification Type when \u03b4 = 0.9", |
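Steps 1 and 2 of the pipeline in Figure 4 reduce to a simple decision rule once the nine per-question evaluators have produced answerability scores: any PHQ-9 question whose score falls below the confidence threshold \u03b4 becomes a follow-up candidate. The sketch below illustrates that rule; the probabilities are hypothetical placeholders, not output of the paper's BERT evaluators.

```python
# Sketch of steps 1-2 in Figure 4: nine per-question answerability
# evaluators yield probabilities; PHQ-9 questions scored below the
# confidence threshold (delta) become follow-up (FQ) candidates.
# The scores below are illustrative, not real model output.

def follow_up_candidates(answerability_probs, delta):
    """Return 1-based indices (Q1..Q9) of PHQ-9 questions that the
    post does not answer with confidence >= delta."""
    return [i + 1 for i, p in enumerate(answerability_probs) if p < delta]

# Hypothetical per-question scores from the nine evaluators.
probs = [0.95, 0.20, 0.93, 0.40, 0.10, 0.92, 0.55, 0.30, 0.91]
print(follow_up_candidates(probs, delta=0.9))  # -> [2, 4, 5, 7, 8]
```

Raising \u03b4 makes the triage more conservative: more questions are treated as unanswered and routed to the follow-up generator, which matches the paper's observation that hit rates drop at higher thresholds.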
| "cite_spans": [ |
| { |
| "start": 1306, |
| "end": 1329, |
| "text": "(Yazdavar et al., 2017)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 77, |
| "end": 87, |
| "text": "(Table 3)", |
| "ref_id": null |
| }, |
| { |
| "start": 727, |
| "end": 735, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1537, |
| "end": 1544, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1913, |
| "end": 1921, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 2276, |
| "end": 2285, |
| "text": "Figure 4)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "PRIMATE for FQ Generation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Performance Analysis: We report the Matthews Correlation Coefficient (MCC) scores in Table 5 . MCC is a reliable metric for assessing a model's classification over an imbalanced dataset, and it is particularly useful when we are interested in all four categories of the confusion matrix: true positives (answerable questions (AQ)), true negatives (FQ candidates), false positives, and false negatives. As PRIMATE shows a disproportionate distribution of AQs (yes) and FQs (no), MCC is an appropriate metric (Chicco and Jurman, 2020) . We base our analysis on the consistency of the BERT classifier across varying thresholds (\u03b4) in Table 5 . A score between 0.0 and 0.30 (Type W: Weak) on MCC means the model finds only a negligible to weak positive relationship between input and output. In our context, a score in this range for a particular PHQ-9 question means that the model is unable to effectively learn the cues needed to judge the answerability of that question in user posts. A score between 0.30 and 0.40 (Type M: Maybe) means that the model learns a moderately positive relationship, interpreted as ambiguity in the model's judgement of whether a particular PHQ-9 question is answerable from user posts. An MCC score between 0.40 and 0.70 (Type S: Strong) for a question in PHQ-9 means that the model can effectively judge whether that question is answerable in user posts. Any score above 0.70 makes the model's judgements even more reliable. This experiment completes steps 1 and 2 in Figure 4 . Steps 3, 4 and 5 concern the task of FQ generation by fine-tuning the T5 DLM as a generator over r/depression help and other depression support communities on Reddit. The FQ generations will be controlled using the process knowledge in SCID, which MHPs consult when interviewing. Further, PHQ-9 lexicons are leveraged to promote diversity and to filter out irrelevant FQ generations. We leave this process of FQ generation to shape discourse as future work.", |
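The MCC scoring and the W/M/S banding described above can be written out directly from confusion-matrix counts, including the paper's convention of replacing a divide-by-zero with 0.0. The counts in the usage lines are illustrative, not taken from Table 5.

```python
import math

# MCC from confusion-matrix counts (tp, tn, fp, fn), with the
# divide-by-zero case replaced by 0.0 as described in the paper.
def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Map an MCC score to the paper's classification types.
def classification_type(score):
    if score < 0.30:
        return "W"  # weak: answerability cues not learned
    if score < 0.40:
        return "M"  # maybe: model uncertain about answerability
    return "S"      # strong: answerability judged reliably

# Illustrative counts for one PHQ-9 question's evaluator.
score = mcc(40, 45, 10, 5)
print(round(score, 2), classification_type(score))  # strong positive relationship
print(mcc(0, 50, 0, 50))  # degenerate all-negative predictor -> 0.0
```

The degenerate case shows why the 0.0 substitution matters: a classifier that never predicts the positive class produces a zero denominator, and reporting 0.0 correctly flags it as uninformative rather than crashing the evaluation.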
| "cite_spans": [ |
| { |
| "start": 499, |
| "end": 524, |
| "text": "(Chicco and Jurman, 2020)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1487, |
| "end": 1495, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "PRIMATE for FQ Generation", |
| "sec_num": "4" |
| }, |
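Step 4 of the planned pipeline, filtering generated candidates with PHQ-9/DSM-5 lexicons, amounts to keeping only questions that mention at least one lexicon term. The sketch below shows one plausible realization; the lexicon terms and candidate questions are invented for illustration (the paper uses the lexicons of Yazdavar et al., 2017).

```python
# Sketch of step 4 in Figure 4: lexicon-based filtering of candidate
# FQ generations. LEXICON and the candidates are illustrative stand-ins
# for the PHQ-9/DSM-5 lexicons used in the paper.

LEXICON = {"sleep", "insomnia", "appetite", "fatigue", "interest", "concentrate"}

def filter_candidates(candidates, lexicon):
    """Keep only generated questions that mention at least one lexicon term."""
    kept = []
    for q in candidates:
        tokens = set(q.lower().replace("?", "").split())
        if tokens & lexicon:  # question overlaps the clinical lexicon
            kept.append(q)
    return kept

candidates = [
    "How has your sleep been lately?",
    "What is your favorite movie?",
    "Have you lost interest in things you used to enjoy?",
]
# The off-topic movie question is dropped; the two clinically
# relevant questions survive.
print(filter_candidates(candidates, LEXICON))
```

A real implementation would likely also stem or lemmatize tokens and handle multi-word lexicon entries, but the core idea, using the lexicon as a relevance gate on the T5 generator's output, is the same.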
| { |
| "text": "This paper demonstrated the importance of data and process knowledge in adapting DLMs to generate FQs that would assist MHPs in triaging depression. Our experiments show that without process knowledge, DLMs hallucinate, generating unsafe, incoherent, and irrelevant questions that are not helpful to MHPs in pre-screening or triaging. The challenge lies in the inability of the DLMs to judge, from the set of generated questions, which is a potentially effective FQ to ask based on the user information. The improved question generation performance of DLMs fine-tuned on conversational data filtered by process knowledge encouraged us to prepare PRIMATE. PRIMATE can train DLMs to judge 'whether a user's description of their mental health condition already contains an answer to a particular question in PHQ-9', which would eventually guide coherent FQ generations. We leave our approach for FQ generation as future work, but provide sufficient details on the broader forms of knowledge needed to realize such a pipeline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Limitations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We are yet to scale our understanding to other mental health disorders, such as anxiety using GAD-7 and suicidality using C-SSRS (Jiang et al., 2020) . Further, we are yet to investigate whether PRIMATE, along with the knowledge in SCID, can make DLMs transferable across multiple mental health disorders, especially those comorbid with depression. Also, there is a need for a clinically explainable safety metric for our task.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 149, |
| "text": "(Jiang et al., 2020)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Ethical Considerations: Mental health communities on Reddit offer a crowd perspective on various disorders, wherein the FQs in the comments highlight the good intentions of Reddit users to help users with conditions such as depression. We take such interactions as a proxy for improving patient-MHP interactions. (Benton et al., 2017) described that studies involving user-generated content are exempt from the IRB requirement as long as the data source is public and the user's identity is not recognizable. Apart from the data being publicly available, Reddit users are anonymous, and we further work with random user IDs. Since we make PRIMATE public for research use, we use a Data Use Agreement (Losada and Crestani, 2016) for responsible dissemination of the dataset.", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 334, |
| "text": "(Benton et al., 2017)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 693, |
| "end": 720, |
| "text": "(Losada and Crestani, 2016)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "https://tinyurl.com/yfp3bhr2 2 https://woebothealth.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://serpapi.com/ 5 https://www.microsoft.com/en-us/bing/ apis/bing-web-search-api 6 Questions in extended PHQ-9 : link", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "empirically judged", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Chunking was done because the DLM accepts a maximum input length of 512 tokens.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "BERT with end-to-end training performs well compared to the baselines Electra (Clark et al., 2019) and MedBERT (Gu et al., 2021).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We acknowledge partial support from the National Science Foundation (NSF) award #2133842 \"EA-GER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning,\" with PI Dr. Amit Sheth. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgment", |
| "sec_num": "6" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Large-scale analysis of counseling conversations: An application of natural language processing to mental health", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Althoff", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Jure", |
| "middle": [], |
| "last": "Leskovec", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "463--476", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computa- tional Linguistics, 4:463-476.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Ethical research protocols for social media health research", |
| "authors": [ |
| { |
| "first": "Adrian", |
| "middle": [], |
| "last": "Benton", |
| "suffix": "" |
| }, |
| { |
| "first": "Glen", |
| "middle": [], |
| "last": "Coppersmith", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "94--102", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-1612" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In Proceedings of the First ACL Workshop on Ethics in Natural Language Process- ing, pages 94-102, Valencia, Spain. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Rapid and accurate behavioral health diagnostic screening: initial validation study of a web-based, self-report tool (the sage-sr)", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Brodey", |
| "suffix": "" |
| }, |
| { |
| "first": "Susan", |
| "middle": [ |
| "E" |
| ], |
| "last": "Purcell", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Rhea", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "First", |
| "suffix": "" |
| }, |
| { |
| "first": "Lisa", |
| "middle": [], |
| "last": "Zweede", |
| "suffix": "" |
| }, |
| { |
| "first": "Manuela", |
| "middle": [], |
| "last": "Sinisterra", |
| "suffix": "" |
| }, |
| { |
| "first": "Brad", |
| "middle": [], |
| "last": "Nunn", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Paule", |
| "middle": [], |
| "last": "Austin", |
| "suffix": "" |
| }, |
| { |
| "first": "Inger", |
| "middle": [ |
| "S" |
| ], |
| "last": "Brodey", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Journal of Medical Internet Research", |
| "volume": "20", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Brodey, Susan E Purcell, Karen Rhea, Philip Maier, Michael First, Lisa Zweede, Manuela Sinis- terra, M Brad Nunn, Marie-Paule Austin, and Inger S Brodey. 2018. Rapid and accurate behavioral health diagnostic screening: initial validation study of a web-based, self-report tool (the sage-sr). Journal of Medical Internet Research, 20(3):e9428.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation", |
| "authors": [ |
| { |
| "first": "Davide", |
| "middle": [], |
| "last": "Chicco", |
| "suffix": "" |
| }, |
| { |
| "first": "Giuseppe", |
| "middle": [], |
| "last": "Jurman", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "BMC genomics", |
| "volume": "21", |
| "issue": "1", |
| "pages": "1--13", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Davide Chicco and Giuseppe Jurman. 2020. The advan- tages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. BMC genomics, 21(1):1-13.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Electra: Pre-training text encoders as discriminators rather than generators", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2019. Electra: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representa- tions.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Mental health answers from counselors", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Counselchat", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "CounselChat. Mental health answers from counselors.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Towards augmenting crisis counselor training by improving message retrieval", |
| "authors": [ |
| { |
| "first": "Orianna", |
| "middle": [], |
| "last": "Demasi", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Marti", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Recht", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Orianna Demasi, Marti A Hearst, and Benjamin Recht. 2019. Towards augmenting crisis counselor training by improving message retrieval. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 1-11.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Anticipating safety issues in E2E conversational AI: framework and tooling", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Dinan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Stevie" |
| ], |
| "last": "Gavin Abercrombie", |
| "suffix": "" |
| }, |
| { |
| "first": "Shannon", |
| "middle": [ |
| "L" |
| ], |
| "last": "Bergman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Spruit", |
| "suffix": "" |
| }, |
| { |
| "first": "Y-Lan", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Verena", |
| "middle": [], |
| "last": "Boureau", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rieser", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Dinan, Gavin Abercrombie, A. Stevie Bergman, Shannon L. Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2021. Anticipating safety issues in E2E conversational AI: framework and tooling. CoRR, abs/2107.03451.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Knowledge-aware assessment of severity of suicide risk for early intervention", |
| "authors": [ |
| { |
| "first": "Manas", |
| "middle": [], |
| "last": "Gaur", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanuel", |
| "middle": [], |
| "last": "Alambo", |
| "suffix": "" |
| }, |
| { |
| "first": "Joy", |
| "middle": [], |
| "last": "Prakash Sain", |
| "suffix": "" |
| }, |
| { |
| "first": "Ugur", |
| "middle": [], |
| "last": "Kursuncu", |
| "suffix": "" |
| }, |
| { |
| "first": "Krishnaprasad", |
| "middle": [], |
| "last": "Thirunarayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramakanth", |
| "middle": [], |
| "last": "Kavuluru", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Sheth", |
| "suffix": "" |
| }, |
| { |
| "first": "Randy", |
| "middle": [], |
| "last": "Welton", |
| "suffix": "" |
| }, |
| { |
| "first": "Jyotishman", |
| "middle": [], |
| "last": "Pathak", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "The World Wide Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "514--525", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manas Gaur, Amanuel Alambo, Joy Prakash Sain, Ugur Kursuncu, Krishnaprasad Thirunarayan, Ramakanth Kavuluru, Amit Sheth, Randy Welton, and Jyotish- man Pathak. 2019. Knowledge-aware assessment of severity of suicide risk for early intervention. In The World Wide Web Conference, pages 514-525.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Iseeq: Information seeking question generation using dynamic meta-information retrieval and knowledge graphs", |
| "authors": [ |
| { |
| "first": "Manas", |
| "middle": [], |
| "last": "Gaur", |
| "suffix": "" |
| }, |
| { |
| "first": "Kalpa", |
| "middle": [], |
| "last": "Gunaratna", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Srinivasan", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongxia", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2112.07622" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manas Gaur, Kalpa Gunaratna, Vijay Srinivasan, and Hongxia Jin. 2021. Iseeq: Information seeking question generation using dynamic meta-information retrieval and knowledge graphs. arXiv preprint arXiv:2112.07622.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The distress analysis interview corpus of human and computer interviews", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Gratch", |
| "suffix": "" |
| }, |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Artstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Gale", |
| "middle": [], |
| "last": "Lucas", |
| "suffix": "" |
| }, |
| { |
| "first": "Giota", |
| "middle": [], |
| "last": "Stratou", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Scherer", |
| "suffix": "" |
| }, |
| { |
| "first": "Angela", |
| "middle": [], |
| "last": "Nazarian", |
| "suffix": "" |
| }, |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Wood", |
| "suffix": "" |
| }, |
| { |
| "first": "Jill", |
| "middle": [], |
| "last": "Boberg", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Devault", |
| "suffix": "" |
| }, |
| { |
| "first": "Stacy", |
| "middle": [], |
| "last": "Marsella", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "3123--3128", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Gratch, Ron Artstein, Gale Lucas, Giota Stra- tou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, et al. 2014. The distress analysis interview corpus of hu- man and computer interviews. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3123- 3128.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Domain-specific language model pretraining for biomedical natural language processing", |
| "authors": [ |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Tinn", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Lucas", |
| "suffix": "" |
| }, |
| { |
| "first": "Naoto", |
| "middle": [], |
| "last": "Usuyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tristan", |
| "middle": [], |
| "last": "Naumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoifung", |
| "middle": [], |
| "last": "Poon", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "ACM Transactions on Computing for Healthcare", |
| "volume": "3", |
| "issue": "1", |
| "pages": "1--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain-specific lan- guage model pretraining for biomedical natural lan- guage processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Realm: Retrievalaugmented language model pre-training", |
| "authors": [ |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Guu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Zora", |
| "middle": [], |
| "last": "Tung", |
| "suffix": "" |
| }, |
| { |
| "first": "Panupong", |
| "middle": [], |
| "last": "Pasupat", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2002.08909" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The curious case of neural text degeneration", |
| "authors": [ |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Holtzman", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Buys", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxwell", |
| "middle": [], |
| "last": "Forbes", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1904.09751" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Language use in teenage crisis intervention and the immediate outcome: A machine automated analysis of large scale text data", |
| "authors": [ |
| { |
| "first": "Rongyao", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rongyao Huang. 2015. Language use in teenage crisis intervention and the immediate outcome: A machine automated analysis of large scale text data. Ph.D. thesis, Master's thesis, Columbia University.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Detection of mental health from reddit via deep contextualized representations", |
| "authors": [ |
| { |
| "first": "Ping", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Ita Levitan", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Zomick", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hirschberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "147--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zheng Ping Jiang, Sarah Ita Levitan, Jonathan Zomick, and Julia Hirschberg. 2020. Detection of mental health from reddit via deep contextualized represen- tations. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 147-156.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "What we talk about when we talk about depression: doctor-patient conversations and treatment decision outcomes", |
| "authors": [ |
| { |
| "first": "Alison", |
| "middle": [], |
| "last": "Karasz", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Dowrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Byng", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Buszewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Ferri", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim C Olde", |
| "middle": [], |
| "last": "Hartman", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "Van Dulmen", |
| "suffix": "" |
| }, |
| { |
| "first": "Evelyn", |
| "middle": [], |
| "last": "Van Weel-Baumgarten", |
| "suffix": "" |
| }, |
| { |
| "first": "Joanne", |
| "middle": [], |
| "last": "Reeve", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "British Journal of General Practice", |
| "volume": "62", |
| "issue": "594", |
| "pages": "55--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alison Karasz, Christopher Dowrick, Richard Byng, Marta Buszewicz, Lucia Ferri, Tim C Olde Hartman, Sandra Van Dulmen, Evelyn van Weel-Baumgarten, and Joanne Reeve. 2012. What we talk about when we talk about depression: doctor-patient conversa- tions and treatment decision outcomes. British Jour- nal of General Practice, 62(594):e55-e63.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Internet-augmented dialogue generation", |
| "authors": [ |
| { |
| "first": "Mojtaba", |
| "middle": [], |
| "last": "Komeili", |
| "suffix": "" |
| }, |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Shuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. CoRR, abs/2107.07566.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The phq-9: validity of a brief depression severity measure", |
| "authors": [ |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Kroenke", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Robert", |
| "suffix": "" |
| }, |
| { |
| "first": "Janet Bw", |
| "middle": [], |
| "last": "Spitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Journal of general internal medicine", |
| "volume": "16", |
| "issue": "9", |
| "pages": "606--613", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kurt Kroenke, Robert L Spitzer, and Janet BW Williams. 2001. The phq-9: validity of a brief depression sever- ity measure. Journal of general internal medicine, 16(9):606-613.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", |
| "authors": [ |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Ethan", |
| "middle": [], |
| "last": "Perez", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandra", |
| "middle": [], |
| "last": "Piktus", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabio", |
| "middle": [], |
| "last": "Petroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Karpukhin", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Heinrich", |
| "middle": [], |
| "last": "K\u00fcttler", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rockt\u00e4schel", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "33", |
| "issue": "", |
| "pages": "9459--9474", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, et al. 2020. Retrieval-augmented gen- eration for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459- 9474.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A test collection for research on depression and language use", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Losada", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Crestani", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of Experimental IR Meets Multilinguality, Multimodality, and Interaction, 7th International Conference of the CLEF Association", |
| "volume": "", |
| "issue": "", |
| "pages": "28--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Losada and F. Crestani. 2016. A test collection for research on depression and language use. In Proc. of Experimental IR Meets Multilinguality, Multimodal- ity, and Interaction, 7th International Conference of the CLEF Association, CLEF 2016, pages 28-39, Evora, Portugal.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Knowledge-infused abstractive summarization of clinical diagnostic interviews: Framework development study", |
| "authors": [ |
| { |
| "first": "Gaur", |
| "middle": [], |
| "last": "Manas", |
| "suffix": "" |
| }, |
| { |
| "first": "Vamsi", |
| "middle": [], |
| "last": "Aribandi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ugur", |
| "middle": [], |
| "last": "Kursuncu", |
| "suffix": "" |
| }, |
| { |
| "first": "Amanuel", |
| "middle": [], |
| "last": "Alambo", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Valerie", |
| "suffix": "" |
| }, |
| { |
| "first": "Krishnaprasad", |
| "middle": [], |
| "last": "Shalin", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Thirunarayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Meera", |
| "middle": [], |
| "last": "Beich", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sheth", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "JMIR Mental Health", |
| "volume": "8", |
| "issue": "5", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gaur Manas, Vamsi Aribandi, Ugur Kursuncu, Amanuel Alambo, Valerie L Shalin, Krishnaprasad Thirunarayan, Jonathan Beich, Meera Narasimhan, Amit Sheth, et al. 2021. Knowledge-infused abstrac- tive summarization of clinical diagnostic interviews: Framework development study. JMIR Mental Health, 8(5):e20865.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Adam", |
| "suffix": "" |
| }, |
| { |
| "first": "Arnold", |
| "middle": [], |
| "last": "Miner", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Milstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Roshini", |
| "middle": [], |
| "last": "Schueller", |
| "suffix": "" |
| }, |
| { |
| "first": "Christina", |
| "middle": [], |
| "last": "Hegde", |
| "suffix": "" |
| }, |
| { |
| "first": "Eleni", |
| "middle": [], |
| "last": "Mangurian", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Linos", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "JAMA internal medicine", |
| "volume": "176", |
| "issue": "5", |
| "pages": "619--625", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam S Miner, Arnold Milstein, Stephen Schueller, Roshini Hegde, Christina Mangurian, and Eleni Linos. 2016. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA internal medicine, 176(5):619-625.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter J", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1910.10683" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Know what you don't know: Unanswerable questions for squad", |
| "authors": [ |
| { |
| "first": "Pranav", |
| "middle": [], |
| "last": "Rajpurkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Robin", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for squad. CoRR, abs/1806.03822.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Bleurt: Learning robust metrics for text generation", |
| "authors": [ |
| { |
| "first": "Thibault", |
| "middle": [], |
| "last": "Sellam", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Ankur", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7881--7892", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. Bleurt: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the As- sociation for Computational Linguistics, pages 7881- 7892.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Ethical and social risks of harm from language models", |
| "authors": [ |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Isaac", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Legassick", |
| "suffix": "" |
| }, |
| { |
| "first": "Iason", |
| "middle": [], |
| "last": "Irving", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gabriel", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2021. Ethical and social risks of harm from language models. CoRR, abs/2112.04359.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Eliza -a computer program for the study of natural language communication between man and machine", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Weizenbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Commun. ACM", |
| "volume": "26", |
| "issue": "1", |
| "pages": "23--28", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/357980.357991" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Weizenbaum. 1983. Eliza -a computer pro- gram for the study of natural language communi- cation between man and machine. Commun. ACM, 26(1):23-28.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Semi-supervised approach to monitoring clinical depressive symptoms in social media", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Amir Hossein Yazdavar", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hussein", |
| "suffix": "" |
| }, |
| { |
| "first": "Monireh", |
| "middle": [], |
| "last": "Al-Olimat", |
| "suffix": "" |
| }, |
| { |
| "first": "Goonmeet", |
| "middle": [], |
| "last": "Ebrahimi", |
| "suffix": "" |
| }, |
| { |
| "first": "Tanvi", |
| "middle": [], |
| "last": "Bajaj", |
| "suffix": "" |
| }, |
| { |
| "first": "Krishnaprasad", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Jyotishman", |
| "middle": [], |
| "last": "Thirunarayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Pathak", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sheth", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "1191--1198", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amir Hossein Yazdavar, Hussein S Al-Olimat, Monireh Ebrahimi, Goonmeet Bajaj, Tanvi Banerjee, Krish- naprasad Thirunarayan, Jyotishman Pathak, and Amit Sheth. 2017. Semi-supervised approach to monitor- ing clinical depressive symptoms in social media. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, pages 1191-1198.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Figure 1: Reddit is a rich source for bringing crowd perspective in training DLMs over conversational data. On the left is a sample post from r/depression help which sees inquisitive interaction from other Reddit users. At the top-right are the FQs asked by the Reddit users in the comments. These FQs are aimed at understanding the severity of the mental health situation of the user and are hence, diagnostically relevant. At the bottom-right are the questions generated by DLMs. It can be seen that these are not suitable FQs.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "QGs I feel depressed . I have felt depressed for a while. I have done a lot of things to try and help myself through therapy ,[...] , staying busy , going to university . all I have found is that each time", |
| "content": "<table><tr><td>Post text Model 3 I complete any of these things Model 1 QGs Model 2 QGs that I still feel like I am empty . When I do not do those things for more than 10 mins by my-self that my thoughts just get dark and I be little everything I have ever done . [....] Also I get depressed because I have dreams where I 'm constantly bombarded by women who are not my wife who want to have relationships with me and it 's a nightly battle to not give in to it when I 'm dreaming . I know it sounds silly but I fear that if I give into the dreams that I might do it in real life . I do n't want to that I have had in the past . understand other weird dreams tell my wife because she does n't \u2022 What if I do not do it for more than 10 mins by myself? [*Closed book question already answered in the post and irrelevant to the di-agnostic procedure] \u2022 What might happen to any women that want to have relationships with me? [*Irrelevant to the diagnostic process] \u2022 What do the women at work do? [*Irrele-vant to the diagnostic process)] \u2022 What kind of dreams are I was depressed? [*Closed book question generated directly from a statement in the post] Would you be happy with them? [*Irrelevant to the di-agnostic process)] \u2022 What's your job? [*Irrele-vant] \u2022 Are you able to reach out to your doctor or psychia-trist? [*User has already mentioned in the original post that he has talked to his therapist] \u2022 what are your dreams? [*Closed book question al-ready answered in the post] \u2022 What do you enjoy about your dreams? [*FQ] \u2022 Do you want to talk to peo-ple about what dreams are ? [*FQ] \u2022 Do you feel as satisfied as yourself when you are dreams? [*FQ] Have you talked to a ther-apist? [*Already answered by the user in the post itself] \u2022 Is there anything that helps you calm your feeling for now? [*FQ] \u2022 \"Have you ever gotten any help?\" [*Already answered in the query] \u2022 Do you feel isolated? 
[*FQ] \u2022 What are your hobbies? [*FQ] \u2022 What are your interests? [*FQ] \u2022 How long have you been waiting for your wife to talk about these dreams? [*FQ] \u2022 Have you told your wife you're depressed or not? [*Inquisitive in nature but already answered by the user in original post]</td></tr></table>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "", |
| "content": "<table/>" |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "We record the Matthews Correlation Coefficient (MCC) to measure the performance of the Evaluator (see", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |