{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:31:29.875894Z"
},
"title": "Overview of the CLPsych 2022 Shared Task: Capturing Moments of Change in Longitudinal User Posts",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Tsakalidis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Chim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": ""
},
{
"first": "Iman",
"middle": [],
"last": "Munire Bilal",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dana",
"middle": [],
"last": "Atzil-Slonim",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {}
},
"email": ""
},
{
"first": "Manas",
"middle": [],
"last": "Gaur",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Roy",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Becky",
"middle": [],
"last": "Inkster",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Leintz",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We provide an overview of the CLPsych 2022 Shared Task, which focusses on the automatic identification of 'Moments of Change' in longitudinal posts by individuals on social media and its connection with information regarding mental health. This year's task introduced the notion of longitudinal modelling of the text generated by an individual online over time, along with appropriate temporally sensitive evaluation metrics. The Shared Task consisted of two subtasks: (a) the main task of capturing changes in an individual's mood (drastic changes-'Switches'-and gradual changes-'Escalations'-on the basis of textual content shared online; and subsequently (b) the subtask of identifying the suicide risk level of an individual-a continuation of the CLPsych 2019 Shared Task-where participants were encouraged to explore how the identification of changes in mood in task (a) can help with assessing suicidality risk in task (b).",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We provide an overview of the CLPsych 2022 Shared Task, which focusses on the automatic identification of 'Moments of Change' in longitudinal posts by individuals on social media and its connection with information regarding mental health. This year's task introduced the notion of longitudinal modelling of the text generated by an individual online over time, along with appropriate temporally sensitive evaluation metrics. The Shared Task consisted of two subtasks: (a) the main task of capturing changes in an individual's mood (drastic changes-'Switches'-and gradual changes-'Escalations'-on the basis of textual content shared online; and subsequently (b) the subtask of identifying the suicide risk level of an individual-a continuation of the CLPsych 2019 Shared Task-where participants were encouraged to explore how the identification of changes in mood in task (a) can help with assessing suicidality risk in task (b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Increasingly the clinical community are looking for new and better diagnostic measures and tools for monitoring mental health conditions. Over the past decade, there has been a surge in methods at the intersection of NLP and mental health, showing that signals for the diagnosis of certain conditions can be found in language. However, most research tasks have been defined on the basis of classifying individuals (e.g., on the basis of suicide risk (Shing et al., 2018; Zirikly et al., 2019) or on the basis of having a mental health condition or not (Coppersmith et al., 2015) ), thus lacking the longitudinal aspect of monitoring an individual's mood and well-being in real-time.",
"cite_spans": [
{
"start": 450,
"end": 470,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 471,
"end": 492,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 552,
"end": 578,
"text": "(Coppersmith et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Through this shared task we follow Tsakalidis et al. (2022) to introduce the problem of assessing changes in a person's mood over time on the basis {a.tsakalidis,m.liakata}@qmul.ac.uk of their linguistic content. For the purpose of the task we focus on posting activity in online social media platforms. In particular, given an individual's posts over a certain period in time, we aim: (a) at capturing those sub-periods during which an individual's mood deviates from their baseline mood -a post-level sequential classification task; (b) leveraging this task to help us assess the suicide risk level of the individual -a user-level classification task (Shing et al., 2018 ) & a continuation of the 2019 Shared Task (Zirikly et al., 2019) . Thus, this year's shared task consists of two subtasks: (A) the main task of identifying mood changes in an individual's online posts over time and (B) assessing the suicide risk level of the invididual, where ideally participants will have been able to establish a connection between tasks A and B. This paper makes the following contributions:",
"cite_spans": [
{
"start": 35,
"end": 59,
"text": "Tsakalidis et al. (2022)",
"ref_id": null
},
{
"start": 653,
"end": 672,
"text": "(Shing et al., 2018",
"ref_id": "BIBREF22"
},
{
"start": 716,
"end": 738,
"text": "(Zirikly et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce tasks A and B and provide a detailed description",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We describe the datasets used for these tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide an overview of the secure data enclave environment used for the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We provide an overview of participating team selection, evaluation strategy and discussion of results, paving the way for future approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present the limitations of the current set up and provide suggestions for future organisers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Task A involves capturing 'Moments of Change' (MoC) in posts by individuals on social media over time. In particular, following Tsakalidis et al. (2022) , given a sequence of chronologically ordered posts between two dates ('timeline') made by an individual on an online social media platform, we aim to capture the post(s) -or the sequence(s) of posts -in the timeline indicating that the individual's mood has shifted in one of the following ways: (a) Switch -the individual's mood shifts suddenly from positive to negative (or vice versa); and (b) Escalation -the individual's mood gradually progresses from neutral/negative (positive) to very negative (positive). Both sudden and gradual changes in individuals' mood over time are important for monitoring mental health conditions (Lutz et al., 2013; Shalom and Aderka, 2020) and constitute one of the dimensions to measure in psychotherapy (Barkham et al., 2021) . By definition, this task is temporally sensitive, since the goal is to classify each post in a given timeline as belonging to a Switch (IS), belonging to an Escalation (IE) or not being part of either mood shift (O) -with the majority of the posts expected to be (O).",
"cite_spans": [
{
"start": 128,
"end": 152,
"text": "Tsakalidis et al. (2022)",
"ref_id": null
},
{
"start": 785,
"end": 804,
"text": "(Lutz et al., 2013;",
"ref_id": null
},
{
"start": 805,
"end": 829,
"text": "Shalom and Aderka, 2020)",
"ref_id": "BIBREF21"
},
{
"start": 895,
"end": 917,
"text": "(Barkham et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definitions",
"sec_num": "2"
},
{
"text": "Task B is a continuation of the work by Shing et al. (2018) and Zirikly et al. (2019) . Given the posts of an individual, the aim is to classify their suicide risk into (a) no risk, (b) low, (c) moderate or (d) severe level. Due to the very low number of users of (a) and (b) in our data, we have merged the no/low classes leading to a 3-label user classification task. Participants were encouraged to use insights from Task A in solving Task B.",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "Shing et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 64,
"end": 85,
"text": "Zirikly et al. (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definitions",
"sec_num": "2"
},
{
"text": "Dataset creation for the two tasks ( \u00a73.4) involved data collection & data relabelling ( \u00a73.1), timeline extraction ( \u00a73.2) and annotation ( \u00a73.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "As our ultimate goal is to find the connection between Moments of Change (MoC) in individuals' longitudinal online data (Task A) and other information regarding the individuals' level of risk (Task B), we wanted to repurpose as much as possible existing mental health datasets (Losada and Crestani, 2016; Losada et al., 2020; Shing et al., 2018; Zirikly et al., 2019) by annotating MoC within them. We also collected a new dataset from Reddit annotated for both MoC and suicidality risk.",
"cite_spans": [
{
"start": 277,
"end": 304,
"text": "(Losada and Crestani, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 305,
"end": 325,
"text": "Losada et al., 2020;",
"ref_id": "BIBREF17"
},
{
"start": 326,
"end": 345,
"text": "Shing et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 346,
"end": 367,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "Our final dataset consists of: Reddit-UMD. The UMD-Suicidality dataset (Shing et al., 2018; Zirikly et al., 2019) consists of 38K posts by 245 Reddit users who have posted in the r/SuicideWatch subreddit (and an equal number of control users who do not feature in our tasks). We have labelled the content generated by these individuals with MoC and relabelled the users' risk level for consistency across datasets.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 92,
"end": 113,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "Reddit-New. We collected a new dataset from Reddit, in two steps: we first collected all public Reddit 2015-2021 (incl.) and then obtained the posting history for 83K users with at least 10 posts in MHS (for the list of MHS, refer to Appendix A). eRisk++. We obtained the eRisk dataset (Losada and Crestani, 2016; Losada et al., 2020) upon signing a data use agreement.It contains Reddit posts and comments made by 41 users with and 299 users without self-harm conditions. Inspection of posts by the 299 users showed they were irrelevant for our tasks and so we focussed on the 6,927 posts and comments by the 41 users. 1",
"cite_spans": [
{
"start": 103,
"end": 120,
"text": "2015-2021 (incl.)",
"ref_id": null
},
{
"start": 286,
"end": 313,
"text": "(Losada and Crestani, 2016;",
"ref_id": "BIBREF16"
},
{
"start": 314,
"end": 334,
"text": "Losada et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3.1"
},
{
"text": "For each dataset, we extracted user timelines to allow annotation of MoC (Task A), while ensuring that these timelines also contain the information required for Task B (i.e., all associated users' posts in r/SuicideWatch are included in the timelines). Table 1 provides an overview of the datasets. Reddit-UMD. We ordered each user's posts chronologically, identified their posts in r/SuicideWatch and defined a user timeline as t days around each such post. Upon experimentation t was set to 30. We extracted 156 timelines of [10, 125] posts each, so that annotation was manageable, corresponding to 126 users. These timelines were manually inspected internally by two researchers asked to judge the suitability of the former for Task A. Timelines were thus independently labelled as 'good', 'medium', or 'bad' (Cohen's \u03ba=.66). 2 We only kept 90 timelines that (a) were labelled as 'good' by both annotators and (b) contained all of the user's posts on r/SuicideWatch so that we could follow the same annotation for Task B as in Shing et al. (2018) . To inform subsequent data collection we analysed what constitutes a 'good' timeline in Reddit-UMD. For this we trained a Logistic Regression learning to separate between 'good' and 'bad' timelines. We used the timeline-level features 1 As opposed to Reddit-New and Reddit-UMD, the eRisk dataset contains posts and comments made by the users on Reddit. For consistency, we will refer to all of them as 'posts'. 2 Details of the annotation are provided in Appendix B.",
"cite_spans": [
{
"start": 527,
"end": 531,
"text": "[10,",
"ref_id": null
},
{
"start": 532,
"end": 536,
"text": "125]",
"ref_id": null
},
{
"start": 829,
"end": 830,
"text": "2",
"ref_id": "BIBREF27"
},
{
"start": 1030,
"end": 1049,
"text": "Shing et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 1286,
"end": 1287,
"text": "1",
"ref_id": null
},
{
"start": 1462,
"end": 1463,
"text": "2",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Timeline Extraction",
"sec_num": "3.2"
},
{
"text": "[#posts, % of posts in MHS, and % of posts in r/SuicideWatch, r/depression and r/AskReddit 3 ], further accompanied by the average difference (in terms of #posts) between two postings on the same subreddit. We found that the % of posts in MHS is the most predictive feature, with 95% of the 'bad' timelines containing less than 17% MHS posts, whereas 99% of the 'good' timelines have contain less than 82%. We use this information to select 'good' timelines for the Reddit-New dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Timeline Extraction",
"sec_num": "3.2"
},
{
"text": "Reddit-New. Following our notion of 'good' timelines in Reddit-UMD we looked for two-month periods within which the user had at least 10 and no more than 125 posts, at least (most) 17% (82%) of which is posted on a MHS. 150 such timelines were selected at random (from an overall of 1,114) and annotated internally for quality (good/medium/bad), similarly to Reddit-UMDthis time by a single annotator, given the high agreement achieved in Reddit-UMD, resulting into 139 'good' timelines (83 users). Interestingly, one timeline in Reddit-New was identical to another one present in Reddit-UMD -signalling a consistency between the collection process of the two datasetsand hence removed from Reddit-New on our final processing. eRisk++. Two annotators with experience in mental health research on social media independently reviewed 103 timelines to check suitability for task A. 91 timelines were labeled either as 'good' or 'medium' (Cohen's \u03ba=.78). For consistency with the other datasets, we kept the 15 timelines (14 users) having at least (most) 10 (125) posts. Upon inspecting the resulting datasets, we found that there was a disproportionate representation of 'low' and 'no' risk users based on the labelling provided in (Shing et al., 2018; Zirikly et al., 2019) . To mitigate this, we enriched the eRisk++ dataset with 12 timelines by 12 users from UMD-Suicidality, who had been labelled as 'no'/'low' risk in Zirikly et al. (2019) . Though we did not use their associated suicidality risk labels, this step ensured a fairer representation of users for capturing MoC (task A).",
"cite_spans": [
{
"start": 1229,
"end": 1249,
"text": "(Shing et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 1250,
"end": 1271,
"text": "Zirikly et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 1420,
"end": 1441,
"text": "Zirikly et al. (2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Timeline Extraction",
"sec_num": "3.2"
},
{
"text": "Task A. We hired four annotators (2 native English, 2 fluent English language speakers), two of whom had previous experience with performing task A on a different dataset (TalkLife), and pro-vided them with the guidelines from Tsakalidis et al. (2022) . Briefly, the task involves reading one timeline at a time in an annotation interface and labelling (a) the first post that signals a 'Switch' (IS) in an individual's mood, along with the respective duration of the Switch (range of consecutive posts), as well as (b) the post signalling the 'peak' (most intense posts) of an 'Escalation' (IE) in an individual's mood, along with the respective range of consecutive posts that belong to the same Escalation. The training of the two non-experienced annotators involved annotating timelines from TalkLife that were previously annotated by the two experienced annotators, measuring their agreement and discussing cases of disagreement in iterative cycles, until reaching an agreement level similar to that in Tsakalidis et al. (2022) . Subsequently, the four annotators were provided with 10 separate timelines extracted from UMD Suicidality for training purposes, and disagreements in their annotations were discussed in two meetings. Finally, they were provided with the 255 timelines that have been used in the current Shared Task.",
"cite_spans": [
{
"start": 227,
"end": 251,
"text": "Tsakalidis et al. (2022)",
"ref_id": null
},
{
"start": 1008,
"end": 1032,
"text": "Tsakalidis et al. (2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.3"
},
{
"text": "Task B. We worked with four Clinical Psychology experts, all of whom are fluent English language speakers. The experts were provided with the guidelines by Shing et al. (2018) , which focus on the task of classifying the suicide risk level (no/low/moderate/severe risk) of an individual, solely on the basis of their r/SuicideWatch posts. An annotation interface was developed, where the experts could view and assign a single label to an individual based on up to 5 r/SuicideWatch posts made by the individual within the Reddit-New and Reddit-UMD datasets. Our experts reannotated the suicidality risk of users in Reddit-UMD to provide annotation consistency between the two datasets. 4 For users with more than 5 posts on r/SuicideWatch, the annotation was performed in several passes, with the most 'severe' label being finally assigned to the respective individual (Shing et al., 2018) . We completed two training rounds with the experts, where they discussed disagreements in their labelling and clarified points especially concerning the distinction between 'moderate' and 'severe' cases. Table 3 : IAA for Task A per agreement threshold.",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "Shing et al. (2018)",
"ref_id": "BIBREF22"
},
{
"start": 686,
"end": 687,
"text": "4",
"ref_id": "BIBREF29"
},
{
"start": 869,
"end": 889,
"text": "(Shing et al., 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 1095,
"end": 1102,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.3"
},
{
"text": "Task A. Following Tsakalidis et al. 2022, we assess the inter-annotator agreement (IAA) based on the Intersection over Union for each label independently. The majority agreement (see Table 3 ) is lower than the agreement in Tsakalidis et al. 2022(.30/.50/.89 for IS/IE/O, respectively), primarily because in the latter there were 3 annotators employed (requiring 2/3 to agree) whereas here a majority requires agreement between 3/4 ). A post receives the label assigned to it by the majority. In the case of ties the least populous class receives the label (e.g. if 'IS' ('IE') is chosen over 'O'. In the rare (64 cases overall) of a tie between 'IS' and 'IE', we assigned the label 'IE' given its higher prior. Task B. The agreement between the expert annotators was considerably lower than that reported in Shing et al. (2018) (Krippendorff's \u03b1 .43 vs .81), primarily for two reasons: (a) in this dataset, there was only one user assigned 'no risk', which is the easiest category to identify even for nonexperts; (b) the experts in Shing et al. (2018) had a background on suicidality whereas our clinical psychologists have broader expertise. Most cases of disagreement involved 'moderate' vs 'severe', or 'low' vs 'moderate' as opposed to 'low' vs 'severe'. We used the majority label for each user and in case of ties the highest level of risk assigned was chosen. We split the data into train and test sets (80/20) preserving the distribution of labels in the two sets. Subsequently, all 204/51 timelines from users in our train/test split, were assigned to the respective set (see Table 2 ).",
"cite_spans": [
{
"start": 1034,
"end": 1053,
"text": "Shing et al. (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 3",
"ref_id": null
},
{
"start": 1587,
"end": 1594,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Resulting Dataset",
"sec_num": "3.4"
},
{
"text": "The CLPsych shared task 2021 (Macavaney et al., 2021) was the first to be conducted in a secure environment to provide a high level of safety for sensitive data. We have also opted for carrying out this year's shared task in the same secure environment and continue efforts in protecting highly sensitive data. NORC is an independent non-profit research institution at the University of Chicago who provide the NORC Data Enclave(r), chosen both this year and last for the shared task. Compared to other solutions (see for instance Arenas et al. (2019) ) the NORC Data Enclave(r) (hereafter, 'DE') does not rely on dedicated laptops but solely on a browser interface over HTTPS channels and Citrix HDX technology, making the setup of a shared task more feasible. All teams (see \u00a75) signed a data use agreement (DUA) and terms and conditions (T&C) with NORC before being provided with instructions to set up multi-factor authentication for login, procedures for requesting the ingress in the DE of written code, libraries, models or additional data and procedures for technical support. All ingress of information into the DE requires thorough system scans and human review to ensure the safety and integrity of the Enclave. After login authorized users can access a secure virtual machine within the DE. Although all applications and data run on servers in the NORC data center, the user interface is a familiar full Windows 10 virtual desktop. The DE is a closed environment: it does not have access to the internet and all functionalities for moving data in and out of the virtual space are disabled. This Citrix-based technology is configured to prevent users from downloading output from the remote server to an external machine. Similarly, other security protection features prevent the user from using the \"cut and paste\" feature in Windows to move data from the Citrix session into an Excel spreadsheet residing on the local computer. 
In addition, the user is prevented from printing the data on a local computer. There is documentation regarding the virtual environment and how to securely connect to the dedicated DE Cluster on Amazon Web Services (AWS). To connect to the cluster (via ssh) users rely on PuTTY and on the dedicated machine they can find a dedicated Python 3.9.1 environment with all requested libraries available (see \u00a75). Users can both run code and submit batch jobs using the Slurm cluster management while also monitoring the budget available for computational experiments. Following last year's suggestions, we ensured participants would be able to use Jupyter Notebooks to implement code on the cluster through ssh tunneling and by opening the notebook in the browser of the Windows machine. At the end of the Shared Task, each team was to inform NORC to egress the predictions for the test set.",
"cite_spans": [
{
"start": 29,
"end": 53,
"text": "(Macavaney et al., 2021)",
"ref_id": "BIBREF19"
},
{
"start": 531,
"end": 551,
"text": "Arenas et al. (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Working in a Secure Environment",
"sec_num": "4"
},
{
"text": "Due to an unprecedented technical issue out of NORC's control, several teams faced issues with running their code a week prior to the system submissions deadlineTo avoid eliminating the teams despite their continuous efforts throughout the Shared Task, we decided to distribute the data outside the DE during the last few days on the basis of the signed DUA. To ensure fairness, we asked all teams (i.e., not only the ones affected) to let us know if they would like to receive the data outside the enclave to help them with the system submission. We made it clear that those submitting their results within the DE would feature separately in our evaluation (see Tables 4-5) , since they had more limited resources at their disposal.",
"cite_spans": [],
"ref_spans": [
{
"start": 663,
"end": 674,
"text": "Tables 4-5)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Working in a Secure Environment",
"sec_num": "4"
},
{
"text": "We invited teams to register their interest in the shared task by providing details such as team members, motivation, related background, experience and NLP skills. We also asked for their requirements in terms of programming languages, libraries and pre-trained language models to prepare the set up in the DE. Given our limited resources pertaining to the functional costs of using the DE, we were limited to accepting 15 teams (\u223c50 members) for participating in the Shared Task. Therefore, we compiled a list of criteria that were given to two internal reviewers, along with the (anonymised) registrations of interest. The criteria were related to (a) the relevance of the team's background/current work to the shared task, (b) their motivation and likelihood of committing to the task and (c) details provided wrt technical requirements (see Appendix C for the complete guidelines). Based on the reviewers' assessments, we selected 13/37 teams to participate and asked another five applicant teams to be merged together into two groups, so as to accommodate as many requests as possible (one team was formed by three individual applicants, and another individual applicant was merged into a two-member team), leading to the acceptance of 18/37 requests (53 individuals).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Call for Participation -Teams Selection",
"sec_num": "5"
},
{
"text": "Task A. Following Tsakalidis et al. (2022) , besides the common post-level evaluation metrics (Precision, Recall, F1) -per class and macro-averagedwe report two sets of timeline-level metrics based on work in change-point detection (van den Burg and Williams, 2020) and image segmentation (Arbelaez et al., 2010) , emphasizing respectively performance at the level of a timeline and the prediction of regions of change.",
"cite_spans": [
{
"start": 18,
"end": 42,
"text": "Tsakalidis et al. (2022)",
"ref_id": null
},
{
"start": 289,
"end": 312,
"text": "(Arbelaez et al., 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6"
},
{
"text": "Firstly, working on each timeline and label type independently, we calculate Recall R_w^{(l)} (Precision P_w^{(l)}) by counting as \"correct\" a model prediction for label l if the prediction falls within a window of w posts around a post labelled as l in our ground truth; however, a post's predicted label can be counted as 'correct' at most once. By increasing the value of w, we perform a less strict evaluation of a model. Results are macro-averaged for each label independently across all timelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6"
},
{
"text": "Secondly, we assess model performance on the basis of its ability to capture regions of change. For each true region R_{GS}^{(l)} within a timeline, we define its overlap O(R_{GS}^{(l)}, R_M^{(l)}) with each predicted region R_M^{(l)} as the intersection over union between the two sets. Finally, we retrieve recall- and precision-based coverage metrics (again, macro-averaged across all timelines for each label independently):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6"
},
{
"text": "C_r^{(l)}(M \\rightarrow GS) = \\frac{1}{\\sum_{R_{GS}^{(l)}} |R_{GS}^{(l)}|} \\sum_{R_{GS}^{(l)}} |R_{GS}^{(l)}| \\cdot \\max_{R_M^{(l)}} \\{O(R_{GS}^{(l)}, R_M^{(l)})\\}, \\quad C_p^{(l)}(M \\rightarrow GS) = \\frac{1}{\\sum_{R_M^{(l)}} |R_M^{(l)}|} \\sum_{R_M^{(l)}} |R_M^{(l)}| \\cdot \\max_{R_{GS}^{(l)}} \\{O(R_{GS}^{(l)}, R_M^{(l)})\\}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6"
},
{
"text": "Ideally we want to see a system performing well on both window based and coverage metrics. Task B. We use standard classification metrics (Precision, Recall and F1) for each user-based class label and macro-averaged. Due to the low number of users in the 'Low' class on the test set, we also report micro-averaged metrics; however, these are added for completeness purposes in our analysis (i.e., the teams were guided to improve their performance on a per-class and macro-average basis).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "6"
},
{
"text": "This section outlines the submissions by each team. For Task A, we also provide the results of three baselines: the majority classifier, a logistic regression (LR) trained on tfidf features, and BERT trained using the focal loss on a related but separate dataset on the same task (Tsakalidis et al., 2022) . For Task B, we include the majority classifier and a LR trained on tfidf features from users' posts.",
"cite_spans": [
{
"start": 280,
"end": 305,
"text": "(Tsakalidis et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task Results",
"sec_num": "7"
},
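For context, the focal loss used for the BERT baseline down-weights the loss on easy, confidently classified examples, which helps with class imbalance; a minimal sketch of Lin et al.'s formulation (the gamma value and probabilities here are illustrative, not the baseline's actual configuration):

```python
import math

def focal_loss(probs, target, gamma=2.0):
    """Focal loss for one example: -(1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    p_t = probs[target]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# An easy, confident correct example contributes far less than an uncertain one:
print(focal_loss([0.9, 0.05, 0.05], 0))  # ~0.001
print(focal_loss([0.4, 0.3, 0.3], 0))    # ~0.33
```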
{
"text": "Task A. Each team was allowed to submit up to three sets of test results. Nine teams submitted their predictions -an overview of the best results per team/metric is shown in Table 4 and Fig. 1 . The two best-performing teams (one submitting within and one outside the DE) incorporated a longitudinal component in their models, either in a multitask setting (UoS) or in an emotionally-informed seq2seq-based approach (WResearch), demonstrating the importance of temporally-sensitive modelling as opposed to classifying each post in isolation. The class imbalance problem was tackled by several teams either via balancing the instances (e.g., LAMA, uOttawa) or via weighted loss functions, notably by IIITH who achieved high recall for IS/IE. Time-related information was incorporated by UArizona, a proximity-based approach was followed by NLP-UNED, an ensemble on emo-tional and non-emotional features was chosen by BLUE, whereas WWBT-SQT-lite achieved high accuracy (albeit post-deadline) by using different combinations of consecutive post representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 186,
"end": 192,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "7.1"
},
{
"text": "Task B. Each team was allowed to make a single submission; a second submission was allowed only for teams making use of their predictions from Task A. Seven teams submitted and two teams further took up the challenge of leveraging Task A (see Table 5 ). The teams that took up this challenge did not demonstrate (important) performance gains. However, the best-performing teams (in average, macro-terms) used some information from Task A, either by focusing mostly on posts labelled as MoC (WResearch) or by jointly learning the two tasks (UoS). The ranking of the teams differs when considering the micro-F1, due to the low number of 'low' risk users. Here IIITH and NLP-UNED, along with WResearch, were ranked amongst the top, being particularly effective in capturing 'severe' and 'moderate' cases, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "7.1"
},
{
"text": "BLUE (Bucur et al., 2022 ) explored a variety of feature representation approaches for Task A: (a) Emotion-aware embeddings and (b) non-emotion embeddings (e.g., tfidf, GloVe). They experimented with different combinations of algorithms and features sets, with the most notable performance achieved by a majority voting-based model over an ensemble of predictions obtained by LR, SVM, and Adaptive Boosting classifiers trained on (a), which ranked them second in macro-avg precision-oriented coverage (.499).",
"cite_spans": [
{
"start": 5,
"end": 24,
"text": "(Bucur et al., 2022",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
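BLUE's hard majority voting over an ensemble's per-post predictions can be illustrated with a toy stand-in (the real system votes over trained LR/SVM/AdaBoost outputs; the labels and predictions below are invented):

```python
# Toy hard-voting ensemble: each post's final label is the most common
# label across base models' predictions. Base-model outputs are fabricated.
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of labels per base model, aligned by post."""
    return [Counter(post_votes).most_common(1)[0][0]
            for post_votes in zip(*predictions)]

lr_preds  = ["O", "IS", "IE", "O"]
svm_preds = ["O", "IS", "O",  "O"]
ada_preds = ["IS", "IS", "IE", "O"]
print(majority_vote([lr_preds, svm_preds, ada_preds]))  # ['O', 'IS', 'IE', 'O']
```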
{
"text": "IIITH (Boinepelli et al., 2022) used transformers for representing the user's posts before feeding them to an LSTM for Task A. They tuned their model using the weighted cross-entropy loss function, yielding very high recall for the two minority classes (see post-level results for IS/IE in Table 4 ). For Task B, they fine-tuned RoBERTa on the training data, tackling the class imbalance with weighted random sampling and producing the outcome label through majority voting. The team came second (third) in this task on micro-F1 (macro-F1), achieving the best scores for the 'Severe' class (see Table 5 , 'Severe').",
"cite_spans": [
{
"start": 6,
"end": 31,
"text": "(Boinepelli et al., 2022)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 290,
"end": 297,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 595,
"end": 602,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
{
"text": "LAMA (AlHamed et al., 2022) tackled the data imbalance problem by undersampling posts with high sentiment polarity corresponding to the majority class. They adopted a post-level BERT and LSTM models that take into account the sequence of the previous posts for a given target post for Task A. BERT performed particularly well wrt the recalloriented metrics for IS, leading to the third-best performance in terms of macro-F1 overall. Their models for Task B were Random Forests enriched with sentiment-related features and word frequencies of manually collected high-risk keywords.",
"cite_spans": [
{
"start": 5,
"end": 27,
"text": "(AlHamed et al., 2022)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
{
"text": "NLP-UNED (Fabregat et al., 2022 ) completed all 5/5 submissions via the DE. In Task A, they analysed the encoded user posts via an Approximate Nearest Neighbour approach -labelling individual posts based on their proximity to others -achieving high recall-oriented scores for IE/IS and the highest macro-average timeline-level recall (for w = 3). For Task B, they represented each post on the basis of its proximity to each of the labels in Task A and fed the resulting sequence into a BiL-STM. Amongst the two submissions that leveraged Task A for performing Task B, NLP-UNED was marginally the best-performing in terms of F1.",
"cite_spans": [
{
"start": 9,
"end": 31,
"text": "(Fabregat et al., 2022",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
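A toy sketch in the spirit of NLP-UNED's proximity-based labelling: each post is assigned the label of its nearest neighbour in some embedding space (their actual system used Approximate Nearest Neighbours over encoded posts; the vectors and labels here are fabricated):

```python
# Toy exact nearest-neighbour labelling: label a post embedding with the
# label of the closest training embedding. Vectors/labels are invented.
import math  # math.dist requires Python 3.8+

def nearest_label(vec, train_vecs, train_labels):
    dists = [math.dist(vec, t) for t in train_vecs]
    return train_labels[dists.index(min(dists))]

train_vecs = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
train_labels = ["O", "IS", "IE"]
print(nearest_label((0.9, 1.2), train_vecs, train_labels))  # IS
```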
{
"text": "UArizona (Culnan et al., 2022) completed their 2/2 submissions for Task A via the DE. They tested several variants of RoBERTa-based models, including (a) timeline-agnostic models that incorporate the time lag between consecutive posts and (b) models combining consecutive post vectors, either through concatenation or by passing them through an LSTM to extract the resulting states. They showcased that the incorporation of time boosts the performance of the model on IS cases, whereas they were consistently among the top-3 performing systems in macro-averaged, timeline-level precision.",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Culnan et al., 2022)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
{
"text": "UoS (Azim et al., 2022) achieved the highest scores for Task A in most metrics and across classes, as well as the second-highest macro-F1 for Task B. They first represent a post in different ways (merged), including its emotion/sentiment-based scores. Their approach involved an attention-based, multi-task BiLSTM operating at the timeline-level, with each post corresponding to a single timestep in the input/output for Task A, and additional outputs for the user's risk label for Task B at the timeline level (selecting the most 'severe' label across all timelines for the user's classification). uOttawa-AI (Buddhitha et al., 2022) employed convolutional neural networks with global maxpooling and linear layers for multi-task learning. Task A was casted as two post-level binary tasks (i.e., (a) IS vs O and (b) IE vs O) using soft and hard parameter sharing, by also tackling the class imbalance through down-sampling the majority class. They achieved high recall-oriented metrics for capturing IE and were among the highest scoring teams wrt recall-oriented coverage. In Task B, the team experimented with the additional task of predicting self-declared mental health diagnoses using a separate dataset (Cohan et al., 2018) .",
"cite_spans": [
{
"start": 4,
"end": 23,
"text": "(Azim et al., 2022)",
"ref_id": "BIBREF3"
},
{
"start": 610,
"end": 634,
"text": "(Buddhitha et al., 2022)",
"ref_id": "BIBREF8"
},
{
"start": 1209,
"end": 1229,
"text": "(Cohan et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
{
"text": "WResearch (Bayram and Benhiba, 2022) completed 4/5 submissions in the DE. In Task A, they derived emotionally-informed vectors from pretrained models and constructed abnormality vectors (i.e., differences in expected vs predicted vectors via a seq2seq model) and differences in the vectors of consecutive posts, using them as inputs to post-level classifiers that take into account the class imbalance. Their best performing submission used XGBoost (Chen and Guestrin, 2016) and was consistently among the highest-scoring systems across metrics -and the best-performing from systems within the DE. In Task B, they used LR on n-grams and emotion bandwidth-based vectors extracted from the IS/IE posts for each user, achieving the highest averaged F1. They further leveraged the posts predicted for Task A as IS/IE via a timelinelevel BiLSTM, assigning the most 'severe' label for a user based on their timeline classifications, without improvement in performance, however.",
"cite_spans": [
{
"start": 10,
"end": 36,
"text": "(Bayram and Benhiba, 2022)",
"ref_id": "BIBREF5"
},
{
"start": 449,
"end": 474,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
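One ingredient of WResearch's inputs, the differences between consecutive post vectors in a timeline, can be sketched as follows (their abnormality vectors additionally came from a seq2seq model's expected-vs-predicted differences, which this toy omits; the example embeddings are made up):

```python
# Toy consecutive-post difference features for one timeline: each pair of
# adjacent post embeddings yields one element-wise difference vector.
def consecutive_diffs(post_vecs):
    """A timeline of n post vectors yields n-1 difference vectors."""
    return [[b - a for a, b in zip(prev, curr)]
            for prev, curr in zip(post_vecs, post_vecs[1:])]

timeline = [[0.1, 0.2], [0.4, 0.1], [0.4, 0.5]]
print(consecutive_diffs(timeline))  # roughly [[0.3, -0.1], [0.0, 0.4]]
```

A sharp jump in such difference vectors is one plausible signal of a mood 'Switch' or 'Escalation'.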
{
"text": "WWBP-SQT-lite (Ganesan et al., 2022) experimented with theoretically-motivated features and representations based on Human-aware Recurrent Transformers and PCA-reduced RoBERTa. After the deadline the team also tested a version of PCA-reduced RoBERTa vectors, yielding very high accuracy when concatenating them with the previous post's vector and their difference, as features (macro-F1: .61, not reported in Table 4 ). For Task B the team used LR on user-level features (ngrams, theoretically motivated features), achieving the second-best results on separating the 'Moderate' cases of risk level.",
"cite_spans": [],
"ref_spans": [
{
"start": 409,
"end": 416,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Summary of System Submissions",
"sec_num": "7.2"
},
{
"text": "We presented the overview of the CLPsych 2022 Shared Task, focusing on (A) capturing changes in an individual's mood as self-disclosed online and (B) classifying the individual's suicide risk levelas well as studying the link between the two tasks. The best results for (A) showcase the importance of taking into account the sequence-aware modelling of an individual's online shared content, whereas the link between the two tasks has been highlighted on the basis of the best results achieved for (B). Following last year's setting (Macavaney et al., 2021) , we utilised NORC's Enclave. Faced with challenges out of our and NORC's control, we pro-vide directions for shared tasks on sensitive domains ( \u00a79). Our aim for the future is to emphasize the need for research on longitudinal tracking and modelling of a user's mental health, under a common experimental setting in a secure environment.",
"cite_spans": [
{
"start": 533,
"end": 557,
"text": "(Macavaney et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "9 Recommendations for the Future Organising a NLP shared task on highly sensitive datasets is an incredibly challenging effort that relies on the coordination and collaboration of many different actors. In addition to the very useful feedback given by last year's organisers (Macavaney et al., 2021) , we have compiled an anonymous feedback questionnaire shared with the 39 members that had access to the DE or were the contact members of a team. In this section, we summarise the key insights gained from the teams' feedback ( \u00a79.1) and provide suggestions for future versions of Shared Tasks in such sensitive domains ( \u00a79.2).",
"cite_spans": [
{
"start": 275,
"end": 299,
"text": "(Macavaney et al., 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The questionnaire consists of 4 multiple choice questions and 2 free-text answers on (Q5) what they liked about this year's shared task vs (Q6) what needs improvement in future editions. Overview & Q1 -'My team managed to produce results': 18 members completed the feedback form (34% of all 53 participants; 46% of the 39 participants that the questionnaire was shared with), 17 of whom were members of teams that managed to submit their results (within or outside the DE). Q2 -'The task description was clear' (completely disagree to completely agree, [1] [2] [3] [4] [5] ): All 18 responses were between [3] [4] [5] , with an average of 4.4/5.0. Based on Q6 shown below, there were two respondents for whom the annotation guidelines and/or resulting labels for Task A were unclear. Providing more examples in such longitudinal tasks from the beginning of the Shared Task can offer an improvement in this regard. Q3 -'Communication via slack was easy and efficient.' (completely disagree to completely agree, [1] [2] [3] [4] [5] ): Responses were between [3] [4] [5] , with an average of 4.7/5.0, suggesting that an active communication channel can help participants along the way and is recommended for future editions. Q4 -'How was your experience with working on the Data Enclave?' (5 pre-defined choices): 50% of the respondents said that they faced many difficulties, but would have managed to produce results within the DE nevertheless if there wasn't the major incident during test time (see \u00a74); 4/18 respondents said that there were only some difficulties resulting in minor/medium loss in their productivity. We provide concrete suggestions to this effect in \u00a79.2. Q5 -What did you like about the shared task?:",
"cite_spans": [
{
"start": 553,
"end": 556,
"text": "[1]",
"ref_id": null
},
{
"start": 557,
"end": 560,
"text": "[2]",
"ref_id": "BIBREF27"
},
{
"start": 561,
"end": 564,
"text": "[3]",
"ref_id": null
},
{
"start": 565,
"end": 568,
"text": "[4]",
"ref_id": "BIBREF29"
},
{
"start": 569,
"end": 572,
"text": "[5]",
"ref_id": "BIBREF30"
},
{
"start": 606,
"end": 609,
"text": "[3]",
"ref_id": null
},
{
"start": 610,
"end": 613,
"text": "[4]",
"ref_id": "BIBREF29"
},
{
"start": 614,
"end": 617,
"text": "[5]",
"ref_id": "BIBREF30"
},
{
"start": 1010,
"end": 1013,
"text": "[1]",
"ref_id": null
},
{
"start": 1014,
"end": 1017,
"text": "[2]",
"ref_id": "BIBREF27"
},
{
"start": 1018,
"end": 1021,
"text": "[3]",
"ref_id": null
},
{
"start": 1022,
"end": 1025,
"text": "[4]",
"ref_id": "BIBREF29"
},
{
"start": 1026,
"end": 1029,
"text": "[5]",
"ref_id": "BIBREF30"
},
{
"start": 1056,
"end": 1059,
"text": "[3]",
"ref_id": null
},
{
"start": 1060,
"end": 1063,
"text": "[4]",
"ref_id": "BIBREF29"
},
{
"start": 1064,
"end": 1067,
"text": "[5]",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback from Participants",
"sec_num": "9.1"
},
{
"text": "The 17 responses on Q5 can be categorised into two main topics: 13 commented positively on the task itself and 7 on the organisational aspect (quick responses from the organisers -see also Q3 -and working in a secure manner through NORC's DE). Q6 -'What did you mostly not like about this year's Shared Task? What issues did you face? How can we improve for the next year?: Most of the 17 responses concerned issues around working within the DE -from inability to copy/paste to downloading resources. We compile a list of suggestions in \u00a79.2. 2/17 respondents commented on the delay of providing the code (e.g., evaluation, baselines/results); 2/17 commented on the clarity of the annotations (see also Q2); 2/17 also commented on the tightness of deadlines, which were packed towards the end of the Shared Task to allow more time for model training -a wider time frame for future Shared Tasks is recommended. Isolated concerning points (1/17) included the small size of the dataset to reach conclusive outcomes (often a concern in this domain) and inability to perform a direct comparison between systems trained within vs outside the DE (tackled by highlighting the bestperforming system for submissions within the DE).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feedback from Participants",
"sec_num": "9.1"
},
{
"text": "Secure Environment. Given the sensitive nature of data for the Shared Task, it is essential to be able to rely on a secure environment. Following CLPsych 2021, we opted for NORC and their DE. It is important that future organisers plan this collaboration in advance to make sure NORC has sufficient time to identify and secure enough resources and specific expertise to the project. The technical issue faced this year also highlights the need for a wider test-time period, to allow enough time for resolving such cases. Ideally there should be an ongoing collaboration with the DE so that any issues and the necessary expertise to overcome them are built during a sufficiently long period of time. Libraries and Resources. It is crucial to have a clear pre-defined list of libraries, resources and dependencies (e.g., pre-trained models) that would need to be reviewed before being available in the DE. This means reaching out in advance to the teams and also planning for a trial period of 2 weeks where the teams can access part of the data and check their needs, live. The teams for instance encountered many issues with NLP libraries that required additional downloads of resources when used. 5 It is also important to keep track of the approved/installed libraries each year. Communication and Peer Support. Following last year's suggestions, we wanted to avoid sending many similar requests to NORC, and try to provide a common setting for people to help each other. We relied on Slack by setting up two dedicated channels, which received very positive feedback and also facilitated the communication between the organisers and NORC. Participants helped each other e.g. in setting up the ssh tunneling for Jupyter Notebook or in identifying the specific issue to report back to NORC (which we have tried to do through a more coordinated effort, where one of the organisers would be the point of contact). Preparation. 
Notes from last year's edition already highlighted the complexity of organizing the shared task and recommended more advance planning. Even with that in mind, core challenges remain due to the antithesis between two very different agendas: the intensive experimental work in a very limited time frame (the shared task) and a centralised, step-by-step highly controlled process (the DE). We believe that only through long-term collaboration with DEs such as NORC is it feasible to define a middle-ground working solution which can guarantee high level of security while supporting researchers to develop their solutions. Such collaboration requires the recognition of the importance of DEs by funding bodies and the need to fund long-term collaborations between DEs and research organisations.",
"cite_spans": [
{
"start": 1198,
"end": 1199,
"text": "5",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Suggestions for future organisers",
"sec_num": "9.2"
},
{
"text": "Secure access to the shared task dataset was provided with IRB approval under University of Maryland, College Park protocol 1642625 and approval by the Biomedical and Scientific Research Ethics Committee (BSREC) at the University of Warwick (ethical application reference BSREC 40/19-20). Annotators were given contracts and paid fairly in line with University payscales. They were alerted about potentially encountering disturbing content and were advised to take breaks. The annotations are used to train and evaluate natural language processing models for recognising moments of change and linking them to suicidality risk, where the latter is provided by clinical psychology experts. Working with data on online platforms where individuals disclose personal information involves ethical considerations (Mao et al., 2011; Kek\u00fcll\u00fcoglu et al., 2020) . Such considerations include careful analysis and data sharing policies to protect sensitive personal information. Potential risks from the application of NLP models in being able to identify moments of change in individuals' timelines are akin to those in earlier work on personal event identification from social media and the detection of suicidal ideation. Potential mitigation strategies include restricting access to the code base and annotation labels used for evaluation. In this shared task we have asked participants to sign DUA agreements and we opted for a secure data enclave environment to work in. EP/V030302/1), the Alan Turing Institute (grant ref EP/N510129/1) and especially UKRI funding to promote collaboration between UK and US researchers. Aspects of this work were also supported by an Amazon Research Award and by the National Science Foundation under grant 2124270, and the effort also received internal financial support at NORC. 
The shared task organizers would like to express their gratitude to the anonymous users of Reddit whose data feature in this year's shared task dataset; to the annotators of the data for Task A, to the clinical experts from Bar-Ilan University who annotated the data for TaskB, the American Association of Suicidology; to all participants for their efforts and patience; to the NORC partners and personnel (especially co-author Jeff Leintz, Dariush Wilkowski, Julia Crothers, Bill Olesiuk and the Data Enclave Manager team) for their tremendous contributions and their willingness to put in a great amount of resources in setting up and managing the Enclave and enabling this year's shared task, especially given the short time frame, and finally to NAACL for its support for CLPsych.",
"cite_spans": [
{
"start": 806,
"end": 824,
"text": "(Mao et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 825,
"end": 850,
"text": "Kek\u00fcll\u00fcoglu et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical statement",
"sec_num": null
},
{
"text": "Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 24-33, Minneapolis, Minnesota. Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical statement",
"sec_num": null
},
{
"text": "We used the Pushshift API (https: //reddit-api.readthedocs.io/en/ latest/) to crawl the posts from the following subreddits for HealthAnxiety, autism, hardshipmates, rant, Anxiety, Needafriend, bipolar, lonely, rapecounseling, Anxietyhelp, StopSelfHarm, bipolarreddit, mentalhealth, schizophrenia, BPD, bulimia, mentalillness, socialanxiety, COVID19_support, addiction, depression, offmychest, survivorsofabuse, EDAnonymous, adhd, depression_help, panicparty, traumatoolbox, EatingDisorderHope, alcoholism, eating_disorders, psychoticreddit, trueoffmychest, EatingDisorders, anxietysupporters, foreveralone, ptsd, unsentletters .",
"cite_spans": [
{
"start": 128,
"end": 142,
"text": "HealthAnxiety,",
"ref_id": null
},
{
"start": 143,
"end": 150,
"text": "autism,",
"ref_id": null
},
{
"start": 151,
"end": 165,
"text": "hardshipmates,",
"ref_id": null
},
{
"start": 166,
"end": 171,
"text": "rant,",
"ref_id": null
},
{
"start": 172,
"end": 180,
"text": "Anxiety,",
"ref_id": null
},
{
"start": 181,
"end": 193,
"text": "Needafriend,",
"ref_id": null
},
{
"start": 194,
"end": 202,
"text": "bipolar,",
"ref_id": null
},
{
"start": 203,
"end": 210,
"text": "lonely,",
"ref_id": null
},
{
"start": 211,
"end": 226,
"text": "rapecounseling,",
"ref_id": null
},
{
"start": 227,
"end": 239,
"text": "Anxietyhelp,",
"ref_id": null
},
{
"start": 240,
"end": 253,
"text": "StopSelfHarm,",
"ref_id": null
},
{
"start": 254,
"end": 268,
"text": "bipolarreddit,",
"ref_id": null
},
{
"start": 269,
"end": 282,
"text": "mentalhealth,",
"ref_id": null
},
{
"start": 283,
"end": 297,
"text": "schizophrenia,",
"ref_id": null
},
{
"start": 298,
"end": 302,
"text": "BPD,",
"ref_id": null
},
{
"start": 303,
"end": 311,
"text": "bulimia,",
"ref_id": null
},
{
"start": 312,
"end": 326,
"text": "mentalillness,",
"ref_id": null
},
{
"start": 327,
"end": 341,
"text": "socialanxiety,",
"ref_id": null
},
{
"start": 342,
"end": 358,
"text": "COVID19_support,",
"ref_id": null
},
{
"start": 359,
"end": 369,
"text": "addiction,",
"ref_id": null
},
{
"start": 370,
"end": 381,
"text": "depression,",
"ref_id": null
},
{
"start": 382,
"end": 393,
"text": "offmychest,",
"ref_id": null
},
{
"start": 394,
"end": 411,
"text": "survivorsofabuse,",
"ref_id": null
},
{
"start": 412,
"end": 424,
"text": "EDAnonymous,",
"ref_id": null
},
{
"start": 425,
"end": 430,
"text": "adhd,",
"ref_id": null
},
{
"start": 431,
"end": 447,
"text": "depression_help,",
"ref_id": null
},
{
"start": 448,
"end": 459,
"text": "panicparty,",
"ref_id": null
},
{
"start": 460,
"end": 474,
"text": "traumatoolbox,",
"ref_id": null
},
{
"start": 475,
"end": 494,
"text": "EatingDisorderHope,",
"ref_id": null
},
{
"start": 495,
"end": 506,
"text": "alcoholism,",
"ref_id": null
},
{
"start": 507,
"end": 524,
"text": "eating_disorders,",
"ref_id": null
},
{
"start": 525,
"end": 541,
"text": "psychoticreddit,",
"ref_id": null
},
{
"start": 542,
"end": 557,
"text": "trueoffmychest,",
"ref_id": null
},
{
"start": 558,
"end": 574,
"text": "EatingDisorders,",
"ref_id": null
},
{
"start": 575,
"end": 593,
"text": "anxietysupporters,",
"ref_id": null
},
{
"start": 594,
"end": 607,
"text": "foreveralone,",
"ref_id": null
},
{
"start": 608,
"end": 613,
"text": "ptsd,",
"ref_id": null
},
{
"start": 614,
"end": 627,
"text": "unsentletters",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Reddit New: Data Collection",
"sec_num": null
},
{
"text": "When selecting informative timelines, the internal annotators independently classified them into the following categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Timeline Selection Criteria",
"sec_num": null
},
{
"text": "\u2022 Good: Timelines comprise posts that clearly indicate user mood or at least 1 moment of change in mood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Timeline Selection Criteria",
"sec_num": null
},
{
"text": "\u2022 Medium: Timelines comprise posts from which user mood is challenging to infer. The individual may disclose information about their own life events, but such discussions are objective in tone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Timeline Selection Criteria",
"sec_num": null
},
{
"text": "\u2022 Bad: Timelines comprise posts that do not provide indicators of the user's own mood. If there are posts by the user on subreddits related to mental health, these posts do not clearly relate to the user's own mood (e.g., words of encouragement for other users, crossposted content shared with intent to help other users rather than themselves).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Timeline Selection Criteria",
"sec_num": null
},
{
"text": "In this section, we outline the assessment criteria used for selecting the teams for participate in the Shared Task. The guidelines were given to two annotators internally, who achieved a high agreement (Pearson correlation \u03c1=.83).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Team Selection Assessment Criteria",
"sec_num": null
},
{
"text": "We selected these 3 subreddits on the basis of being present in at least 20% of the timelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We did not use the users from the eRisk++ dataset for Task B: the information on the type of subreddit where a post was shared was not available in eRisk and the remaining 12 timelines from UMD-Suicidality (part of eRisk++) were incorporated at a latter stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "e.g., the NLTK tokenizer requires 13MB of Punkt Tokenizer Models, which are not accessible in the DE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by a UKRI/EPSRC Turing AI Fellowship to Maria Liakata (grant ref",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Aim: We have received applications (Registration of Interest) from 37 teams to participate in the CLPsych Shared Task 2022 (https://clpsych.org/sharedtask2022/). The goal of this reviewing process is to review the submitted applications on the basis of the main questions outlined below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guidelines for Reviewing",
"sec_num": null
},
{
"text": "Each of the 37 teams that registered their interest provided us with the following information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Registration of Interest Data:",
"sec_num": null
},
{
"text": "For each of the three reviewing criteria presented below, please provide your score (half scores, such as \"2.5\", are also allowed), your confidence and a justification of your rating.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Assessment Criteria",
"sec_num": null
},
{
"text": "\u2022 Does the background/current work of the team match the requirements of the task?Please rate between 1-5 (half scores allowed):\u25cb 5: The team has worked/works on similar longitudinal/sequential NLP tasks on mental health. \u25cb 4: The team has worked/works on similar NLP tasks with a longitudinal or sequential component. \u2022 Based on your assessment, how likely is the team to commit to this task? Please rate between 1-3 (half scores allowed): \u25cb 3: The task will help the team even to advance their own work, so they are likely to invest a lot of time in the task. \u25cb 2: The team has shown strong motivation, but their work is not directly linked to the shared task. Final Question (not part of the assessment): For the isolated participants (i.e., those who are a team on their own: numMembers=1), who should we try to group together so that they form a single team? Try to reply based on their responses to the 5 questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Criterion 1: Team Background",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Predicting moments of mood changes overtime from imbalanced social media data",
"authors": [
{
"first": "Falwah",
"middle": [],
"last": "Alhamed",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Ive",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Falwah AlHamed, Julia Ive, and Lucia Specia. 2022. Predicting moments of mood changes overtime from imbalanced social media data. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contour detection and hierarchical image segmentation",
"authors": [
{
"first": "Pablo",
"middle": [],
"last": "Arbelaez",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Charless",
"middle": [],
"last": "Fowlkes",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "33",
"issue": "",
"pages": "898--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. 2010. Contour detection and hierar- chical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5):898- 916.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Design choices for productive, secure, data-intensive research at scale in the cloud",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Arenas",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Atkins",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Austin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Beavan",
"suffix": ""
},
{
"first": "Alvaro",
"middle": [
"Cabrejas"
],
"last": "Egea",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Carlysle-Davies",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Carter",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Doel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.08737"
]
},
"num": null,
"urls": [],
"raw_text": "Diego Arenas, Jon Atkins, Claire Austin, David Beavan, Alvaro Cabrejas Egea, Steven Carlysle-Davies, Ian Carter, Rob Clarke, James Cunningham, Tom Doel, et al. 2019. Design choices for productive, secure, data-intensive research at scale in the cloud. arXiv preprint arXiv:1908.08737.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Detecting moments of change and suicidal risks in longitudinal user texts using multi-task learning",
"authors": [
{
"first": "Tayyaba",
"middle": [],
"last": "Azim",
"suffix": ""
},
{
"first": "Gyanendro",
"middle": [],
"last": "Loitongbam",
"suffix": ""
},
{
"first": "Stuart",
"middle": [
"E"
],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Middleton",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tayyaba Azim, Loitongbam Gyanendro Singh, and Stuart E. Middleton. 2022. Detecting moments of change and suicidal risks in longitudinal user texts us- ing multi-task learning. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bergin and Garfield's handbook of psychotherapy and behavior change",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Barkham",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lutz",
"suffix": ""
},
{
"first": "Louis G",
"middle": [],
"last": "Castonguay",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Barkham, Wolfgang Lutz, and Louis G Cas- tonguay. 2021. Bergin and Garfield's handbook of psychotherapy and behavior change. John Wiley & Sons.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Emotionallyinformed models for detecting moments of change and suicide risk levels in longitudinal social media data",
"authors": [
{
"first": "Ulya",
"middle": [],
"last": "Bayram",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Benhiba",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulya Bayram and Lamia Benhiba. 2022. Emotionally- informed models for detecting moments of change and suicide risk levels in longitudinal social media data. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards capturing changes in mood and identifying suicidality risk",
"authors": [
{
"first": "Sravani",
"middle": [],
"last": "Boinepelli",
"suffix": ""
},
{
"first": "Shivansh",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Abhijeeth",
"middle": [],
"last": "Singam",
"suffix": ""
},
{
"first": "Tathagata",
"middle": [],
"last": "Raha",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sravani Boinepelli, Shivansh Subramanian, Abhijeeth Singam, Tathagata Raha, and Vasudeva Varma. 2022. Towards capturing changes in mood and identify- ing suicidality risk. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Capturing changes in mood over time in longitudinal data using ensemble methodologies",
"authors": [
{
"first": "Ana-Maria",
"middle": [],
"last": "Bucur",
"suffix": ""
},
{
"first": "Hyewon",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Farhana Ferdousi",
"middle": [],
"last": "Liza",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana-Maria Bucur, Hyewon Jang, and Farhana Ferdousi Liza. 2022. Capturing changes in mood over time in longitudinal data using ensemble methodologies. In Proceedings of the Eighth Workshop on Computa- tional Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-task learning to capture changes in mood over time",
"authors": [
{
"first": "Prasadith",
"middle": [],
"last": "Buddhitha",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Husseini"
],
"last": "Orabi",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [
"Husseini"
],
"last": "Orabi",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasadith Buddhitha, Ahmed Husseini Orabi, Mah- moud Husseini Orabi, and Diana Inkpen. 2022. Multi-task learning to capture changes in mood over time. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "XGBoost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {
"DOI": [
"10.1145/2939672.2939785"
]
},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785-794, New York, NY, USA. ACM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SMHD: a large-scale resource for exploring online language usage for multiple mental health conditions",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Desmet",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Soldaini",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Macavaney",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1485--1497",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan, Bart Desmet, Andrew Yates, Luca Sol- daini, Sean MacAvaney, and Nazli Goharian. 2018. SMHD: a large-scale resource for exploring online language usage for multiple mental health condi- tions. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 1485- 1497, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "CLPsych 2015 shared task: Depression and PTSD on Twitter",
"authors": [
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality",
"volume": "",
"issue": "",
"pages": "31--39",
"other_ids": {
"DOI": [
"10.3115/v1/W15-1204"
]
},
"num": null,
"urls": [],
"raw_text": "Glen Coppersmith, Mark Dredze, Craig Harman, Kristy Hollingshead, and Margaret Mitchell. 2015. CLPsych 2015 shared task: Depression and PTSD on Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 31- 39, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exploring transformers and time lag features for predicting changes in mood over time",
"authors": [
{
"first": "John",
"middle": [],
"last": "Culnan",
"suffix": ""
},
{
"first": "Damian",
"middle": [
"Y Romero"
],
"last": "Diaz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Culnan, Damian Y. Romero Diaz, and Steven Bethard. 2022. Exploring transformers and time lag features for predicting changes in mood over time. In Proceedings of the Eighth Workshop on Computa- tional Linguistics and Clinical Psychology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Maite Oronoz, and Arantza Casillas. 2022. Approximate nearest neighbour extraction techniques and neural networks for suicide risk prediction in the CLPsych 2022 shared task",
"authors": [
{
"first": "Gildo",
"middle": [],
"last": "Fabregat",
"suffix": ""
},
{
"first": "Ander",
"middle": [],
"last": "Cejudo",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Martinez-Romo",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Lourdes",
"middle": [],
"last": "Araujo",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Lebe\u00f1a",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildo Fabregat, Ander Cejudo, Juan Martinez-Romo, Alicia P\u00e9rez, Lourdes Araujo, Nuria Lebe\u00f1a, Maite Oronoz, and Arantza Casillas. 2022. Approximate nearest neighbour extraction techniques and neural networks for suicide risk prediction in the CLPsych 2022 shared task. In Proceedings of the Eighth Work- shop on Computational Linguistics and Clinical Psy- chology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WWBP-SQT-lite: Difference embeddings and multi-level models for moments of change identification in mental health forums",
"authors": [
{
"first": "Vasudha",
"middle": [],
"last": "Adithya V Ganesan",
"suffix": ""
},
{
"first": "Juhi",
"middle": [],
"last": "Varadarajan",
"suffix": ""
},
{
"first": "Shashanka",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Subrahamanya",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Matero",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Sharath",
"suffix": ""
},
{
"first": "Johannes",
"middle": [
"C"
],
"last": "Guntuku",
"suffix": ""
},
{
"first": "H",
"middle": [
"Andrew"
],
"last": "Eichstaedt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adithya V Ganesan, Vasudha Varadarajan, Juhi Mittal, Shashanka Subrahamanya, Matthew Matero, Nikita Soni, Sharath Chandra Guntuku, Johannes C. Eich- staedt, and H. Andrew Schwartz. 2022. WWBP- SQT-lite: Difference embeddings and multi-level models for moments of change identification in men- tal health forums. In Proceedings of the Eighth Work- shop on Computational Linguistics and Clinical Psy- chology: Mental Health in the Face of Change.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Analysing privacy leakage of life events on twitter",
"authors": [
{
"first": "Dilara",
"middle": [],
"last": "Kek\u00fcll\u00fcoglu",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "Kami",
"middle": [],
"last": "Vaniea",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th ACM Conference on Web Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilara Kek\u00fcll\u00fcoglu, Walid Magdy, and Kami Vaniea. 2020. Analysing privacy leakage of life events on twitter. In Proceedings of the 12th ACM Conference on Web Science.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A test collection for research on depression and language use",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Losada",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Crestani",
"suffix": ""
}
],
"year": 2016,
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction -7th International Conference of the CLEF Association",
"volume": "9822",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.1007/978-3-319-44564-9_3"
]
},
"num": null,
"urls": [],
"raw_text": "David E. Losada and Fabio Crestani. 2016. A test collec- tion for research on depression and language use. In Experimental IR Meets Multilinguality, Multimodal- ity, and Interaction -7th International Conference of the CLEF Association, CLEF 2016, \u00c9vora, Portu- gal, September 5-8, 2016, Proceedings, volume 9822 of Lecture Notes in Computer Science, pages 28-39. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overview of erisk 2020: Early risk prediction on the internet",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Losada",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Crestani",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Parapar",
"suffix": ""
}
],
"year": 2020,
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction -11th International Conference of the CLEF Association",
"volume": "2020",
"issue": "",
"pages": "272--287",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58219-7_20"
]
},
"num": null,
"urls": [],
"raw_text": "David E. Losada, Fabio Crestani, and Javier Parapar. 2020. Overview of erisk 2020: Early risk predic- tion on the internet. In Experimental IR Meets Mul- tilinguality, Multimodality, and Interaction -11th International Conference of the CLEF Association, CLEF 2020, Thessaloniki, Greece, September 22-25, 2020, Proceedings, volume 12260 of Lecture Notes in Computer Science, pages 272-287. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Silja Vocks, Dietmar Schulte, and Armita Tschitsaz-Stucki. 2013. The ups and downs of psychotherapy: Sudden gains and sudden losses identified with session reports",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Lutz",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Ehrlich",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"A"
],
"last": "Rubel",
"suffix": ""
},
{
"first": "Nora",
"middle": [],
"last": "Hallwachs",
"suffix": ""
},
{
"first": "Marie-Anna",
"middle": [],
"last": "R\u00f6ttger",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Jorasz",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Mocanu",
"suffix": ""
}
],
"year": null,
"venue": "Psychotherapy Research",
"volume": "23",
"issue": "",
"pages": "14--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Lutz, Torsten Ehrlich, Julian A. Rubel, Nora Hallwachs, Marie-Anna R\u00f6ttger, Christine Jorasz, Sarah Mocanu, Silja Vocks, Dietmar Schulte, and Ar- mita Tschitsaz-Stucki. 2013. The ups and downs of psychotherapy: Sudden gains and sudden losses iden- tified with session reports. Psychotherapy Research, 23:14 -24.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Community-level research on suicidality prediction in a secure environment: Overview of the clpsych 2021 shared task",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Macavaney",
"suffix": ""
},
{
"first": "Anjali",
"middle": [],
"last": "Mittu",
"suffix": ""
},
{
"first": "Glen",
"middle": [],
"last": "Coppersmith",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Leintz",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of CLPsych",
"volume": "",
"issue": "",
"pages": "70--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Macavaney, Anjali Mittu, Glen Coppersmith, Jeff Leintz, and Philip Resnik. 2021. Community-level research on suicidality prediction in a secure environ- ment: Overview of the clpsych 2021 shared task. In Proc. of CLPsych, pages 70-80.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Loose tweets: An analysis of privacy leaks on twitter",
"authors": [
{
"first": "Huina",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Shuai",
"suffix": ""
},
{
"first": "Apu",
"middle": [],
"last": "Kapadia",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {
"DOI": [
"10.1145/2046556.2046558"
]
},
"num": null,
"urls": [],
"raw_text": "Huina Mao, Xin Shuai, and Apu Kapadia. 2011. Loose tweets: An analysis of privacy leaks on twitter. WPES '11, page 1-12, New York, NY, USA. As- sociation for Computing Machinery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A meta-analysis of sudden gains in psychotherapy: Outcome and moderators",
"authors": [
{
"first": "Jonathan",
"middle": [
"G"
],
"last": "Shalom",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Idan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aderka",
"suffix": ""
}
],
"year": 2020,
"venue": "Clinical Psychology Review",
"volume": "76",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.cpr.2020.101827"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan G. Shalom and Idan M. Aderka. 2020. A meta-analysis of sudden gains in psychotherapy: Out- come and moderators. Clinical Psychology Review, 76:101827.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Expert, crowdsourced, and machine assessment of suicide risk via online postings",
"authors": [
{
"first": "Han-Chin",
"middle": [],
"last": "Shing",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Meir",
"middle": [],
"last": "Friedenberg",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic",
"volume": "",
"issue": "",
"pages": "25--36",
"other_ids": {
"DOI": [
"10.18653/v1/W18-0603"
]
},
"num": null,
"urls": [],
"raw_text": "Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Frieden- berg, Hal Daum\u00e9 III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assessment of suicide risk via online postings. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 25-36, New Orleans, LA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Human language modeling",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Matero",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2022,
"venue": "Findings of the Association for Computational Linguistics: ACL 2022",
"volume": "",
"issue": "",
"pages": "622--636",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Soni, Matthew Matero, Niranjan Balasubrama- nian, and H. Schwartz. 2022. Human language modeling. In Findings of the Association for Com- putational Linguistics: ACL 2022, pages 622-636, Dublin, Ireland. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Jiayu Song, and Maria Liakata. 2022. Identifying moments of change from longitudinal user text",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Tsakalidis",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Hills",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Chim",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2205.05593"
]
},
"num": null,
"urls": [],
"raw_text": "Adam Tsakalidis, Federico Nanni, Anthony Hills, Jenny Chim, Jiayu Song, and Maria Liakata. 2022. Identi- fying moments of change from longitudinal user text. In Proc. of ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "An evaluation of change point detection algorithms",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Gerrit",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Burg",
"suffix": ""
},
{
"first": "K",
"middle": [
"I"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.06222"
]
},
"num": null,
"urls": [],
"raw_text": "Gerrit JJ van den Burg and Christopher KI Williams. 2020. An evaluation of change point detection algo- rithms. arXiv preprint arXiv:2003.06222.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts",
"authors": [
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3003"
]
},
"num": null,
"urls": [],
"raw_text": "Ayah Zirikly, Philip Resnik, \u00d6zlem Uzuner, and Kristy Hollingshead. 2019. CLPsych 2019 shared task: Pre- dicting the degree of suicide risk in Reddit posts. In",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Team name (brief, no spaces)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Team name (brief, no spaces)",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Team Members (provide all names, comma-separated)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Team Members (provide all names, comma-separated)",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Main Contact (name)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Main Contact (name)",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Main Contact (email)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Main Contact (email)",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Main Contact (Affiliation(s))",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Main Contact (Affiliation(s))",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Which programming languages (and corresponding version) are you planning to use? (if other, please specify) 10",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tell us about your background, experience and NLP skills 9. Which programming languages (and corresponding version) are you planning to use? (if other, please specify) 10. Which software libraries do you expect to use? (one per line)",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "If so, please specify the version and the software library that you plan to use it with. (one per line) the following: 1. Number of participants in the team 2",
"authors": [
{
"first": "T5",
"middle": [],
"last": "Bert",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Do you plan to use a pre-trained model (such as GloVe, BERT, T5, etc.)? If so, please specify the version and the software library that you plan to use it with. (one per line) the following: 1. Number of participants in the team 2. Tell us why you are interested in participating (question 7 form the list above)",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Tell us about your background, experience and NLP skills",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tell us about your background, experience and NLP skills (question 8) 4. Which programming languages [...]? (question 9) 5. Which software libraries [...] (question 10)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Timeline-level Precision P w and Recall R w of the submitted systems. Only the best performing submission by each team is shown (selected on the basis of F1=2\u2022P 1 \u2022C 1 /(P 1 +R 1 ), macro-based).",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>: Dataset overview</td></tr><tr><td>posts in any mental health-related subreddit (MHS)</td></tr><tr><td>between</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>Half Majority Perfect .451 .264 .129 .309 .122 Escalation (IE) .550 Switch (IS) None (O) .920 .832 .692</td></tr></table>",
"text": "Summary of the data for both tasks.",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td/><td>DE</td><td>macro-avg P R</td><td>F1</td><td>P</td><td colspan=\"2\">Post-level Evaluation IS IE R F1 P R</td><td>F1</td><td>P</td><td>O</td><td>R</td><td>F1</td><td>macro-avg Cp Cr</td><td colspan=\"2\">Coverage-based Metrics IS IE Cp Cr Cp Cr</td><td>Cp</td><td>O</td><td>Cr</td></tr><tr><td>Baseline</td><td>Majority</td><td colspan=\"2\">-.333 .280</td><td colspan=\"2\">-.000 .000</td><td colspan=\"6\">-.000 .000 .724 1.000 .840</td><td>-.142</td><td>-.000</td><td colspan=\"2\">-.000 .489 .426</td></tr></table>",
"text": ".762 BERT f -TalkLife.523 .386 .380 .091 .012 .022 .723 .163 .267 .754 .983 .853 .260 .204 .025 .007 .226 .094 .529 .513 System Submissions BLUE .505 .495 .499 .175 .171 .173 .484 .433 .457 .855 .882 .868 .499 .378 .500 .028 .299 .395 .699 .712 IIITH .520 .600 .519 .206 .524 .296 .402 .630 .491 .954 .647 .771 .347 .405 .254 .356 .249 .373 .536 .486 LAMA .552 .535 .524 .166 .354 .226 .609 .389 .475 .882 .861 .871 .376 .441 .253 .373 .193 .244 .680 .706 NLP-UNED \u2713 .493 .518 .501 .189 .293 .230 .414 .471 .440 .876 .791 .832 .306 .401 .244 .304 .134 .330 .541 .569 UArizona \u2713",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"content": "<table/>",
"text": "Task A -System evaluation, with first, second and third highest scores (as well as the highest scores for submissions within the DE) being highlighted. Only the best submission for each team is shown, selected separately on the basis of macro-avg F1 (Post-level Evaluation) and F1=2\u2022C p \u2022C",
"num": null,
"type_str": "table"
},
"TABREF6": {
"html": null,
"content": "<table><tr><td/><td/><td>DE</td><td>macro-avg P R</td><td>F1</td><td>micro-avg P R</td><td>F1</td><td>P</td><td>Low R</td><td>F1</td><td>Moderate P R</td><td>F1</td><td>P</td><td>Severe R</td><td>F1</td></tr><tr><td>(a)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>(c)</td><td>uOttawa-AI WResearch (1) WWBP-SQT-lite NLP-UNED (2) WResearch (2)</td><td colspan=\"5\">.329 .365 .344 .449 .500 .470 .467 .479 .465 .565 .531 .543 .346 .370 .354 .471 .500 .480 \u2713 .367 .387 .365 .497 .531 .497 \u2713 .367 .365 .362 .499 .500 .494</td><td colspan=\"6\">.000 .000 .000 .462 .429 .444 .526 .200 .333 .250 .533 .571 .552 .667 .000 .000 .000 .500 .643 .563 .538 .000 .000 .000 .600 .429 .500 .500 .000 .000 .000 .545 .429 .480 .556</td><td>.733 .579 .667 .588 .533 .593 .467 .500 .733 .595 .667 .606</td></tr></table>",
"text": "Majority.156 .333 .213 .220 .469 .299 -.000 .000 -.000 .000 .4691.000 .638 LR-tfidf .303 .338 .295 .413 .469 .406 .000 .000 .000 .429 .214 .286 .480 .800 .600 (b) IIITH .397 .408 .380 .538 .563 .520 .000 .000 .000 .625 .357 .455 .565 .867 .684 LAMA .306 .424 .298 .359 .344 .316 .167 .667 .267 .250 .071 .111 .500 .533 .516 NLP-UNED (1) \u2713 .361 .394 .369 .492 .531 .500 .000 .000 .000 .500 .714 .588 .583 .467 .519 UoS .618 .427 .451 .482 .469 .438 1.000 .333 .500 .375 .214 .273 .478",
"num": null,
"type_str": "table"
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>Task B -System Evaluation: (a) baselines, (b) system submissions, (c) systems utilising Task A.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table"
}
}
}
}