| Titles | Abstracts | Years | Categories | __index_level_0__ |
|---|---|---|---|---|
Large-scale Analysis of Counseling Conversations: An Application of
Natural Language Processing to Mental Health
|
Mental illness is one of the most pressing public health issues of our time.
While counseling and psychotherapy can be effective treatments, our knowledge
about how to conduct successful counseling conversations has been limited due
to a lack of large-scale data with labeled outcomes of the conversations. In this
paper, we present a large-scale, quantitative study on the discourse of
text-message-based counseling conversations. We develop a set of novel
computational discourse analysis methods to measure how various linguistic
aspects of conversations are correlated with conversation outcomes. Applying
techniques such as sequence-based conversation models, language model
comparisons, message clustering, and psycholinguistics-inspired word frequency
analyses, we discover actionable conversation strategies that are associated
with better conversation outcomes.
| 2016
|
Computation and Language
| 2,004
|
Modeling Interpersonal Influence of Verbal Behavior in Couples Therapy
Dyadic Interactions
|
Dyadic interactions among humans are marked by speakers continuously
influencing and reacting to each other in terms of responses and behaviors,
among others. Understanding how interpersonal dynamics affect behavior is
important for successful treatment in psychotherapy domains. Traditional
schemes that automatically identify behavior for this purpose have often looked
at only the target speaker. In this work, we propose a Markov model of how a
target speaker's behavior is influenced by their own past behavior as well as
their perception of their partner's behavior, based on lexical features. Apart
from incorporating additional potentially useful information, our model can
also control the degree to which the partner affects the target speaker. We
evaluate our proposed model on the task of classifying Negative behavior in
Couples Therapy and show that it is more accurate than the single-speaker
model. Furthermore, we investigate the degree to which the optimal influence
relates to how well a couple does in the long term, by relating it to
relationship outcomes.
| 2018
|
Computation and Language
| 5,875
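The mixture model described in this abstract can be illustrated with a minimal sketch. The behavior labels, transition tables, and influence weight `alpha` below are illustrative toy values, not quantities from the paper:

```python
# Minimal sketch of a partner-influenced Markov model of behavior.
# The next-behavior distribution is a convex mixture of a self-transition
# table and a partner-transition table; `alpha` controls the degree to
# which the partner affects the target speaker (alpha = 0 recovers the
# single-speaker model).

BEHAVIORS = ["negative", "neutral", "positive"]

P_SELF = {  # P(next | target speaker's own previous behavior)
    "negative": {"negative": 0.6, "neutral": 0.3, "positive": 0.1},
    "neutral":  {"negative": 0.2, "neutral": 0.6, "positive": 0.2},
    "positive": {"negative": 0.1, "neutral": 0.3, "positive": 0.6},
}
P_PARTNER = {  # P(next | perceived previous behavior of the partner)
    "negative": {"negative": 0.5, "neutral": 0.4, "positive": 0.1},
    "neutral":  {"negative": 0.2, "neutral": 0.5, "positive": 0.3},
    "positive": {"negative": 0.1, "neutral": 0.2, "positive": 0.7},
}

def next_distribution(own_prev, partner_prev, alpha):
    """Convex mixture of self- and partner-conditioned transitions."""
    return {
        b: (1 - alpha) * P_SELF[own_prev][b] + alpha * P_PARTNER[partner_prev][b]
        for b in BEHAVIORS
    }

dist = next_distribution("neutral", "negative", alpha=0.4)
```

In an actual system the transition tables would be estimated from lexical features rather than set by hand; the sketch only shows how the influence weight enters the model.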
|
Multi-label Multi-task Deep Learning for Behavioral Coding
|
We propose a methodology for estimating human behaviors in psychotherapy
sessions using multi-label and multi-task learning paradigms. We discuss the
problem of behavioral coding, in which data of human interactions is
annotated with labels to describe relevant human behaviors of interest. We
describe two related, yet distinct, corpora consisting of therapist-client
interactions in psychotherapy sessions. We experimentally compare the proposed
learning approaches for estimating behaviors of interest in these datasets.
Specifically, we compare single and multiple label learning approaches, single
and multiple task learning approaches, and evaluate the performance of these
approaches when incorporating turn context. We demonstrate the prediction
performance gains which can be achieved by using the proposed paradigms and
discuss the insights these models provide into these complex interactions.
| 2020
|
Computation and Language
| 7,342
|
Conversation Model Fine-Tuning for Classifying Client Utterances in
Counseling Dialogues
|
The recent surge of text-based online counseling applications enables us to
collect and analyze interactions between counselors and clients. A dataset of
those interactions can be used to learn to automatically classify the client
utterances into categories that help counselors in diagnosing client status and
predicting counseling outcome. With proper anonymization, we collect
counselor-client dialogues, define meaningful categories of client utterances
with professional counselors, and develop a novel neural network model for
classifying the client utterances. The central idea of our model, ConvMFiT, is
a pre-trained conversation model which consists of a general language model
built from an out-of-domain corpus and two role-specific language models built
from unlabeled in-domain dialogues. The classification result shows that
ConvMFiT outperforms state-of-the-art comparison models. Further, the attention
weights in the learned model confirm that the model finds expected linguistic
patterns for each category.
| 2019
|
Computation and Language
| 8,400
|
Modeling Interpersonal Linguistic Coordination in Conversations using
Word Mover's Distance
|
Linguistic coordination is a well-established phenomenon in spoken
conversations and often associated with positive social behaviors and outcomes.
While there have been many attempts to measure lexical coordination or
entrainment in the literature, only a few have explored coordination in syntactic
or semantic space. In this work, we attempt to combine these different aspects
of coordination into a single measure by leveraging distances in a neural word
representation space. In particular, we adopt the recently proposed Word
Mover's Distance with word2vec embeddings and extend it to measure the
dissimilarity in language used in multiple consecutive speaker turns. To
validate our approach, we apply this measure for two case studies in the
clinical psychology domain. We find that our proposed measure is correlated
with the therapist's empathy towards their patient in Motivational Interviewing
and with affective behaviors in Couples Therapy. In both case studies, our
proposed metric exhibits higher correlation than previously proposed measures.
When applied to couples with relationship improvement, we also notice a
significant decrease in the proposed measure over the course of therapy,
indicating higher linguistic coordination.
| 2019
|
Computation and Language
| 8,608
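The turn-level dissimilarity this abstract describes can be approximated with a relaxed (nearest-neighbour) variant of Word Mover's Distance; the toy two-dimensional vectors below are illustrative stand-ins for word2vec embeddings, not values from the paper:

```python
import math

# Toy word vectors; a real system would use pretrained word2vec embeddings.
VECS = {
    "sad":   (1.0, 0.0), "unhappy": (0.9, 0.1),
    "work":  (0.0, 1.0), "job":     (0.1, 0.9),
    "today": (0.5, 0.5),
}

def _dist(u, v):
    """Euclidean distance between two word vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def turn_dissimilarity(turn_a, turn_b):
    """Relaxed Word Mover's Distance: each word in turn_a 'moves' to its
    nearest neighbour in turn_b; lower values indicate stronger
    linguistic coordination between consecutive speaker turns."""
    costs = [min(_dist(VECS[w], VECS[v]) for v in turn_b) for w in turn_a]
    return sum(costs) / len(costs)

# A coordinated reply (near-synonyms) scores lower than an unrelated one.
coordinated = turn_dissimilarity(["sad", "work"], ["unhappy", "job"])
unrelated   = turn_dissimilarity(["sad", "work"], ["today"])
```

The full Word Mover's Distance solves an optimal-transport problem over all word pairs; the nearest-neighbour relaxation shown here is a common, cheaper lower bound.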
|
Finding Your Voice: The Linguistic Development of Mental Health
Counselors
|
Mental health counseling is an enterprise with profound societal importance
where conversations play a primary role. In order to acquire the conversational
skills needed to face a challenging range of situations, mental health
counselors must rely on training and on continued experience with actual
clients. However, in the absence of large scale longitudinal studies, the
nature and significance of this developmental process remain unclear. For
example, prior literature suggests that experience might not translate into
consequential changes in counselor behavior. This has led some to even argue
that counseling is a profession without expertise.
In this work, we develop a computational framework to quantify the extent to
which individuals change their linguistic behavior with experience and to study
the nature of this evolution. We use our framework to conduct a large
longitudinal study of mental health counseling conversations, tracking over
3,400 counselors across their tenure. We reveal that overall, counselors do
indeed change their conversational behavior to become more diverse across
interactions, developing an individual voice that distinguishes them from other
counselors. Furthermore, a finer-grained investigation shows that the rate and
nature of this diversification vary across functionally different
conversational components.
| 2019
|
Computation and Language
| 9,433
|
Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral
Codes
|
Automatically analyzing dialogue can help understand and guide behavior in
domains such as counseling, where interactions are largely mediated by
conversation. In this paper, we study modeling behavioral codes used to assess a
psychotherapy treatment style called Motivational Interviewing (MI), which is
effective for addressing substance abuse and related problems. Specifically, we
address the problem of providing real-time guidance to therapists with a
dialogue observer that (1) categorizes therapist and client MI behavioral codes
and, (2) forecasts codes for upcoming utterances to help guide the conversation
and potentially alert the therapist. For both tasks, we define neural network
models that build upon recent successes in dialogue modeling. Our experiments
demonstrate that our models can outperform several baselines for both tasks. We
also report the results of a careful analysis that reveals the impact of the
various network design tradeoffs for modeling therapy dialogue.
| 2019
|
Computation and Language
| 9,580
|
Predicting Behavior in Cancer-Afflicted Patient and Spouse Interactions
using Speech and Language
|
Cancer impacts the quality of life of those diagnosed as well as their spouse
caregivers, in addition to potentially influencing their day-to-day behaviors.
There is evidence that effective communication between spouses can improve
well-being related to cancer but it is difficult to efficiently evaluate the
quality of daily life interactions using manual annotation frameworks.
Automated recognition of behaviors based on the interaction cues of speakers
can help analyze interactions in such couples and identify behaviors which are
beneficial for effective communication. In this paper, we present and detail a
dataset of dyadic interactions in 85 real-life cancer-afflicted couples and a
set of observational behavior codes pertaining to interpersonal communication
attributes. We describe and employ neural network-based systems for classifying
these behaviors based on turn-level acoustic and lexical speech patterns.
Furthermore, we investigate the effect of controlling for factors such as
gender, patient/caregiver role and conversation content on behavior
classification. Analysis of our preliminary results indicates the challenges in
this task due to the nature of the targeted behaviors and suggests that
techniques incorporating contextual processing might be better suited to tackle
this problem.
| 2019
|
Computation and Language
| 9,872
|
An analysis of observation length requirements for machine understanding
of human behaviors from spoken language
|
The task of quantifying human behavior by observing interaction cues is an
important and useful one across a range of domains in psychological research
and practice. Machine learning-based approaches typically perform this task by
first estimating behavior based on cues within an observation window, such as a
fixed number of words, and then aggregating the behavior over all the windows
in that interaction. The length of this window directly impacts the accuracy of
estimation by controlling the amount of information being used. The exact link
between window length and accuracy, however, has not been well studied,
especially in spoken language. In this paper, we investigate this link and
present an analysis framework that determines appropriate window lengths for
the task of behavior estimation. Our proposed framework utilizes a two-pronged
evaluation approach: (a) extrinsic similarity between machine predictions and
human expert annotations, and (b) intrinsic consistency between intra-machine
and intra-human behavior relations. We apply our analysis to real-life
conversations that are annotated for a large and diverse set of behavior codes
and examine the relation between the nature of a behavior and how long it
should be observed. We find that behaviors describing negative and positive
affect can be accurately estimated from short to medium-length expressions
whereas behaviors related to problem-solving and dysphoria require much longer
observations and are difficult to quantify from language alone. These findings
are found to be generally consistent across different behavior modeling
approaches.
| 2020
|
Computation and Language
| 11,655
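The estimate-per-window-then-aggregate scheme whose window length the abstract studies can be sketched as follows; the keyword-count "estimator" is a deliberately simple stand-in for a trained behavior model:

```python
def estimate_behavior(words):
    """Stand-in behavior estimator: fraction of negative-affect words.
    A real system would apply a trained model to the window's text."""
    negative = {"angry", "sad", "hate"}
    return sum(w in negative for w in words) / len(words)

def windowed_estimate(words, window_len):
    """Estimate behavior within fixed-length word windows, then
    aggregate (mean) over all windows in the interaction. The choice of
    window_len controls how much information each estimate sees."""
    windows = [words[i:i + window_len]
               for i in range(0, len(words), window_len)]
    scores = [estimate_behavior(w) for w in windows]
    return sum(scores) / len(scores)
```

The paper's contribution is a framework for choosing `window_len` per behavior; the finding that affect needs short windows while problem-solving needs long ones corresponds to sweeping this parameter.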
|
Balancing Objectives in Counseling Conversations: Advancing Forwards or
Looking Backwards
|
Throughout a conversation, participants make choices that can orient the flow
of the interaction. Such choices are particularly salient in the consequential
domain of crisis counseling, where a difficulty for counselors is balancing
between two key objectives: advancing the conversation towards a resolution,
and empathetically addressing the crisis situation.
In this work, we develop an unsupervised methodology to quantify how
counselors manage this balance. Our main intuition is that if an utterance can
only receive a narrow range of appropriate replies, then its likely aim is to
advance the conversation forwards, towards a target within that range.
Likewise, an utterance that can only appropriately follow a narrow range of
possible utterances is likely aimed backwards at addressing a specific
situation within that range. By applying this intuition, we can map each
utterance to a continuous orientation axis that captures the degree to which it
is intended to direct the flow of the conversation forwards or backwards.
This unsupervised method allows us to characterize counselor behaviors in a
large dataset of crisis counseling conversations, where we show that known
counseling strategies intuitively align with this axis. We also illustrate how
our measure can be indicative of a conversation's progress, as well as its
effectiveness.
| 2020
|
Computation and Language
| 13,745
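The forward/backward intuition in this abstract can be mimicked with a crude entropy proxy: an utterance whose observed replies span a narrow range reads as forward-oriented, one whose observed predecessors span a narrow range as backward-oriented. This is an illustrative simplification, not the paper's actual unsupervised method:

```python
import math
from collections import Counter

def entropy(items):
    """Shannon entropy (bits) of the empirical distribution over items."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def orientation(observed_predecessors, observed_replies):
    """Toy orientation score: positive means the utterance constrains its
    replies more than its predecessors (forward-directed); negative means
    the reverse (backward-directed)."""
    return entropy(observed_predecessors) - entropy(observed_replies)

# An utterance whose replies are nearly always the same reads as
# forward-directed under this proxy.
fwd = orientation(["a", "b", "c", "d"], ["yes", "yes", "yes", "no"])
```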
|
A chatbot architecture for promoting youth resilience
|
E-health technologies have the potential to provide scalable and accessible
interventions for youth mental health. As part of developing an ecosystem of
e-screening and e-therapy tools for New Zealand young people, a dialog agent,
Headstrong, has been designed to promote resilience with methods grounded in
cognitive behavioral therapy and positive psychology. This paper describes the
architecture underlying the chatbot. The architecture supports a range of over
20 activities delivered in a 4-week program by relatable personas. The
architecture provides a visual authoring interface to its content management
system. In addition to supporting the original adolescent resilience chatbot,
the architecture has been reused to create a 3-week 'stress-detox' intervention
for undergraduates, and subsequently for a chatbot to support young people with
the impacts of the COVID-19 pandemic, with all three systems having been used
in field trials. The Headstrong architecture illustrates the feasibility of
creating a domain-focused authoring environment in the context of e-therapy
that supports non-technical expert input and rapid deployment.
| 2020
|
Computation and Language
| 13,862
|
Automated Quality Assessment of Cognitive Behavioral Therapy Sessions
Through Highly Contextualized Language Representations
|
During a psychotherapy session, the counselor typically adopts techniques
which are codified along specific dimensions (e.g., 'displays warmth and
confidence', or 'attempts to set up collaboration') to facilitate the
evaluation of the session. Those constructs, traditionally scored by trained
human raters, reflect the complex nature of psychotherapy and highly depend on
the context of the interaction. Recent advances in deep contextualized language
models offer an avenue for accurate in-domain linguistic representations which
can lead to robust recognition and scoring of such psychotherapy-relevant
behavioral constructs, and support quality assurance and supervision. In this
work, we propose a BERT-based model for automatic behavioral scoring of a
specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT),
where prior work is limited to frequency-based language features and/or short
text excerpts which do not capture the unique elements involved in a
spontaneous long conversational interaction. The model focuses on the
classification of therapy sessions with respect to the overall score achieved
on the widely-used Cognitive Therapy Rating Scale (CTRS), but is trained in a
multi-task manner in order to achieve higher interpretability. BERT-based
representations are further augmented with available therapy metadata,
providing relevant non-linguistic context and leading to consistent performance
improvements. We train and evaluate our models on a set of 1,118 real-world
therapy sessions, recorded and automatically transcribed. Our best model
achieves an F1 score equal to 72.61% on the binary classification task of low
vs. high total CTRS.
| 2021
|
Computation and Language
| 18,274
|
Towards Automated Psychotherapy via Language Modeling
|
In this experiment, a model was devised, trained, and evaluated to automate
psychotherapist/client text conversations through the use of state-of-the-art,
Seq2Seq Transformer-based Natural Language Generation (NLG) systems. Through
training the model upon a mix of the Cornell Movie Dialogue Corpus for language
understanding and an open-source, anonymized, and public licensed
psychotherapeutic dataset, the model achieved statistically significant
performance in published, standardized qualitative benchmarks against
human-written validation data - meeting or exceeding human-written responses'
performance in 59.7% and 67.1% of the test set for two independent test methods
respectively. Although the model cannot replace the work of psychotherapists
entirely, its ability to synthesize human-appearing utterances for the majority
of the test set serves as a promising step towards communizing and easing
stigma at the psychotherapeutic point-of-care.
| 2021
|
Computation and Language
| 19,430
|
An Automated Quality Evaluation Framework of Psychotherapy Conversations
with Local Quality Estimates
|
Text-based computational approaches for assessing the quality of
psychotherapy are being developed to support quality assurance and clinical
training. However, due to the long durations of typical conversation-based
therapy sessions and the limited annotated modeling resources,
computational methods largely rely on frequency-based lexical features or
dialogue acts to assess the overall session level characteristics. In this
work, we propose a hierarchical framework to automatically evaluate the quality
of transcribed Cognitive Behavioral Therapy (CBT) interactions. Given the
richly dynamic nature of the spoken dialog within a talk therapy session, to
evaluate the overall session level quality, we propose to consider modeling it
as a function of local variations across the interaction. To implement that
empirically, we divide each psychotherapy session into conversation segments
and initialize the segment-level qualities with the session-level scores.
First, we produce segment embeddings by fine-tuning a BERT-based model, and
predict segment-level (local) quality scores. These embeddings are used as the
lower-level input to a Bidirectional LSTM-based neural network to predict the
session-level (global) quality estimates. In particular, we model the global
quality as a linear function of the local quality scores, which allows us to
update the segment-level quality estimates based on the session-level quality
prediction. These newly estimated segment-level scores benefit the BERT
fine-tuning process, which in turn results in better segment embeddings. We
evaluate the proposed framework on automatically derived transcriptions from
real-world CBT clinical recordings to predict session-level behavior codes. The
results indicate that our approach leads to improved evaluation accuracy for
most codes when used for both regression and classification tasks.
| 2022
|
Computation and Language
| 20,601
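The global-as-linear-function-of-local idea in this abstract can be sketched numerically; the uniform weights and the simple nudge-toward-the-session-score update below are illustrative stand-ins for the paper's learned weights and BERT-in-the-loop refinement:

```python
def global_quality(local_scores, weights=None, bias=0.0):
    """Session-level (global) quality as a linear function of
    segment-level (local) quality scores."""
    if weights is None:
        weights = [1.0 / len(local_scores)] * len(local_scores)
    return bias + sum(w * s for w, s in zip(weights, local_scores))

def refine_locals(local_scores, session_score, lr=0.5):
    """One illustrative refinement step: move each segment score part-way
    toward the session-level prediction, mimicking how the framework
    updates segment-level estimates from the global prediction."""
    return [s + lr * (session_score - s) for s in local_scores]

g = global_quality([2.0, 4.0])          # uniform-weight average
updated = refine_locals([2.0, 4.0], g)  # segments pulled toward g
```

In the paper this loop also feeds the updated segment scores back into BERT fine-tuning, which the sketch omits.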
|
Speaker and Time-aware Joint Contextual Learning for Dialogue-act
Classification in Counselling Conversations
|
The onset of the COVID-19 pandemic has brought the mental health of people
under risk. Social counselling has gained remarkable significance in this
environment. Unlike general goal-oriented dialogues, a conversation between a
patient and a therapist is considerably implicit, though the objective of the
conversation is quite apparent. In such a case, understanding the intent of the
patient is imperative in providing effective counselling in therapy sessions,
and the same applies to a dialogue system as well. In this work, we take a
small but important step in the development of an automated
dialogue system for mental-health counselling. We develop a novel dataset,
named HOPE, to provide a platform for the dialogue-act classification in
counselling conversations. We identify the requirements of such conversations and
propose twelve domain-specific dialogue-act (DAC) labels. We collect 12.9K
utterances from publicly-available counselling session videos on YouTube,
extract their transcripts, clean, and annotate them with DAC labels. Further,
we propose SPARTA, a transformer-based architecture with a novel speaker- and
time-aware contextual learning for the dialogue-act classification. Our
evaluation shows convincing performance over several baselines, achieving
state-of-the-art on HOPE. We also supplement our experiments with extensive
empirical and qualitative analyses of SPARTA.
| 2021
|
Computation and Language
| 23,463
|
Mental Health Assessment for the Chatbots
|
Previous research on dialogue system assessment usually focuses on the
quality evaluation (e.g., fluency, relevance) of responses generated by the
chatbots, which are local and technical metrics. For a chatbot which responds
to millions of online users including minors, we argue that it should have a
healthy mental tendency in order to avoid the negative psychological impact on
them. In this paper, we establish several mental health assessment dimensions
for chatbots (depression, anxiety, alcohol addiction, empathy) and introduce
the questionnaire-based mental health assessment methods. We conduct
assessments on some well-known open-domain chatbots and find that there are
severe mental health issues for all these chatbots. We attribute this to the
neglect of mental health risks during the dataset building and model
training procedures. We expect to attract researchers' attention to the
serious mental health problems of chatbots and improve the chatbots' ability in
positive emotional interaction.
| 2022
|
Computation and Language
| 24,310
|
Towards Automated Real-time Evaluation in Text-based Counseling
|
Automated real-time evaluation of counselor-client interaction is important
for ensuring quality counseling but the rules are difficult to articulate.
Recent advancements in machine learning methods show the possibility of
learning such rules automatically. However, these methods often demand large
scale and high quality counseling data, which are difficult to collect. To
address this issue, we build an online counseling platform, which allows
professional psychotherapists to provide free counseling services to those in
need. In exchange, we collect the counseling transcripts. Within a year of
its operation, we manage to collect one of the largest sets of counseling
session transcripts (675). To further leverage the valuable data we have, we label
our dataset using both coarse- and fine-grained labels and use a set of
pretraining techniques. In the end, we are able to achieve practically useful
accuracy in both labeling systems.
| 2022
|
Computation and Language
| 24,960
|
Neural Topic Modeling of Psychotherapy Sessions
|
In this work, we compare different neural topic modeling methods in learning
the topical propensities of different psychiatric conditions from the
psychotherapy session transcripts parsed from speech recordings. We also
incorporate temporal modeling to put this additional interpretability to action
by parsing out topic similarities as a time series at a turn-level resolution.
We believe this topic modeling framework can offer interpretable insights for
the therapist to optimally decide his or her strategy and improve psychotherapy
effectiveness.
| 2022
|
Computation and Language
| 26,068
|
D4: a Chinese Dialogue Dataset for Depression-Diagnosis-Oriented Chat
|
In a depression-diagnosis-directed clinical session, doctors initiate a
conversation with ample emotional support that guides the patients to expose
their symptoms based on clinical diagnosis criteria. Such a dialogue system is
distinguished from existing single-purpose human-machine dialog systems, as it
combines task-oriented dialogue and chit-chat, with uniqueness in dialogue topics and
procedures. However, due to the social stigma associated with mental illness,
the dialogue data related to depression consultation and diagnosis are rarely
disclosed. Based on clinical depression diagnostic criteria ICD-11 and DSM-5,
we designed a 3-phase procedure to construct D$^4$: a Chinese Dialogue Dataset
for Depression-Diagnosis-Oriented Chat, which simulates the dialogue between
doctors and patients during the diagnosis of depression, including diagnosis
results and symptom summary given by professional psychiatrists for each
conversation. Upon the newly-constructed dataset, four tasks mirroring the
depression diagnosis process are established: response generation, topic
prediction, dialog summary, and severity classification of depressive episode
and suicide risk. Multi-scale evaluation results demonstrate that a more
empathy-driven and diagnostically accurate consultation dialogue system trained on
our dataset can be achieved compared to rule-based bots.
| 2022
|
Computation and Language
| 26,869
|
What can Speech and Language Tell us About the Working Alliance in
Psychotherapy
|
We are interested in the problem of conversational analysis and its
application to the health domain. Cognitive Behavioral Therapy is a structured
approach in psychotherapy, allowing the therapist to help the patient to
identify and modify maladaptive thoughts, behaviors, or actions. This
cooperative effort can be evaluated using the Working Alliance Inventory
Observer-rated Shortened - a 12-item inventory covering task, goal, and
relationship - which has a relevant influence on therapeutic outcomes. In this
work, we investigate the relation between this alliance inventory and the
spoken conversations (sessions) between the patient and the psychotherapist. We
have delivered eight weeks of e-therapy, collected their audio and video call
sessions, and manually transcribed them. The spoken conversations have been
annotated and evaluated with WAI ratings by professional therapists. We have
investigated speech and language features and their association with WAI items.
The feature types include turn dynamics, lexical entrainment, and
conversational descriptors extracted from the speech and language signals. Our
findings provide strong evidence that a subset of these features are strong
indicators of working alliance. To the best of our knowledge, this is the
first study to exploit speech and language for characterising working
alliance.
| 2022
|
Computation and Language
| 27,361
|
Automated Utterance Labeling of Conversations Using Natural Language
Processing
|
Conversational data is essential in psychology because it can help
researchers understand individuals' cognitive processes, emotions, and
behaviors. Utterance labelling is a common strategy for analyzing this type of
data. The development of NLP algorithms allows researchers to automate this
task. However, psychological conversational data present some challenges to NLP
researchers, including multilabel classification, a large number of classes,
and limited available data. This study explored how automated labels generated
by NLP methods are comparable to human labels in the context of conversations
on adulthood transition. We proposed strategies to handle three common
challenges raised in psychological studies. Our findings showed that the deep
learning method with domain adaptation (RoBERTa-CON) outperformed all other
machine learning methods; and the hierarchical labelling system that we
proposed was shown to help researchers strategically analyze conversational
data. Our Python code and NLP model are available at
https://github.com/mlaricheva/automated_labeling.
| 2022
|
Computation and Language
| 28,002
|
SupervisorBot: NLP-Annotated Real-Time Recommendations of Psychotherapy
Treatment Strategies with Deep Reinforcement Learning
|
We propose a recommendation system that suggests treatment strategies to a
therapist during the psychotherapy session in real-time. Our system uses a
turn-level rating mechanism that predicts the therapeutic outcome by computing
a similarity score between the deep embedding of a scoring inventory, and the
current sentence that the patient is speaking. The system automatically
transcribes a continuous audio stream and separates it into turns of the
patient and of the therapist, and performs real-time inference of their
therapeutic working alliance. The dialogue pairs along with their computed
working alliance as ratings are then fed into a deep reinforcement learning
recommendation system where the sessions are treated as users and the topics
are treated as items. Other than evaluating the empirical advantages of the
core components on an existing dataset of psychotherapy sessions, we
demonstrate the effectiveness of this system in a web app.
| 2022
|
Computation and Language
| 28,179
|
Chatbots for Mental Health Support: Exploring the Impact of Emohaa on
Reducing Mental Distress in China
|
The growing demand for mental health support has highlighted the importance
of conversational agents as human supporters worldwide and in China. These
agents could increase availability and reduce the relative costs of mental
health support. The provided support can be divided into two main types:
cognitive and emotional support. Existing work on this topic mainly focuses on
constructing agents that adopt Cognitive Behavioral Therapy (CBT) principles.
Such agents operate based on pre-defined templates and exercises to provide
cognitive support. However, research on emotional support using such agents is
limited. In addition, most of the constructed agents operate in English,
highlighting the importance of conducting such studies in China. In this study,
we analyze the effectiveness of Emohaa in reducing symptoms of mental distress.
Emohaa is a conversational agent that provides cognitive support through
CBT-based exercises and guided conversations. It also emotionally supports
users by enabling them to vent their desired emotional problems. The study
included 134 participants, split into three groups: Emohaa (CBT-based), Emohaa
(Full), and control. Experimental results demonstrated that compared to the
control group, participants who used Emohaa experienced significantly
greater improvements in symptoms of mental distress. We also found that
adding the emotional support agent had a complementary effect on such
improvements, mainly depression and insomnia. Based on the obtained results and
participants' satisfaction with the platform, we concluded that Emohaa is a
practical and effective tool for reducing mental distress.
| 2022
|
Computation and Language
| 28,530
|
Leveraging Open Data and Task Augmentation to Automated Behavioral
Coding of Psychotherapy Conversations in Low-Resource Scenarios
|
In psychotherapy interactions, the quality of a session is assessed by
codifying the communicative behaviors of participants during the conversation
through manual observation and annotation. Developing computational approaches
for automated behavioral coding can reduce the burden on human coders and
facilitate the objective evaluation of the intervention. In the real world,
however, implementing such algorithms is associated with data sparsity
challenges since privacy concerns lead to limited available in-domain data. In
this paper, we leverage a publicly available conversation-based dataset and
transfer knowledge to the low-resource behavioral coding task by performing
intermediate language model training via meta-learning. We introduce a task
augmentation method to produce a large number of "analogy tasks" - tasks
similar to the target one - and demonstrate that the proposed framework
predicts target behaviors more accurately than all the other baseline models.
| 2,022
|
Computation and Language
| 29,664
|
Working Alliance Transformer for Psychotherapy Dialogue Classification
|
As a predictive measure of the treatment outcome in psychotherapy, the
working alliance measures the agreement of the patient and the therapist in
terms of their bond, task and goal. Though it has long been a clinical quantity
estimated from patients' and therapists' self-evaluative reports, we believe
that the working alliance can be better characterized by applying natural
language processing techniques directly to the dialogue transcribed in each
therapy session. In this work, we propose the Working Alliance Transformer
(WAT), a Transformer-based classification model with a psychological state
encoder which infers the working alliance scores by projecting the embeddings
of the dialogue turns onto
the embedding space of the clinical inventory for working alliance. We evaluate
our method in a real-world dataset with over 950 therapy sessions with anxiety,
depression, schizophrenia and suicidal patients and demonstrate an empirical
advantage of using information about the therapeutic states in this sequence
classification task of psychotherapy dialogues.
| 2,022
|
Computation and Language
| 29,774
|
Conversational Pattern Mining using Motif Detection
|
The subject of conversational mining has become of great interest recently
due to the explosion of social and other online media. Supplementing this
explosion of text is the advancement in pre-trained language models which have
helped us to leverage these sources of information. Conversations are an
interesting domain to analyse in terms of complexity and value. Complexity
arises because a conversation can be asynchronous and can involve multiple
parties; it is also computationally intensive to process. In our work we use
unsupervised methods to develop a conversational pattern mining technique which
does not require time-consuming, knowledge-demanding and resource-intensive
labelling exercises. The task of identifying repeating patterns in sequences is
well researched in the Bioinformatics field. In our work, we adapt this to the
field of Natural Language Processing and make several extensions to a motif
detection algorithm. To demonstrate the application of the algorithm on a
dynamic, real-world data set, we extract motifs from an open-source film script
data source. We run an exploratory investigation into the types of motifs we
are able to mine.
| 2,022
|
Computation and Language
| 30,171
|
GDPR Compliant Collection of Therapist-Patient-Dialogues
|
According to the Global Burden of Disease list provided by the World Health
Organization (WHO), mental disorders are among the most debilitating
disorders. To improve diagnosis and therapy effectiveness, researchers have in
recent years tried to identify individual biomarkers. Gathering
neurobiological data, however, is costly and time-consuming. Another potential
source of information, one that is already part of the clinical routine, is
therapist-patient dialogues. While there are some pioneering works
investigating the role of language as a predictor of various therapeutic
parameters, for example the patient-therapist alliance, there are no
large-scale studies. A major obstacle to conducting these studies is the availability of
sizeable datasets, which are needed to train machine learning models. While
these conversations are part of the daily routine of clinicians, gathering them
is usually hindered by various ethical (purpose of data usage), legal (data
privacy) and technical (data formatting) limitations. Some of these limitations
are particular to the domain of therapy dialogues, like the increased
difficulty in anonymisation, or the transcription of the recordings. In this
paper, we elaborate on the challenges we faced in starting our collection of
therapist-patient dialogues in a psychiatry clinic under the General Data
Privacy Regulation of the European Union with the goal to use the data for
Natural Language Processing (NLP) research. We give an overview of each step in
our procedure and point out the potential pitfalls to motivate further research
in this field.
| 2,022
|
Computation and Language
| 30,416
|
Routine Outcome Monitoring in Psychotherapy Treatment using
Sentiment-Topic Modelling Approach
|
Despite the importance of selecting the right psychotherapy treatment for
an individual patient, assessing the outcome of the therapy sessions is equally
crucial. Evidence shows that continuously monitoring a patient's progress can
significantly improve therapy outcomes toward the expected change. By
monitoring the outcome, the patient's progress can be tracked closely to help
clinicians identify patients who are not progressing in the treatment. Such
monitoring can help the clinician consider any necessary actions for the
patient's treatment as early as possible, e.g., recommending a different type
of treatment or adjusting the style of approach. Currently, the evaluation
system is based on clinician-rated and self-report questionnaires that measure
patients' progress pre- and post-treatment. While outcome monitoring tends to
improve therapy outcomes, the current method faces many challenges, e.g., the
time and financial burden of administering questionnaires and of scoring and
analysing the results. Therefore, a computational method for measuring and
monitoring patient progress over the course of treatment is needed, in order to
enhance the likelihood of a positive treatment outcome. Moreover, this
computational method could potentially lead to an inexpensive monitoring tool
to evaluate patients' progress in clinical care that could be administered by a
wider range of health-care professionals.
| 2,022
|
Computation and Language
| 30,900
|
Deep Learning Mental Health Dialogue System
|
Mental health counseling remains a major challenge in modern society due to
cost, stigma, fear, and unavailability. We posit that generative artificial
intelligence (AI) models designed for mental health counseling could help
improve outcomes by lowering barriers to access. To this end, we have developed
a deep learning (DL) dialogue system called Serena. The system consists of a
core generative model and post-processing algorithms. The core generative model
is a 2.7 billion parameter Seq2Seq Transformer fine-tuned on thousands of
transcripts of person-centered-therapy (PCT) sessions. The series of
post-processing algorithms detects contradictions, improves coherency, and
removes repetitive answers. Serena is implemented and deployed on
\url{https://serena.chat}, which currently offers limited free services. While
the dialogue system is capable of responding in a qualitatively empathetic and
engaging manner, occasionally it displays hallucination and long-term
incoherence. Overall, we demonstrate that a deep learning mental health
dialogue system has the potential to provide a low-cost and effective
complement to traditional human counselors, with fewer barriers to access.
| 2,022
|
Computation and Language
| 31,552
|
Response-act Guided Reinforced Dialogue Generation for Mental Health
Counseling
|
Virtual Mental Health Assistants (VMHAs) have become a prevalent method for
receiving mental health counseling in the digital healthcare space. An
assistive counseling conversation commences with natural open-ended topics to
familiarize the client with the environment and later converges into more
fine-grained domain-specific topics. Unlike other conversational systems, which
are categorized as open-domain or task-oriented systems, VMHAs possess a hybrid
conversational flow. These counseling bots need to comprehend various aspects
of the conversation, such as dialogue-acts, intents, etc., to engage the client
in an effective conversation. Although the surge in digital health research
highlights applications of many general-purpose response generation systems,
they are barely suitable in the mental health domain -- the prime reason being
a lack of understanding of mental health counseling. Moreover, in general,
dialogue-act guided response generators are either limited to a template-based
paradigm or lack appropriate semantics. To this end, we propose READER -- a
REsponse-Act guided reinforced Dialogue genERation model for mental health
counseling conversations. READER is built on a transformer to jointly predict a
potential dialogue-act d(t+1) for the next utterance (aka the response-act) and
to generate an appropriate response u(t+1). Through transformer reinforcement
learning (TRL) with Proximal Policy Optimization (PPO), we guide the response
generator to abide by d(t+1) and ensure the semantic richness of the responses
via BERTScore in our reward computation. We evaluate READER on HOPE, a
benchmark counseling conversation dataset, and observe that it outperforms
several baselines across multiple evaluation metrics
-- METEOR, ROUGE, and BERTScore. We also furnish extensive qualitative and
quantitative analyses on results, including error analysis, human evaluation,
etc.
| 2,023
|
Computation and Language
| 31,671
|
TherapyView: Visualizing Therapy Sessions with Temporal Topic Modeling
and AI-Generated Arts
|
We present the TherapyView, a demonstration system to help therapists
visualize the dynamic contents of past treatment sessions, enabled by the
state-of-the-art neural topic modeling techniques to analyze the topical
tendencies of various psychiatric conditions and deep learning-based image
generation engine to provide a visual summary. The system incorporates temporal
modeling to provide a time-series representation of topic similarities at
turn-level resolution, along with AI-generated artworks for the dialogue
segments that give a concise representation of the contents covered in the
session, offering interpretable insights for therapists to optimize their
strategies and enhance the effectiveness of psychotherapy. This system provides
a proof of concept for AI-augmented therapy tools, with an in-depth
understanding of the patient's mental state enabling more effective treatment.
| 2,023
|
Computation and Language
| 32,095
|
Demo Alleviate: Demonstrating Artificial Intelligence Enabled Virtual
Assistance for Telehealth: The Mental Health Case
|
After the pandemic, artificial intelligence (AI) powered support for mental
health care has become increasingly important. The broad and complex challenges
that must be addressed to provide adequate care involve: (a) Personalized
patient understanding, (b) Safety-constrained and medically validated
chatbot-patient interactions, and (c) Support for continued
feedback-based refinements in design using chatbot-patient interactions. We
propose Alleviate, a chatbot designed to assist patients suffering from mental
health challenges with personalized care and assist clinicians with
understanding their patients better. Alleviate draws from an array of publicly
available clinically valid mental-health texts and databases, allowing
Alleviate to make medically sound and informed decisions. In addition,
Alleviate's modular design and explainable decision-making lends itself to
robust and continued feedback-based refinements to its design. In this paper,
we explain the different modules of Alleviate and submit a short video
demonstrating Alleviate's capabilities to help patients and clinicians
understand each other better to facilitate optimal care strategies.
| 2,023
|
Computation and Language
| 32,768
|
Cognitive Reframing of Negative Thoughts through Human-Language Model
Interaction
|
A proven therapeutic technique to overcome negative thoughts is to replace
them with a more hopeful "reframed thought." Although therapy can help people
practice and learn this Cognitive Reframing of Negative Thoughts, clinician
shortages and mental health stigma commonly limit people's access to therapy.
In this paper, we conduct a human-centered study of how language models may
assist people in reframing negative thoughts. Based on psychology literature,
we define a framework of seven linguistic attributes that can be used to
reframe a thought. We develop automated metrics to measure these attributes and
validate them with expert judgements from mental health practitioners. We
collect a dataset of 600 situations, thoughts and reframes from practitioners
and use it to train a retrieval-enhanced in-context learning model that
effectively generates reframed thoughts and controls their linguistic
attributes. To investigate what constitutes a "high-quality" reframe, we
conduct an IRB-approved randomized field study on a large mental health website
with over 2,000 participants. Amongst other findings, we show that people
prefer highly empathic or specific reframes, as opposed to reframes that are
overly positive. Our findings provide key implications for the use of LMs to
assist people in overcoming negative thoughts.
| 2,023
|
Computation and Language
| 33,452
|
Boosting Distress Support Dialogue Responses with Motivational
Interviewing Strategy
|
AI-driven chatbots have become an emerging solution to address psychological
distress. Due to the lack of psychotherapeutic data, researchers use dialogues
scraped from online peer support forums to train them. But since the responses
in such platforms are not given by professionals, they contain both conforming
and non-conforming responses. In this work, we attempt to recognize these
conforming and non-conforming response types present in online distress-support
dialogues using labels adapted from a well-established behavioral coding scheme
named the Motivational Interviewing Treatment Integrity (MITI) code, and show
how some response types could be rephrased into a more MI-adherent form that
can, in turn, enable chatbot responses to be more compliant with the MI
strategy. As a proof of concept, we build several rephrasers by fine-tuning
Blender and GPT-3 to rephrase MI non-adherent "Advise without permission"
responses into "Advise with permission". We show how this can be achieved by
constructing pseudo-parallel corpora, avoiding the cost of human labor. Through
automatic and human evaluation, we show that when training data is limited,
techniques such as prompting and data augmentation can be used to produce
substantially good rephrasings that reflect the intended style and preserve the
content of the original text.
| 2,023
|
Computation and Language
| 33,969
|
LLM-empowered Chatbots for Psychiatrist and Patient Simulation:
Application and Evaluation
|
Empowering chatbots in the field of mental health is receiving an increasing
amount of attention, yet there is still little exploration of developing and
evaluating chatbots in psychiatric outpatient scenarios. In this work, we focus
on exploring the potential of ChatGPT in powering chatbots for psychiatrist and
patient simulation. We collaborate with psychiatrists to identify objectives
and iteratively develop the dialogue system to closely align with real-world
scenarios. In the evaluation experiments, we recruit real psychiatrists and
patients to engage in diagnostic conversations with the chatbots, collecting
their ratings for assessment. Our findings demonstrate the feasibility of using
ChatGPT-powered chatbots in psychiatric scenarios and explore the impact of
prompt designs on chatbot behavior and user experience.
| 2,023
|
Computation and Language
| 34,409
|
Ask an Expert: Leveraging Language Models to Improve Strategic Reasoning
in Goal-Oriented Dialogue Models
|
Existing dialogue models may encounter scenarios which are not
well-represented in the training data, and as a result generate responses that
are unnatural, inappropriate, or unhelpful. We propose the "Ask an Expert"
framework in which the model is trained with access to an "expert" which it can
consult at each turn. Advice is solicited via a structured dialogue with the
expert, and the model is optimized to selectively utilize (or ignore) it given
the context and dialogue history. In this work the expert takes the form of an
LLM. We evaluate this framework in a mental health support domain, where the
structure of the expert conversation is outlined by pre-specified prompts which
reflect a reasoning strategy taught to practitioners in the field. Blenderbot
models utilizing "Ask an Expert" show quality improvements across all expert
sizes, including those with fewer parameters than the dialogue model itself.
Our best model provides a $\sim 10\%$ improvement over baselines, approaching
human-level scores on "engagingness" and "helpfulness" metrics.
| 2,023
|
Computation and Language
| 35,123
|
Understanding Client Reactions in Online Mental Health Counseling
|
Communication success relies heavily on reading participants' reactions. Such
feedback is especially important for mental health counselors, who must
carefully consider the client's progress and adjust their approach accordingly.
However, previous NLP research on counseling has mainly focused on studying
counselors' intervention strategies rather than their clients' reactions to the
intervention. This work aims to fill this gap by developing a theoretically
grounded annotation framework that encompasses counselors' strategies and
client reaction behaviors. The framework has been tested against a large-scale,
high-quality text-based counseling dataset we collected over the past two years
from an online welfare counseling platform. Our study shows how clients react
to counselors' strategies, how such reactions affect the final counseling
outcomes, and how counselors can adjust their strategies in response to these
reactions. We also demonstrate that this study can help counselors
automatically predict their clients' states.
| 2,023
|
Computation and Language
| 36,171
|
Training Models to Generate, Recognize, and Reframe Unhelpful Thoughts
|
Many cognitive approaches to well-being, such as recognizing and reframing
unhelpful thoughts, have received considerable empirical support over the past
decades, yet still lack truly widespread adoption in a self-help format. A
barrier to that adoption is a lack of adequately specific and diverse dedicated
practice material. This work examines whether current language models can be
leveraged to both produce a virtually unlimited quantity of practice material
illustrating standard unhelpful thought patterns matching specific given
contexts, and generate suitable positive reframing proposals. We propose
PATTERNREFRAME, a novel dataset of about 10k examples of thoughts containing
unhelpful thought patterns conditioned on a given persona, accompanied by about
27k positive reframes. By using this dataset to train and/or evaluate current
models, we show that existing models can already be powerful tools to help
generate an abundance of tailored practice material and hypotheses, with no or
minimal additional model training required.
| 2,023
|
Computation and Language
| 36,398
|
Psy-LLM: Scaling up Global Mental Health Psychological Services with
AI-based Large Language Models
|
The demand for psychological counselling has grown significantly in recent
years, particularly with the global outbreak of COVID-19, which has heightened
the need for timely and professional mental health support. Online
psychological counselling has emerged as the predominant mode of providing
services in response to this demand. In this study, we propose the Psy-LLM
framework, an AI-based assistive tool leveraging Large Language Models (LLMs)
for question-answering in psychological consultation settings to ease the
demand on mental health professionals. Our framework combines pre-trained LLMs
with real-world professional Q\&A from psychologists and extensively crawled
psychological articles. The Psy-LLM framework serves as a front-end tool for
healthcare professionals, allowing them to provide immediate responses and
mindfulness activities to alleviate patient stress. Additionally, it functions
as a screening tool to identify urgent cases requiring further assistance. We
evaluated the framework using intrinsic metrics, such as perplexity, and
extrinsic evaluation metrics, with human participant assessments of response
helpfulness, fluency, relevance, and logic. The results demonstrate the
effectiveness of the Psy-LLM framework in generating coherent and relevant
answers to psychological questions. This article discusses the potential and
limitations of using large language models to enhance mental health support
through AI technologies.
| 2,023
|
Computation and Language
| 36,720
|
Dynamic Strategy Chain: Dynamic Zero-Shot CoT for Long Mental Health
Support Generation
|
Long counseling Text Generation for Mental health support (LTGM), an
innovative and challenging task, aims to provide help-seekers with mental
health support through a comprehensive and more acceptable response. The
combination of chain-of-thought (CoT) prompting and Large Language Models
(LLMs) has been employed to achieve state-of-the-art (SOTA) performance on
various NLP tasks, especially text generation tasks. Zero-shot CoT prompting is
one of the most common methods in CoT prompting. However, in the LTGM task,
Zero-shot CoT prompting cannot simulate a counselor or provide personalized
strategies without effective mental health counseling strategy prompts. To
tackle this challenge, we propose a zero-shot Dynamic Strategy Chain (DSC)
prompting method. Firstly, we utilize GPT-2 to learn the responses written by mental
health counselors and dynamically generate mental health counseling strategies
tailored to the help-seekers' needs. Secondly, the Zero-shot DSC prompting is
constructed according to mental health counseling strategies and the
help-seekers' post. Finally, the Zero-shot DSC prompting is employed to guide
LLMs in generating more human-like responses for the help-seekers. Both
automatic and manual evaluations demonstrate that Zero-shot DSC prompting can
deliver more human-like responses than CoT prompting methods on LTGM tasks.
| 2,023
|
Computation and Language
| 37,304
|
ChatCounselor: A Large Language Models for Mental Health Support
|
This paper presents ChatCounselor, a large language model (LLM) solution
designed to provide mental health support. Unlike generic chatbots,
ChatCounselor is distinguished by its foundation in real conversations between
consulting clients and professional psychologists, enabling it to possess
specialized knowledge and counseling skills in the field of psychology. The
training dataset, Psych8k, was constructed from 260 in-depth interviews, each
spanning an hour. To assess the quality of counseling responses, the Counseling
Bench was devised. Leveraging GPT-4 and meticulously crafted prompts based on
seven metrics of psychological counseling assessment, the model underwent
evaluation using a set of real-world counseling questions. Impressively,
ChatCounselor surpasses existing open-source models on the Counseling Bench and
approaches the performance level of ChatGPT, showcasing the remarkable
enhancement in model capability attained through high-quality domain-specific
data.
| 2,023
|
Computation and Language
| 38,232
|
An Integrative Survey on Mental Health Conversational Agents to Bridge
Computer Science and Medical Perspectives
|
Mental health conversational agents (a.k.a. chatbots) are widely studied for
their potential to offer accessible support to those experiencing mental health
challenges. Previous surveys on the topic primarily consider papers published
in either computer science or medicine, leading to a divide in understanding
and hindering the sharing of beneficial knowledge between both domains. To
bridge this gap, we conduct a comprehensive literature review using the PRISMA
framework, reviewing 534 papers published in both computer science and
medicine. Our systematic review reveals 136 key papers on building mental
health-related conversational agents with diverse characteristics of modeling
and experimental design techniques. We find that computer science papers focus
on LLM techniques and on evaluating response quality using automated metrics,
with little attention to the application, while medical papers use rule-based
conversational agents and outcome metrics to measure the health outcomes of
participants. Based on our findings on transparency, ethics, and cultural
heterogeneity in this review, we provide a few recommendations to help bridge
the disciplinary divide and enable the cross-disciplinary development of mental
health conversational agents.
| 2,023
|
Computation and Language
| 39,658
|
VERVE: Template-based ReflectiVE Rewriting for MotiVational IntErviewing
|
Reflective listening is a fundamental skill that counselors must acquire to
achieve proficiency in motivational interviewing (MI). It involves responding
in a manner that acknowledges and explores the meaning of what the client has
expressed in the conversation. In this work, we introduce the task of
counseling response rewriting, which transforms non-reflective statements into
reflective responses. We introduce VERVE, a template-based rewriting system
with paraphrase-augmented training and adaptive template updating. VERVE first
creates a template by identifying and filtering out tokens that are not
relevant to reflections and constructs a reflective response using the
template. Paraphrase-augmented training allows the model to learn less-strict
fillings of masked spans, and adaptive template updating helps discover
effective templates for rewriting without significantly removing the original
content. Using both automatic and human evaluations, we compare our method
against text rewriting baselines and show that our framework is effective in
turning non-reflective statements into more reflective responses while
achieving a good content preservation-reflection style trade-off.
| 2,023
|
Computation and Language
| 40,457
|
PsyChat: A Client-Centric Dialogue System for Mental Health Support
|
Dialogue systems are increasingly integrated into mental health support to
help clients facilitate exploration, gain insight, take action, and ultimately
heal themselves. For a dialogue system to be practical and user-friendly, it
should be client-centric, focusing on the client's behaviors. However, existing
dialogue systems publicly available for mental health support often concentrate
solely on the counselor's strategies rather than the behaviors expressed by
clients. This can lead to the implementation of unreasonable or inappropriate
counseling strategies and corresponding responses from the dialogue system. To
address this issue, we propose PsyChat, a client-centric dialogue system that
provides psychological support through online chat. The client-centric dialogue
system comprises five modules: client behavior recognition, counselor strategy
selection, an input packer, a response generator intentionally fine-tuned to
produce suitable responses, and response selection. Both automatic and human evaluations
demonstrate the effectiveness and practicality of our proposed dialogue system
for real-life mental health support. Furthermore, we employ our proposed
dialogue system to simulate a real-world client-virtual-counselor interaction
scenario. The system is capable of predicting the client's behaviors, selecting
appropriate counselor strategies, and generating accurate and suitable
responses, as demonstrated in the scenario.
| 2,023
|
Computation and Language
| 41,229
|
A Computational Framework for Behavioral Assessment of LLM Therapists
|
The emergence of ChatGPT and other large language models (LLMs) has greatly
increased interest in utilizing LLMs as therapists to support individuals
struggling with mental health challenges. However, due to the lack of
systematic studies, our understanding of how LLM therapists behave, i.e., ways
in which they respond to clients, is significantly limited. Understanding their
behavior across a wide range of clients and situations is crucial to accurately
assess their capabilities and limitations in the high-risk setting of mental
health, where undesirable behaviors can lead to severe consequences. In this
paper, we propose BOLT, a novel computational framework to study the
conversational behavior of LLMs when employed as therapists. We develop an
in-context learning method to quantitatively measure the behavior of LLMs based
on 13 different psychotherapy techniques including reflections, questions,
solutions, normalizing, and psychoeducation. Subsequently, we compare the
behavior of LLM therapists against that of high- and low-quality human therapy,
and study how their behavior can be modulated to better reflect behaviors
observed in high-quality therapy. Our analysis of GPT and Llama-variants
reveals that these LLMs often exhibit behaviors more common in low-quality
therapy than in high-quality therapy, such as offering a higher
degree of problem-solving advice when clients share emotions, which is against
typical recommendations. At the same time, unlike low-quality therapy, LLMs
reflect significantly more upon clients' needs and strengths. Our analysis
framework suggests that despite the ability of LLMs to generate anecdotal
examples that appear similar to human therapists, LLM therapists are currently
not fully consistent with high-quality care, and thus require additional
research to ensure quality care.
| 2,024
|
Computation and Language
| 41,798
|
Response Generation for Cognitive Behavioral Therapy with Large Language
Models: Comparative Study with Socratic Questioning
|
Dialogue systems controlled by predefined or rule-based scenarios derived
from counseling techniques, such as cognitive behavioral therapy (CBT), play an
important role in mental health apps. Despite the need for responsible
responses, it is conceivable that using the newly emerging LLMs to generate
contextually relevant utterances will enhance these apps. In this study, we
construct dialogue modules based on a CBT scenario focused on conventional
Socratic questioning using two kinds of LLMs: a Transformer-based dialogue
model further trained with a social media empathetic counseling dataset,
provided by Osaka Prefecture (OsakaED), and GPT-4, a state-of-the-art LLM
created by OpenAI. By comparing systems that use LLM-generated responses with
those that do not, we investigate the impact of generated responses on
subjective evaluations such as mood change, cognitive change, and dialogue
quality (e.g., empathy). As a result, no notable improvements are observed when
using the OsakaED model. When using GPT-4, the amount of mood change, empathy,
and other dialogue qualities improve significantly. Results suggest that GPT-4
possesses a high counseling ability. However, they also indicate that even a
dialogue model trained with a human counseling dataset does not necessarily
yield better outcomes compared to scenario-based dialogues. While
presenting LLM-generated responses, including GPT-4, and having them interact
directly with users in real-life mental health care services may raise ethical
issues, it is still possible for human professionals to produce example
responses or response templates using LLMs in advance in systems that use
rules, scenarios, or example responses.
| 2,024
|
Computation and Language
| 42,538
|
Generation, Distillation and Evaluation of Motivational
Interviewing-Style Reflections with a Foundational Language Model
|
Large Foundational Language Models are capable of performing many tasks at a
high level but are difficult to deploy in many applications because of their
size and proprietary ownership. Many will be motivated to distill specific
capabilities of foundational models into smaller models that can be owned and
controlled. In the development of a therapeutic chatbot, we wish to distill a
capability known as reflective listening, in which a therapist produces
reflections of client speech. These reflections either restate what a client
has said, or connect what was said to a relevant observation, idea or guess
that encourages and guides the client to continue contemplation. In this paper,
we present a method for distilling the generation of reflections from a
Foundational Language Model (GPT-4) into smaller models. We first show that
GPT-4, using zero-shot prompting, can generate reflections at near 100% success
rate, superior to all previous methods. Using reflections generated by GPT-4,
we fine-tune different sizes of the GPT-2 family. The GPT-2-small model
achieves 83% success on a hold-out test set and the GPT-2 XL achieves 90%
success. We also show that GPT-4 can help in the labor-intensive task of
evaluating the quality of the distilled models, using it as a zero-shot
classifier. Using triple-human review as a guide, the classifier achieves a
Cohen's kappa of 0.66, a substantial inter-rater reliability figure.
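Cohen's kappa, the inter-rater reliability figure reported above, can be computed directly from two raters' label sequences. A minimal pure-Python sketch (not the authors' evaluation code; label sets and rater inputs are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the raters' label marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0.66, as reported, falls in the range conventionally described as "substantial" agreement.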
| 2,024
|
Computation and Language
| 42,699
|
Towards Sustainable Workplace Mental Health: A Novel Approach to Early
Intervention and Support
|
Employee well-being is a critical concern in the contemporary workplace, as
highlighted by the American Psychological Association's 2021 report, indicating
that 71% of employees experience stress or tension. This stress contributes
significantly to workplace attrition and absenteeism, with 61% of attrition and
16% of sick days attributed to poor mental health. A major challenge for
employers is that employees often remain unaware of their mental health issues
until they reach a crisis point, resulting in limited utilization of corporate
well-being benefits. This research addresses this challenge by presenting a
groundbreaking stress detection algorithm that provides real-time support
preemptively. Leveraging automated chatbot technology, the algorithm
objectively measures mental health levels by analyzing chat conversations,
offering personalized treatment suggestions in real-time based on linguistic
biomarkers. The study explores the feasibility of integrating these innovations
into practical learning applications within real-world contexts and introduces
a chatbot-style system integrated into the broader employee experience
platform. This platform, encompassing various features, aims to enhance overall
employee well-being, detect stress in real time, and proactively engage with
individuals to improve support effectiveness, demonstrating a 22% increase in
effectiveness when assistance is provided early. Overall, the study emphasizes
the importance of
fostering a supportive workplace environment for employees' mental health.
| 2,024
|
Computation and Language
| 42,738
|
The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional
Supporters for Queer Youth
|
Queer youth face increased mental health risks, such as depression, anxiety,
and suicidal ideation. Hindered by negative stigma, they often avoid seeking
help and rely on online resources, which may provide incompatible information.
Although access to a supportive environment and reliable information is
invaluable, many queer youth worldwide have no access to such support. However,
this could soon change due to the rapid adoption of Large Language Models
(LLMs) such as ChatGPT. This paper aims to comprehensively explore the
potential of LLMs to revolutionize emotional support for queer youth. To this end,
we conduct a qualitative and quantitative analysis of LLMs' interactions with
queer-related content. To evaluate response quality, we develop a novel
ten-question scale that is inspired by psychological standards and expert
input. We apply this scale to score several LLMs and human comments to posts
where queer youth seek advice and share experiences. We find that LLM responses
are supportive and inclusive, outscoring humans. However, they tend to be
generic, not empathetic enough, and lack personalization, resulting in
unreliable and potentially harmful advice. We discuss these challenges,
demonstrate that a dedicated prompt can improve the performance, and propose a
blueprint of an LLM-supporter that actively (but sensitively) seeks user
context to provide personalized, empathetic, and reliable responses. Our
annotated dataset is available for further research.
| 2,024
|
Computation and Language
| 43,472
|
Automatic Evaluation for Mental Health Counseling using LLMs
|
High-quality psychological counseling is crucial for mental health worldwide,
and timely evaluation is vital for ensuring its effectiveness. However,
obtaining professional evaluation for each counseling session is expensive and
challenging. Existing methods that rely on self-reports or third-party manual
reports to assess the quality of counseling suffer from subjective bias and are
time-consuming.
To address these challenges, this paper proposes an innovative and efficient
automatic approach using large language models (LLMs) to evaluate the working
alliance in counseling conversations. We collected a comprehensive counseling
dataset and conducted multiple third-party evaluations based on therapeutic
relationship theory. Our LLM-based evaluation, combined with our guidelines,
shows high agreement with human evaluations and provides valuable insights into
counseling scripts. This highlights the potential of LLMs as supervisory tools
for psychotherapists. By integrating LLMs into the evaluation process, our
approach offers a cost-effective and dependable means of assessing counseling
quality, enhancing overall effectiveness.
| 2,024
|
Computation and Language
| 43,487
|
Can Large Language Models be Used to Provide Psychological Counselling?
An Analysis of GPT-4-Generated Responses Using Role-play Dialogues
|
Mental health care poses an increasingly serious challenge to modern
societies. In this context, there has been a surge in research that utilizes
information technologies to address mental health problems, including those
aiming to develop counseling dialogue systems. However, there is a need for
more evaluations of the performance of counseling dialogue systems that use
large language models. For this study, we collected counseling dialogue data
via role-playing scenarios involving expert counselors, and the utterances were
annotated with the intentions of the counselors. To determine the feasibility
of a dialogue system in real-world counseling scenarios, third-party counselors
evaluated the appropriateness of responses from human counselors and those
generated by GPT-4 in identical contexts in role-play dialogue data. Analysis
of the evaluation results showed that the responses generated by GPT-4 were
competitive with those of human counselors.
| 2,024
|
Computation and Language
| 43,569
|
Towards Understanding Counseling Conversations: Domain Knowledge and
Large Language Models
|
Understanding the dynamics of counseling conversations is an important task,
yet it remains a challenging NLP problem despite recent advances in
Transformer-based pre-trained language models. This paper proposes a systematic
approach to examine the efficacy of domain knowledge and large language models
(LLMs) in better representing conversations between a crisis counselor and a
help seeker. We empirically show that state-of-the-art language models such as
Transformer-based models and GPT models fail to predict the conversation
outcome. To provide richer context to conversations, we incorporate
human-annotated domain knowledge and LLM-generated features; simple integration
of domain knowledge and LLM features improves the model performance by
approximately 15%. We argue that both domain knowledge and LLM-generated
features can be exploited to better characterize counseling conversations when
they are used as an additional context to conversations.
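The abstract describes the integration of domain knowledge and LLM-generated features as "simple", without specifying the architecture. One plausible reading is feature-level fusion: concatenating the conversation representation with the annotated and LLM-derived features before a downstream outcome classifier. A hypothetical sketch (all field names are illustrative, not the authors' code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConversationFeatures:
    text_embedding: List[float]   # conversation representation from a pre-trained LM
    domain_labels: List[float]    # human-annotated domain-knowledge features
    llm_features: List[float]     # LLM-generated features

    def fused(self) -> List[float]:
        # Simple feature-level fusion: concatenate all sources into one
        # vector that feeds the conversation-outcome classifier.
        return self.text_embedding + self.domain_labels + self.llm_features
```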
| 2,024
|
Computation and Language
| 43,728
|
COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies
with Language Modeling
|
The therapeutic working alliance is a critical factor in predicting the
success of psychotherapy treatment. Traditionally, working alliance assessment
relies on questionnaires completed by both therapists and patients. In this
paper, we present COMPASS, a novel framework to directly infer the therapeutic
working alliance from the natural language used in psychotherapy sessions. Our
approach utilizes advanced large language models to analyze transcripts of
psychotherapy sessions and compare them with distributed representations of
statements in the working alliance inventory. Analyzing a dataset of over 950
sessions covering diverse psychiatric conditions, we demonstrate the
effectiveness of our method in microscopically mapping patient-therapist
alignment trajectories, providing interpretability for clinical psychiatry, and
identifying emerging patterns related to the condition being treated. By
employing various neural topic modeling techniques in combination with
generative language prompting, we analyze the topical characteristics of
different psychiatric conditions and incorporate temporal modeling to capture
the evolution of topics at a turn-level resolution. This combined framework
enhances the understanding of therapeutic interactions, enabling timely
feedback for therapists regarding conversation quality and providing
interpretable insights to improve the effectiveness of psychotherapy.
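Comparing session statements with distributed representations of working alliance inventory items, as COMPASS does, typically reduces to vector similarity between embeddings. A minimal sketch of that comparison step (the embeddings are assumed to come from a sentence encoder; this is not the authors' implementation):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def alliance_scores(turn_embedding, inventory_embeddings):
    # Score one session turn against every working alliance inventory
    # statement; higher similarity suggests stronger alignment with
    # that inventory item.
    return [cosine_similarity(turn_embedding, inv) for inv in inventory_embeddings]
```

Tracking these per-item scores turn by turn yields the turn-level alignment trajectories described above.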
| 2,024
|
Computation and Language
| 43,786
|
Socratic Reasoning Improves Positive Text Rewriting
|
Reframing a negative into a positive thought is at the crux of several
cognitive approaches to mental health and psychotherapy that could be made more
accessible by large language model-based solutions. Such reframing is typically
non-trivial and requires multiple rationalization steps to uncover the
underlying issue of a negative thought and transform it to be more positive.
However, this rationalization process is currently neglected by both datasets
and models which reframe thoughts in one step. In this work, we address this
gap by augmenting open-source datasets for positive text rewriting with
synthetically-generated Socratic rationales using a novel framework called
\textsc{SocraticReframe}. \textsc{SocraticReframe} uses a sequence of
question-answer pairs to rationalize the thought rewriting process. We show
that such Socratic rationales significantly improve positive text rewriting for
different open-source LLMs according to both automatic and human evaluations
guided by criteria from psychotherapy research.
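One plausible way to augment a rewriting example with a Socratic rationale, as \textsc{SocraticReframe} does, is to serialize the question-answer pairs between the negative thought and its positive reframe. The exact prompt format is not given in the abstract, so this sketch is hypothetical:

```python
def build_training_example(negative_thought, qa_pairs, positive_rewrite):
    # Hypothetical serialization: interleave the Socratic rationale
    # (question-answer pairs) between the negative thought and its
    # positive reframe, so the model learns to rationalize before rewriting.
    rationale = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    return (
        f"Negative thought: {negative_thought}\n"
        f"{rationale}\n"
        f"Positive reframe: {positive_rewrite}"
    )
```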
| 2,024
|
Computation and Language
| 44,403
|