Title: From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations

URL Source: https://arxiv.org/html/2502.11451

Published Time: Tue, 16 Sep 2025 00:47:57 GMT

Markdown Content:
Shenghan Wu 1 Yimo Zhu 1 Wynne Hsu 1 Mong-Li Lee 1 Yang Deng 2

1 National University of Singapore 2 Singapore Management University

shenghan@u.nus.edu

###### Abstract

The rapid advancement of Large Language Models (LLMs) has revolutionized the generation of emotional support conversations (ESC), offering scalable solutions with reduced costs and enhanced data privacy. This paper explores the role of personas in the creation of ESC by LLMs. Our research utilizes established psychological frameworks to measure and infuse persona traits into LLMs, which then generate dialogues in the emotional support scenario. We conduct extensive evaluations to understand the stability of persona traits in dialogues, examining shifts in traits post-generation and their impact on dialogue quality and strategy distribution. Experimental results reveal several notable findings: 1) LLMs can infer core persona traits, 2) subtle shifts in emotionality and extraversion occur, influencing the dialogue dynamics, and 3) the application of persona traits modifies the distribution of emotional support strategies, enhancing the relevance and empathetic quality of the responses. These findings highlight the potential of persona-driven LLMs in crafting more personalized, empathetic, and effective emotional support dialogues, which has significant implications for the future design of AI-driven emotional support systems.

1 Introduction
--------------

In emotional support conversations (ESC), the supporter aims to help the seeker reduce stress, overcome emotional issues, and promote mental well-being. Traditionally, ESC corpora have been developed through skilled crowdsourcing Rashkin et al. ([2019](https://arxiv.org/html/2502.11451v2#bib.bib42)); Liu et al. ([2021](https://arxiv.org/html/2502.11451v2#bib.bib31)), transcription of therapist sessions Liu et al. ([2023](https://arxiv.org/html/2502.11451v2#bib.bib30)); Shen et al. ([2020](https://arxiv.org/html/2502.11451v2#bib.bib46)), or by compiling emotional question-answer pairs from online platforms Sharma et al. ([2020a](https://arxiv.org/html/2502.11451v2#bib.bib44)); Sun et al. ([2021](https://arxiv.org/html/2502.11451v2#bib.bib47)); Garg et al. ([2022](https://arxiv.org/html/2502.11451v2#bib.bib19)); Lahnala et al. ([2021](https://arxiv.org/html/2502.11451v2#bib.bib28)). However, beyond the high costs, recent research Deng et al. ([2024a](https://arxiv.org/html/2502.11451v2#bib.bib13)) highlights several limitations of these methods, including privacy concerns, variability in data quality, and fabricated user needs created by crowdworkers. With the advent of large language models (LLMs), their powerful generalization abilities enable high-quality data annotation and generation based on specific instructions Tan et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib48)); Ding et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib16)). Consequently, a growing number of recent studies Zheng et al. ([2023](https://arxiv.org/html/2502.11451v2#bib.bib57), [2024](https://arxiv.org/html/2502.11451v2#bib.bib58)); Qiu and Lan ([2024](https://arxiv.org/html/2502.11451v2#bib.bib41)); Wu et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib54)) investigate the use of LLMs to generate large-scale emotional support dialogue datasets across various scenarios via role-playing at lower costs. These efforts have significantly expanded the available ESC corpora.

![Image 1: Refer to caption](https://arxiv.org/html/2502.11451v2/x1.png)

Figure 1: Overview of evaluating the impact of personas on LLM-synthesized emotional support dialogues.

Despite the significant strides of LLMs in synthesizing ESC, a key issue in generative AI annotation is the Lack of Human Intuition Deng et al. ([2024a](https://arxiv.org/html/2502.11451v2#bib.bib13)). Recent studies have demonstrated that effective emotional support requires careful consideration of individual differences, including personality traits, emotional states, and contextual factors (Ma et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib32); Ait Baha et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib1); Hernandez et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib21)). This understanding has led to increased research attention on the role of persona in emotional support dialogues (Zhao et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib56); Cheng et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib6)), particularly as AI systems become more prevalent in providing such support. Researchers have emphasized the importance of incorporating psychological perspectives in developing empathetic AI assistants. Huang et al. ([2023b](https://arxiv.org/html/2502.11451v2#bib.bib23)) argue that psychological analysis of LLMs is crucial for creating more human-like and engaging interactions. While recent work has made progress in measuring LLMs’ personality characteristics using established psychological inventories (Frisch and Giulianelli, [2024](https://arxiv.org/html/2502.11451v2#bib.bib18); Safdari et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib43)), there remains a gap in understanding how these persona-related aspects influence the generation of emotional support dialogues.

In this work, we aim to address this gap by investigating the relationship between LLM-generated emotional support dialogues and persona traits through psychological measurement. Specifically, we propose an LLM-based simulation framework to answer the following critical research questions:

* RQ1: Can LLMs infer stable traits from personas in the emotional support scenario?
* RQ2: Can dialogues generated by LLMs retain the original persona traits?
* RQ3: How will the injected persona affect the LLM-simulated emotional support dialogues?

As illustrated in Figure [1](https://arxiv.org/html/2502.11451v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), we first extract personas from the existing datasets. Then we assess the capacity of LLMs to deduce stable traits from these personas. We further conduct a comparison between the personas derived from synthesized dialogues and the original personas to evaluate their stability during dialogue synthesis. Lastly, we investigate how these personas influence emotional support strategies. As a result, we offer insights into the potential application of persona-driven dialogue synthesis in emotional support conversations. The key findings are summarized as follows:

* LLMs can infer stable traits from personas, such as personalities and communication styles, in emotional support scenarios. We utilize LLMs to infer traits from personas, revealing a strong correlation between these traits.
* LLM-simulated seekers tend to exhibit higher emotionality and lower extraversion compared to their original personas. After generating emotional support dialogues based on personas and extracting personas from these dialogues, we observe slight shifts in persona traits.
* Infusing persona traits into the generation of emotional support dialogues alters the distribution of strategies. The LLM-simulated supporter tends to focus more on deeply understanding the seeker’s problems and gently offers reassurance and encouragement.

2 Related Works
---------------

### 2.1 Psychological Inventories

Personality inventories are widely used in psychology to understand individuals and to predict distinctive patterns of interpersonal interaction across contexts. These assessments are often structured, theory-driven, and standardized. Prominent instruments include the Myers-Briggs Type Indicator (MBTI) (Briggs, [1976](https://arxiv.org/html/2502.11451v2#bib.bib3)), the NEO-PI-R (Costa and McCrae, [2008](https://arxiv.org/html/2502.11451v2#bib.bib8)), and the Comrey Personality Scales (CPS) (Comrey, [1970](https://arxiv.org/html/2502.11451v2#bib.bib7)). Among these, the HEXACO model (Ashton and Lee, [2009](https://arxiv.org/html/2502.11451v2#bib.bib2)) is particularly notable, offering a framework that encompasses six factors: Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. Norton ([1978](https://arxiv.org/html/2502.11451v2#bib.bib35)) introduced the foundational construct of communicator style, and De Vries et al. ([2013](https://arxiv.org/html/2502.11451v2#bib.bib10)) further developed communication style into a six-dimensional model. Extensive research (Capraro and Capraro, [2002](https://arxiv.org/html/2502.11451v2#bib.bib4); Costa Jr and McCrae, [1992](https://arxiv.org/html/2502.11451v2#bib.bib9); Lee and Ashton, [2004](https://arxiv.org/html/2502.11451v2#bib.bib29)) has demonstrated the reliability and validity of these inventories.

### 2.2 Emotional Support Dialogues

Early efforts on emotional support dialogues focused on collecting emotional question-answer data from online platforms (Medeiros and Bosse, [2018](https://arxiv.org/html/2502.11451v2#bib.bib33); Sharma et al., [2020b](https://arxiv.org/html/2502.11451v2#bib.bib45); Turcan and McKeown, [2019](https://arxiv.org/html/2502.11451v2#bib.bib51); Garg et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib19)). These datasets laid the groundwork for understanding user emotions, but were limited to single-turn interactions. The EmpatheticDialogues dataset (Rashkin et al., [2019](https://arxiv.org/html/2502.11451v2#bib.bib42)) addressed this by introducing multi-turn dialogues, crowd-sourced to simulate diverse empathetic interactions. ESConv (Liu et al., [2021](https://arxiv.org/html/2502.11451v2#bib.bib31)) further advanced the field by introducing emotional support strategies drawn from psychological theories, enabling chatbots to use these strategies for more empathetic and contextually appropriate responses. Subsequent studies proposed using graph networks to capture global emotion causes and user intentions (Peng et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib37); Deng et al., [2023b](https://arxiv.org/html/2502.11451v2#bib.bib15)), combining multiple emotional support strategies to enhance empathy (Tu et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib50)), developing proactive dialogue systems that lead the conversation towards positive emotions Deng et al. ([2023a](https://arxiv.org/html/2502.11451v2#bib.bib11), [2024b](https://arxiv.org/html/2502.11451v2#bib.bib14), [2025](https://arxiv.org/html/2502.11451v2#bib.bib12)), and implementing emotional support strategies and scenarios with LLMs to create the ExTES dataset (Zheng et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib58)).

### 2.3 Persona-Driven Emotional Support

Recent advances have integrated personas into emotional support dialogues to enhance personalization and diversity. The ESC dataset (Zhang et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib55)) introduced personas into the dialogue generation process. Zhao et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib56)) proposed a framework to extract personas from existing datasets for evaluation. Additionally, personas have been incorporated into chatbots to generate personalized responses (Tu et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib49); Ait Baha et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib1); Ma et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib32)). These developments inspire our analysis of the relationship between personas, emotional support strategies, and dialogues, focusing on the extraction and use of personas in ESC.

![Image 2: Refer to caption](https://arxiv.org/html/2502.11451v2/x2.png)

Figure 2: An example of a persona card.

3 Dataset Collection
--------------------

In order to study the relationship between LLM-generated emotional support conversations (ESC) and persona traits, we first need to collect a set of non-synthetic ESC data with user personas as reference. Specifically, we select three existing emotional support datasets: ESConv (Liu et al., [2021](https://arxiv.org/html/2502.11451v2#bib.bib31)), CAMS (Garg et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib19)), and Dreaddit (Turcan and McKeown, [2019](https://arxiv.org/html/2502.11451v2#bib.bib51)). ESConv is a multi-turn emotional support dialogue dataset, while CAMS and Dreaddit are derived from Reddit posts discussing mental health issues.

Table 1: Statistics of the extracted persona cards.

To obtain the user persona traits from these ESC data, we extract age, gender, occupation, socio-demographic description, and problem from the datasets using gpt-4o-mini. The prompt utilized for extracting these basic personas is provided in Appendix [A](https://arxiv.org/html/2502.11451v2#A1 "Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). Since the supporters usually focus on the seeker’s emotions and their main task is to provide emotional support, the supporters’ responses in the datasets lack personal information. Due to this limited information about supporters, we only extract seeker personas from these datasets. After extraction, we prompt LLMs (see Appendix [A](https://arxiv.org/html/2502.11451v2#A1 "Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations")) to filter the personas to ensure they include the individual’s emotions and the events they are experiencing, along with a clear socio-demographic background that provides a comprehensive sense of their identity. Ultimately, we obtain 1,155 personas from ESConv, 1,140 personas from CAMS, and 730 personas from Dreaddit. An example of a basic persona card is illustrated in Figure [2](https://arxiv.org/html/2502.11451v2#S2.F2 "Figure 2 ‣ 2.3 Persona-Driven Emotional Support ‣ 2 Related Works ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). The detailed statistics of the datasets are summarized in Table [1](https://arxiv.org/html/2502.11451v2#S3.T1 "Table 1 ‣ 3 Dataset Collection ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations").
![Image 3: Refer to caption](https://arxiv.org/html/2502.11451v2/x3.png)

Figure 3: Diagram of the process for the measurement of persona traits.

4 Measurement of Persona Traits (RQ1)
-------------------------------------

Personality and communication style play crucial roles in emotional support conversations. Personality influences how a seeker may emotionally respond to various scenarios (Hughes et al., [2020](https://arxiv.org/html/2502.11451v2#bib.bib24); Komulainen et al., [2014](https://arxiv.org/html/2502.11451v2#bib.bib27)), while communication style reflects the way someone conveys their thoughts and emotions during the interaction (van Pinxteren et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib52)). These factors significantly impact the dynamics and outcomes of emotional support. Previous studies have demonstrated the effectiveness of personas in guiding the responses of emotional support systems (Cheng et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib6); Han et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib20)). In this section, we investigate whether the personality and communication style inferred by LLMs from persona cards are correlated. In other words, we examine whether LLMs can infer stable traits from persona cards in emotional support conversations.

Table 2: Correlations between CSI and HEXACO from ESConv measured by gpt-4o-mini. P-values for all metrics are less than 0.01.

Table 3: Correlations between CSI and HEXACO from ESConv measured by Claude-3.5-haiku.

Table 4: Correlations between CSI and HEXACO from ESConv measured by LLaMA-3.1-8B-Instruct.

### 4.1 Experimental Setups

Previous studies have shown that LLMs are capable of capturing a certain level of human traits from dialogues (Jiang et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib26); Porvatov et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib40)). To further investigate whether LLMs can infer stable traits, we utilize the HEXACO model (Lee and Ashton, [2004](https://arxiv.org/html/2502.11451v2#bib.bib29)) to assess personality and the Communication Styles Inventory (CSI) (De Vries et al., [2013](https://arxiv.org/html/2502.11451v2#bib.bib10)) to evaluate communication styles. The HEXACO model is a personality framework consisting of six dimensions: Honesty-Humility (H), Emotionality (E), Extraversion (X), Agreeableness (A), Conscientiousness (C), and Openness to Experience (O). For our study, we use the HEXACO-60 inventory (Ashton and Lee, [2009](https://arxiv.org/html/2502.11451v2#bib.bib2)) to assess the personalities represented in the persona cards. The CSI identifies six domain-level communicative behavior scales: Expressiveness, Preciseness, Verbal Aggressiveness, Questioningness, Emotionality, and Impression Manipulativeness. Each dimension of the HEXACO personality model has the strongest correlation with a specific communication style dimension in the CSI. These correlations are as follows (left: HEXACO, right: CSI) (De Vries et al., [2013](https://arxiv.org/html/2502.11451v2#bib.bib10)):

* Extraversion <-> Expressiveness
* Conscientiousness <-> Preciseness
* Agreeableness <-> Verbal Aggressiveness
* Openness to Experience <-> Questioningness
* Emotionality <-> Emotionality
* Honesty-Humility <-> Impression Manipulativeness

If LLMs can infer stable personality and communication styles from emotional support dialogues, the strongest correlations between the personality scores and communication style scores obtained from completing the inventories will align with those shown above.

Following previous research (Ji et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib25)), we prompt the LLM to generate descriptions for each dimension based on the extracted socio-demographic information, incorporating these descriptions into persona cards. Then, we prompt LLMs to predict answers to the HEXACO and CSI inventories using the persona card. These prompts are presented in Appendix [A](https://arxiv.org/html/2502.11451v2#A1 "Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). We run our experiments on an open-source LLM (LLaMA-3.1-8B-Instruct, i.e., meta-llama/Llama-3.1-8B-Instruct) and closed-source LLMs (GPT-4o-mini, i.e., gpt-4o-mini-2024-07-18, and Claude-3.5-Haiku, i.e., claude-3-5-haiku-20241022), setting the temperature to 0 to obtain stable results.
102
+
103
+ ### 4.2 Results and Discussions
104
+
105
+ Based on the responses from these inventories, we calculate the HEXACO and CSI dimension scores for each dataset. To evaluate whether LLMs can infer stable traits from persona cards in an emotional support context, we then compute the correlations between the HEXACO and CSI dimensions within each dataset. Each scale provides a score corresponding to each response, along with the dimension to which each question belongs. Based on the LLM’s responses, we calculate the scores for each dimension. Finally, we use Pearson correlation to analyze the relationships between each dimension of HEXACO and CSI. Tables [4](https://arxiv.org/html/2502.11451v2#S4.T4 "Table 4 ‣ 4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), [4](https://arxiv.org/html/2502.11451v2#S4.T4 "Table 4 ‣ 4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), and [4](https://arxiv.org/html/2502.11451v2#S4.T4 "Table 4 ‣ 4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") present the correlations between HEXACO and CSI dimensions in the ESConv dataset (Liu et al., [2021](https://arxiv.org/html/2502.11451v2#bib.bib31)) as measured by three different LLMs. 
Similarly, Tables [12](https://arxiv.org/html/2502.11451v2#A3.T12 "Table 12 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), [12](https://arxiv.org/html/2502.11451v2#A3.T12 "Table 12 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), and [12](https://arxiv.org/html/2502.11451v2#A3.T12 "Table 12 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") show the correlations in the CAMS dataset (Garg et al., [2022](https://arxiv.org/html/2502.11451v2#bib.bib19)), while Tables [15](https://arxiv.org/html/2502.11451v2#A3.T15 "Table 15 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), [15](https://arxiv.org/html/2502.11451v2#A3.T15 "Table 15 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), and [15](https://arxiv.org/html/2502.11451v2#A3.T15 "Table 15 ‣ Appendix C Correlations on Other Datasets ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") provide the correlations observed in the Dreaddit dataset (Turcan and McKeown, [2019](https://arxiv.org/html/2502.11451v2#bib.bib51)).
106
+
107
+ Experimental results demonstrate that on all three test datasets, GPT-4o-mini exhibits the strongest correlations between the six dimensions of HEXACO and CSI, aligning well with findings from the established psychological theory. For instance, extraversion from HEXACO model shows the strongest correlation with expressiveness from CSI model. This indicates that GPT-4o-mini can reliably infer persona traits relevant to emotional support dialogues from persona cards. However, we observed some discrepancies in the correlation analyses for LLaMA-3.1-8B-Instruct and Claude-3.5-Haiku. Specifically, LLaMA-3.1-8B-Instruct incorrectly associates verbal aggressiveness with extraversion and conscientiousness, suggesting that higher verbal aggressiveness implies greater extraversion in seekers. Meanwhile, Claude-3.5-Haiku incorrectly links questioningness with conscientiousness, implying that seekers who ask more questions are more extraverted. These inconsistencies highlight potential limitations in the ability of these models to interpret certain persona traits accurately. Overall, our findings suggests that LLMs are capable of inferring stable persona traits from personas in emotional support scenarios, though some inconsistencies exist.
108
+
109
+ 5 Persona Consistency in LLM-simulated Emotional Support Dialogues (RQ2)
110
+ ------------------------------------------------------------------------
111
+
112
+ After demonstrating that LLMs can reliably infer stable traits from personas, we further investigate whether these persona traits remain consistent during the LLM-based dialogue generation process.
113
+
114
+ ![Image 4: Refer to caption](https://arxiv.org/html/2502.11451v2/x4.png)
115
+
116
+ Figure 4: Diagram of the process for studying the persona consistency in LLM-simulated ESC.
117
+
118
+ ![Image 5: Refer to caption](https://arxiv.org/html/2502.11451v2/x5.png)
119
+
120
+ Figure 5: Comparison of HEXACO scores between the original persona and the one extracted from the dialogue generated by gpt-4o-mini.
121
+
122
+ ![Image 6: Refer to caption](https://arxiv.org/html/2502.11451v2/x6.png)
123
+
124
+ Figure 6: Comparison of HEXACO scores between the original persona and the one extracted from the dialogue generated by claude-3.5-haiku.
125
+
126
+ ![Image 7: Refer to caption](https://arxiv.org/html/2502.11451v2/x7.png)
127
+
128
+ Figure 7: Comparison of HEXACO scores between the original persona and the one extracted from the dialogue generated by LLaMA-3.1-8B-Instruct.
129
+
130
+ ### 5.1 Experimental Setups
131
+
132
+ To evaluate whether LLMs maintain consistent persona traits after dialogue generation, we conducted an extensive analysis using 1,000 randomly selected personas from PersonaHub (Chan et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib5)), ensuring a diverse range of characteristics. These personas initially consist of simple descriptions, such as "A fearless and highly trained athlete who can perform complex and dangerous physical sequences", that we systematically enhanced through LLM-based expansion. The enhancement process involved adding socio-demographic details (age, gender, occupation) and specific trait-indicative statements aligned with HEXACO and CSI dimensions. We then quantified these enhanced personas by generating HEXACO and CSI dimension scores using the methodology described in Section [4.1](https://arxiv.org/html/2502.11451v2#S4.SS1 "4.1 Experimental Setups ‣ 4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). As LLMs can be effectively shaped to emulate human-beings (Frisch and Giulianelli, [2024](https://arxiv.org/html/2502.11451v2#bib.bib18); Wang et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib53)), we use these enriched personas to generate emotional support dialogues where each persona acted as a seeker in contextually relevant scenarios. For instance, an athlete discussing an injury-related emotional challenge. Following dialogue generation, we applied the extraction method outlined in Section [3](https://arxiv.org/html/2502.11451v2#S3 "3 Dataset Collection ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") to derive persona characteristics from the generated conversations. We then calculated HEXACO and CSI scores from these extracted persona. 
By comparing these extracted scores with the original scores assigned to the input personas, we could assess the consistency of trait representation after the dialogue generation process. The complete set of prompts used for dialogue generation and trait extraction is provided in Appendix [B](https://arxiv.org/html/2502.11451v2#A2 "Appendix B Prompts for Measuring the Stability of Inferring Persona Traits ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations").
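The consistency comparison reduces to a per-dimension shift between two sets of scores. A minimal sketch with invented numbers (the real study compares scores for 1,000 PersonaHub personas):

```python
from statistics import mean

# Toy HEXACO scores for two personas: original vs. re-extracted from the
# generated dialogue. Dimensions follow HEXACO: H, E, X, A, C, O.
original = [
    {"H": 3.5, "E": 2.8, "X": 3.9, "A": 3.2, "C": 3.6, "O": 3.4},
    {"H": 3.1, "E": 3.0, "X": 3.5, "A": 3.4, "C": 3.3, "O": 3.8},
]
extracted = [
    {"H": 3.4, "E": 3.6, "X": 3.2, "A": 3.2, "C": 3.5, "O": 3.3},
    {"H": 3.2, "E": 3.7, "X": 2.9, "A": 3.5, "C": 3.2, "O": 3.7},
]

def mean_shift(dim):
    """Average (extracted - original) for one dimension; positive means the
    trait increased after dialogue generation."""
    return mean(e[dim] - o[dim] for o, e in zip(original, extracted))

shifts = {dim: round(mean_shift(dim), 2) for dim in "HEXACO"}
print(shifts)
# The toy numbers are chosen to mirror the pattern reported in the paper:
# Emotionality (E) shifts up, Extraversion (X) shifts down, and the other
# four dimensions stay close to zero.
```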
![Image 8: Refer to caption](https://arxiv.org/html/2502.11451v2/x8.png)

Figure 8: Distribution of personality scores, reduced to 2D, obtained from dialogues generated with and without persona injection.

### 5.2 Results and Discussions

The comparisons of persona trait scores are shown in Figures 5, 6, and 7. The results show that the original personas and the personas extracted from the generated dialogues share similar distributions across four personality traits: Honesty-Humility, Agreeableness, Conscientiousness, and Openness to Experience. This indicates that the personas maintain consistent traits in the synthetic emotional support dialogues. However, we notice that the extracted personas tend to have higher Emotionality and lower Extraversion compared to the original personas. We believe this may be because the seeker in the emotional support dialogues is dealing with emotional issues, making them more emotional and less outgoing. A similar pattern is observed in the comparison of CSI scores (see the corresponding figures in Appendix E). This indicates that the persona traits generally remain consistent in the synthetic emotional support dialogues, with only slight shifts.

![Image 9: Refer to caption](https://arxiv.org/html/2502.11451v2/x9.png)

Figure 9: Diagram of the process for studying the impact of persona on LLM-simulated ESC.

### 5.3 Ablation Study

Prior research (Huang et al., [2023a](https://arxiv.org/html/2502.11451v2#bib.bib22); Pan and Zeng, [2023](https://arxiv.org/html/2502.11451v2#bib.bib36); Frisch and Giulianelli, [2024](https://arxiv.org/html/2502.11451v2#bib.bib18)) suggests that LLMs possess inherent personality traits. To investigate how these intrinsic traits might influence emotional support dialogue generation, we conducted a comparative analysis between dialogues generated with and without predefined personas. We extracted the implicit personas manifested in the generated conversations and calculated their corresponding personality scores. In Figure [8](https://arxiv.org/html/2502.11451v2#S5.F8 "Figure 8 ‣ 5.1 Experimental Setups ‣ 5 Persona Consistency in LLM-simulated Emotional Support Dialogues (RQ2) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), we used PCA to project these scores into a 2D space for visualization. The resulting distribution reveals a marked difference between the two conditions: dialogues generated without persona injection exhibit a more concentrated distribution, whereas those with predefined personas cover a broader range of personality traits. This contrast suggests that externally provided personas influence the personality traits manifested in dialogue generation, and that persona injection can guide and shape the generation process.
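The PCA projection behind this ablation can be sketched as follows, assuming NumPy is available. The score vectors here are synthetic stand-ins for the extracted 6-D HEXACO scores, generated with deliberately different spreads to mimic the two conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 6-D HEXACO score vectors: the persona-injected condition is given a
# broader spread, the no-persona condition a more concentrated one.
with_persona = rng.normal(3.0, 0.8, size=(50, 6))
without_persona = rng.normal(3.0, 0.2, size=(50, 6))

def pca_2d(points):
    """Center the data and project onto the top-2 principal components."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top2 = eigvecs[:, -2:][:, ::-1]          # two largest components first
    return centered @ top2

all_points = np.vstack([with_persona, without_persona])
proj = pca_2d(all_points)
p_with, p_without = proj[:50], proj[50:]

# The persona-injected condition should cover a wider area in 2-D:
spread_with = p_with.std(axis=0).sum()
spread_without = p_without.std(axis=0).sum()
print(spread_with > spread_without)
```

In the paper's setting, the inputs would be the personality scores extracted from the two groups of generated dialogues rather than sampled Gaussians.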
149
+
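The PCA comparison in this ablation can be reproduced in outline as follows. This is a minimal NumPy sketch under our own assumptions (SVD-based PCA on the centered score matrix, total projected variance as a spread proxy), not the authors' code.

```python
import numpy as np

def pca_2d(scores: np.ndarray) -> np.ndarray:
    """Project an (n_personas, n_traits) score matrix onto its first two
    principal components via SVD of the centered data."""
    centered = scores - scores.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

def spread(points_2d: np.ndarray) -> float:
    """Total variance of the projected points -- a rough proxy for how
    widely the manifested personality traits are dispersed."""
    return float(points_2d.var(axis=0).sum())
```

Under this sketch, the observation above corresponds to `spread` being noticeably smaller for the no-persona condition than for the persona-injected one.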
6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3)
----------------------------------------------------------------------

Many efforts have been made to incorporate emotional support strategies into emotional support dialogues. Zheng et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib58)) use LLMs to introduce emotional support strategies in synthetic dialogues. Zhang et al. ([2024](https://arxiv.org/html/2502.11451v2#bib.bib55)) further incorporate the concept of persona into these dialogues and analyze the usage of strategies. This raises an important question: does adding a persona affect the way emotional support strategies (definitions of each strategy are given in Appendix [G](https://arxiv.org/html/2502.11451v2#A7 "Appendix G Definitions of Emotional Support Strategies ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations")) are distributed in dialogues? To investigate this, we use ESConv dialogues as history and instruct LLMs to predict how a new conversation might unfold after some time. Each future dialogue is generated in two versions: with and without persona traits (personality and communication style scores). Since ESConv provides real-world context and continuations generated by LLMs can preserve the semantic distribution of dialogues (Fan et al., [2025](https://arxiv.org/html/2502.11451v2#bib.bib17)), our approach ensures diversity and realism in emotional support scenarios.

Table 5: Strategy distribution on two different groups of synthesized dialogues by gpt-4o-mini, one generated with persona traits (PT), the other without.

Table 6: Strategy distribution on two different groups of synthesized dialogues by claude-3.5-haiku.

Table 7: Strategy distribution on two different groups of synthesized dialogues by LLaMA-3.1-8B-Instruct.

Table 8: Strategy distribution on two different groups of synthesized dialogues, one generated with HEXACO scores, the other with CSI scores.

### 6.1 Analysis of Emotional Support Strategies

As demonstrated in Tables 5 and 6, the distribution of emotional support strategies differs significantly between dialogues generated with and without persona traits for both gpt-4o-mini and claude-3.5-haiku. While Table [7](https://arxiv.org/html/2502.11451v2#S6.T7 "Table 7 ‣ 6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows that llama-3.1-8B-instruct exhibits less pronounced differences in strategy distributions between the persona and non-persona conditions, the trends remain consistent with those observed in the other models. The analysis reveals that supporters in persona-enhanced dialogues tend to validate seekers' emotions through questioning and to provide more effective emotional comfort, as evidenced by significantly higher rates of questioning, affirmation, and reassurance strategies compared to non-persona dialogues. Conversely, supporters in dialogues without persona traits emphasize problem explanation and rely more heavily on self-disclosure, even though prior research (Meng and Dai, [2021](https://arxiv.org/html/2502.11451v2#bib.bib34)) indicates that self-disclosing chatbots perform poorly when emotional support is absent and clear boundaries are not established.
The distributions of emotional support strategies generated using either personality scores or communication style scores, shown in Table [8](https://arxiv.org/html/2502.11451v2#S6.T8 "Table 8 ‣ 6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), are remarkably similar. This alignment can be attributed to the strong correlation between these trait measures, as discussed in Section [4](https://arxiv.org/html/2502.11451v2#S4 "4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). These results provide compelling evidence that persona traits significantly influence the distribution and application of emotional support strategies in dialogues.

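The distributional comparison above can be made concrete with a small counting routine. This sketch assumes supporter turns already carry strategy labels (the eight ESConv strategies of Liu et al., 2021) and is not the paper's analysis code.

```python
from collections import Counter

# The eight support strategies annotated in ESConv (Liu et al., 2021).
ES_STRATEGIES = [
    "Question", "Restatement or Paraphrasing", "Reflection of Feelings",
    "Self-disclosure", "Affirmation and Reassurance",
    "Providing Suggestions", "Information", "Others",
]

def strategy_distribution(dialogues):
    """Fraction of supporter turns using each strategy across a corpus,
    given per-dialogue lists of strategy labels."""
    counts = Counter(label for dialogue in dialogues for label in dialogue)
    total = sum(counts.values())
    return {s: counts.get(s, 0) / total for s in ES_STRATEGIES}
```

Comparing the resulting dictionaries for the with-persona and without-persona corpora yields distribution tables in the style of Tables 5-7.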
Table 9: Human evaluation comparing dialogues generated with and without personas. Win indicates that the dialogue generated with persona outperforms the one generated without persona on the given indicator.

![Image 10: Refer to caption](https://arxiv.org/html/2502.11451v2/x10.png)

Figure 10: Case study. Blue indicates that the supporter directly provides emotional support. Green signifies that the supporter offers direct suggestions. Yellow means that the supporter provides suggestions through rhetorical questions or guides the seeker to reflect.

### 6.2 Human Evaluation

To intuitively reveal the impact of persona on LLM-simulated emotional support conversations, we conduct a human evaluation comparing dialogues generated with and without persona traits. Annotators evaluate the dialogues based on the following metrics: (1) Suggestion evaluates how effectively the supporter provides helpful advice. (2) Consistency assesses whether participants consistently maintain their roles and exhibit coherent behavior. (3) Comforting examines the supporter's ability to provide emotional support to the seeker. (4) Identification determines which supporter delves deeper into the seeker's situation and is more effective in identifying issues. (5) Overall assesses the overall performance of the two groups of dialogues. We recruited 10 native English speakers from undergraduate students across various disciplines, who had previously completed various annotation tasks and were experienced in the field. Each evaluator was compensated at a rate of approximately 20 USD per hour and was fully informed about the tasks to be completed. They reviewed 50 randomly selected groups of instances from the dialogues generated with and without personas.

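Aggregating the pairwise annotator judgments into Win/Tie/Lose rates amounts to simple tallying. A sketch with hypothetical label names, included only to make the protocol concrete:

```python
from collections import Counter

def win_rate(judgments):
    """Turn a list of per-instance annotator judgments ('win' / 'tie' /
    'lose', from the persona dialogue's perspective) into rates."""
    counts = Counter(judgments)
    n = len(judgments)
    return {k: counts.get(k, 0) / n for k in ("win", "tie", "lose")}
```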
The results of the human evaluation are shown in Table [9](https://arxiv.org/html/2502.11451v2#S6.T9 "Table 9 ‣ 6.1 Analysis of Emotional Support Strategies ‣ 6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). Except for the Consistency indicator, on which both dialogue groups performed similarly, the dialogues generated with personas outperformed those without. Based on the dialogue strategies and our observations, we believe that the dialogues generated with personas are better at using rhetorical questions to identify the seeker's issues and gently offer suggestions, making the conversations more in-depth and comforting.

### 6.3 Case Study

A case study supporting this point can be found in Figure [10](https://arxiv.org/html/2502.11451v2#S6.F10 "Figure 10 ‣ 6.1 Analysis of Emotional Support Strategies ‣ 6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"), and the corresponding persona card and dialogue history are in Appendix [F](https://arxiv.org/html/2502.11451v2#A6 "Appendix F Persona Card and Dialogue History of Case Study ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations"). Although both dialogues provide the same level of direct emotional support, the dialogue generated with persona tends to use rhetorical questions to encourage the seeker to reflect and explore the content more deeply, while also offering suggestions more tactfully. In contrast, the dialogue generated without persona is more likely to provide direct affirmations or suggestions. In psychology, when a message is highly relevant to the recipient and they are already motivated to process it, rhetorical questions can enhance the persuasiveness of messages with weak arguments (Petty et al., [1981](https://arxiv.org/html/2502.11451v2#bib.bib39)). Since emotional support is often regarded as a form of weak argument (Petty and Cacioppo, [2012](https://arxiv.org/html/2502.11451v2#bib.bib38)), incorporating rhetorical questions into a supporter's response can make it more acceptable to the seeker and facilitate deeper, more meaningful conversations. This observation highlights the role of personas in enhancing the quality of dialogue generation.

7 Conclusion
------------

This analytical study highlights the potential of incorporating personas into LLM-generated emotional support dialogues to enhance effectiveness and human-likeness. Our findings confirm that LLMs can infer stable traits from personas and maintain key persona characteristics, while revealing shifts in emotionality and extraversion traits that affect dialogue dynamics. Personas not only improve the empathetic quality of responses but also influence the distribution of emotional support strategies, making dialogues more personalized and contextually appropriate. We encourage future researchers to build on our findings to develop more adaptive, effective, and robust ESC chatbots.

Acknowledgments
---------------

This research was supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant (No. MSS24C012).

Limitations
-----------

The reliance on LLM outputs introduces potential biases inherent in the model's training data. These biases may have influenced both the extraction and simulation of personas, possibly affecting the accuracy of the results compared to real persons. Our approach follows recent research on extracting personas using LLMs (Ji et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib25); Zhao et al., [2024](https://arxiv.org/html/2502.11451v2#bib.bib56)), which may still have limitations in accurately capturing and representing complex human traits. Future research may need to investigate the impact of inherent biases in LLMs on persona extraction and on dialogue simulation based on these personas.

Additionally, we employ an omniscient perspective in our data generation process, where both seekers and supporters have access to complete information. While this approach is common in previous studies (Zheng et al., [2023](https://arxiv.org/html/2502.11451v2#bib.bib57), [2024](https://arxiv.org/html/2502.11451v2#bib.bib58)), it does not fully reflect real-world conversation dynamics. Future research may improve realism by simulating emotional support scenarios with distinct information states for each role.

Ethical Considerations
----------------------

Generating emotional support conversations (ESC) with LLMs requires careful ethical consideration. We recognize the risks of using LLMs to generate emotional support responses, especially if the system misinterprets or misrepresents the user's persona, potentially leading to unintended emotional harm. Users must clearly understand that they are interacting with chatbots, not humans, to manage expectations and avoid misleading attachments. Incorporating personas could make LLMs seem more human-like and more persuasive, increasing the risk of user dependency on chatbots instead of seeking professional help. Therefore, implementing safeguards that guide users to human assistance in cases of severe distress is crucial. While LLMs demonstrate the ability to identify and utilize personas, they also inherit issues such as societal biases, the risk of emotional manipulation, and dependency on LLM-generated support. To mitigate these concerns, we emphasize that this research should only be considered for academic purposes and cannot be deployed in real-life emotional support scenarios without additional safeguards. We are committed to improving ESC chatbots to minimize biases, enhance transparency, and support the development of more adaptive and ethically sound emotional support chatbots in the future.

References
----------

* Ait Baha et al. (2023) Tarek Ait Baha, Mohamed El Hajji, Youssef Es-Saady, and Hammou Fadili. 2023. The power of personalization: A systematic review of personality-adaptive chatbots. _SN Computer Science_, 4(5):661.
* Ashton and Lee (2009) Michael C Ashton and Kibeom Lee. 2009. The hexaco–60: A short measure of the major dimensions of personality. _Journal of personality assessment_, 91(4):340–345.
* Briggs (1976) Katharine C Briggs. 1976. _Myers-Briggs type indicator_. Consulting Psychologists Press Palo Alto, CA.
* Capraro and Capraro (2002) Robert M Capraro and Mary Margaret Capraro. 2002. Myers-Briggs type indicator score reliability across studies: A meta-analytic reliability generalization study. _Educational and Psychological Measurement_, 62(4):590–602.
* Chan et al. (2024) Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. 2024. Scaling synthetic data creation with 1,000,000,000 personas. _arXiv preprint arXiv:2406.20094_.
* Cheng et al. (2022) Jiale Cheng, Sahand Sabour, Hao Sun, Zhuang Chen, and Minlie Huang. 2022. Pal: Persona-augmented emotional support conversation generation. _arXiv preprint arXiv:2212.09235_.
* Comrey (1970) Andrew L Comrey. 1970. Comrey personality scales. _San Diego: Edu_.
* Costa and McCrae (2008) Paul T Costa and Robert R McCrae. 2008. The revised neo personality inventory (neo-pi-r). _The SAGE handbook of personality theory and assessment_, 2(2):179–198.
* Costa Jr and McCrae (1992) Paul T Costa Jr and Robert R McCrae. 1992. Four ways five factors are basic. _Personality and individual differences_, 13(6):653–665.
* De Vries et al. (2013) Reinout E De Vries, Angelique Bakker-Pieper, Femke E Konings, and Barbara Schouten. 2013. The communication styles inventory (csi) a six-dimensional behavioral model of communication styles and its relation with personality. _Communication Research_, 40(4):506–532.
* Deng et al. (2023a) Yang Deng, Wenqiang Lei, Minlie Huang, and Tat-Seng Chua. 2023a. [Rethinking conversational agents in the era of llms: Proactivity, non-collaborativity, and beyond](https://doi.org/10.1145/3624918.3629548). In _Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region, SIGIR-AP 2023_, pages 298–301. ACM.
* Deng et al. (2025) Yang Deng, Lizi Liao, Wenqiang Lei, Grace Hui Yang, Wai Lam, and Tat-Seng Chua. 2025. [Proactive conversational AI: A comprehensive survey of advancements and opportunities](https://doi.org/10.1145/3715097). _ACM Trans. Inf. Syst._, 43(3):67:1–67:45.
* Deng et al. (2024a) Yang Deng, Lizi Liao, Zhonghua Zheng, Grace Hui Yang, and Tat-Seng Chua. 2024a. [Towards human-centered proactive conversational agents](https://doi.org/10.1145/3626772.3657843). In _Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2024_, pages 807–818.
* Deng et al. (2024b) Yang Deng, Wenxuan Zhang, Wai Lam, See-Kiong Ng, and Tat-Seng Chua. 2024b. [Plug-and-play policy planner for large language model powered dialogue agents](https://openreview.net/forum?id=MCNqgUFTHI). In _The Twelfth International Conference on Learning Representations, ICLR 2024_. OpenReview.net.
* Deng et al. (2023b) Yang Deng, Wenxuan Zhang, Yifei Yuan, and Wai Lam. 2023b. [Knowledge-enhanced mixed-initiative dialogue system for emotional support conversations](https://doi.org/10.18653/v1/2023.acl-long.225). In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023_, pages 4079–4095. Association for Computational Linguistics.
* Ding et al. (2024) Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, and Shafiq Joty. 2024. [Data augmentation using llms: Data perspectives, learning paradigms and challenges](https://doi.org/10.18653/v1/2024.findings-acl.97). In _Findings of the Association for Computational Linguistics, ACL 2024_, pages 1679–1705.
* Fan et al. (2025) Wenlu Fan, Yuqi Zhu, Chenyang Wang, Bin Wang, and Wentao Xu. 2025. Consistency of responses and continuations generated by large language models on social media. _arXiv preprint arXiv:2501.08102_.
* Frisch and Giulianelli (2024) Ivar Frisch and Mario Giulianelli. 2024. Llm agents in interaction: Measuring personality consistency and linguistic alignment in interacting populations of large language models. _arXiv preprint arXiv:2402.02896_.
* Garg et al. (2022) Muskan Garg, Chandni Saxena, Sriparna Saha, Veena Krishnan, Ruchi Joshi, and Vijay Mago. 2022. [CAMS: An annotated corpus for causal analysis of mental health issues in social media posts](https://aclanthology.org/2022.lrec-1.686). In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_, pages 6387–6396, Marseille, France. European Language Resources Association.
* Han et al. (2024) Seunghee Han, Se Jin Park, Chae Won Kim, and Yong Man Ro. 2024. Persona extraction through semantic similarity for emotional support conversation generation. In _ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 11321–11325. IEEE.
* Hernandez et al. (2023) Javier Hernandez, Jina Suh, Judith Amores, Kael Rowan, Gonzalo Ramos, and Mary Czerwinski. 2023. Affective conversational agents: Understanding expectations and personal influences. _arXiv preprint arXiv:2310.12459_.
* Huang et al. (2023a) Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. 2023a. Revisiting the reliability of psychological scales on large language models. _arXiv e-prints_, pages arXiv–2305.
* Huang et al. (2023b) Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael Lyu. 2023b. On the humanity of conversational ai: Evaluating the psychological portrayal of llms. In _The Twelfth International Conference on Learning Representations_.
* Hughes et al. (2020) David J Hughes, Ioannis K Kratsiotis, Karen Niven, and David Holman. 2020. Personality traits and emotion regulation: A targeted review and recommendations. _Emotion_, 20(1):63.
* Ji et al. (2024) Yongyi Ji, Zhisheng Tang, and Mayank Kejriwal. 2024. Is persona enough for personality? using chatgpt to reconstruct an agent’s latent personality from simple descriptions. _arXiv preprint arXiv:2406.12216_.
* Jiang et al. (2024) Hang Jiang, Xiajie Zhang, Xubo Cao, Cynthia Breazeal, Deb Roy, and Jad Kabbara. 2024. Personallm: Investigating the ability of large language models to express personality traits. In _Findings of the Association for Computational Linguistics: NAACL 2024_, pages 3605–3627.
* Komulainen et al. (2014) Emma Komulainen, Katarina Meskanen, Jari Lipsanen, Jari Marko Lahti, Pekka Jylhä, Tarja Melartin, Marieke Wichers, Erkki Isometsä, and Jesper Ekelund. 2014. The effect of personality on daily life emotional processes. _PloS one_, 9(10):e110907.
* Lahnala et al. (2021) Allison Lahnala, Yuntian Zhao, Charles Welch, Jonathan K. Kummerfeld, Lawrence C. An, Kenneth Resnicow, Rada Mihalcea, and Verónica Pérez-Rosas. 2021. [Exploring self-identified counseling expertise in online support forums](https://doi.org/10.18653/v1/2021.findings-acl.392). In _Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021_, volume ACL/IJCNLP 2021 of _Findings of ACL_, pages 4467–4480. Association for Computational Linguistics.
* Lee and Ashton (2004) Kibeom Lee and Michael C Ashton. 2004. Psychometric properties of the hexaco personality inventory. _Multivariate behavioral research_, 39(2):329–358.
* Liu et al. (2023) June M Liu, Donghao Li, He Cao, Tianhe Ren, Zeyi Liao, and Jiamin Wu. 2023. Chatcounselor: A large language models for mental health support. _arXiv preprint arXiv:2309.15461_.
* Liu et al. (2021) Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. [Towards emotional support dialog systems](https://doi.org/10.18653/v1/2021.acl-long.269). In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_, pages 3469–3483, Online. Association for Computational Linguistics.
* Ma et al. (2024) Zhiqiang Ma, Wenchao Jia, Yutong Zhou, Biqi Xu, Zhiqiang Liu, and Zhuoyi Wu. 2024. Personality enhanced emotion generation modeling for dialogue systems. _Cognitive Computation_, 16(1):293–304.
* Medeiros and Bosse (2018) Lenin Medeiros and Tibor Bosse. 2018. Using crowdsourcing for the development of online emotional support agents. In _Highlights of Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection: International Workshops of PAAMS 2018, Toledo, Spain, June 20–22, 2018, Proceedings 16_, pages 196–209. Springer.
* Meng and Dai (2021) Jingbo Meng and Yue Dai. 2021. Emotional support from ai chatbots: Should a supportive partner self-disclose or not? _Journal of Computer-Mediated Communication_, 26(4):207–222.
* Norton (1978) Robert W Norton. 1978. Foundation of a communicator style construct. _Human communication research_, 4(2):99–112.
* Pan and Zeng (2023) Keyu Pan and Yawen Zeng. 2023. Do llms possess a personality? making the mbti test an amazing evaluation for large language models. _arXiv preprint arXiv:2307.16180_.
* Peng et al. (2022) Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun, and Yunpeng Li. 2022. Control globally, understand locally: A global-to-local hierarchical graph network for emotional support conversation. _arXiv preprint arXiv:2204.12749_.
* Petty and Cacioppo (2012) Richard E Petty and John T Cacioppo. 2012. _Communication and persuasion: Central and peripheral routes to attitude change_. Springer Science & Business Media.
* Petty et al. (1981) Richard E Petty, John T Cacioppo, and Martin Heesacker. 1981. Effects of rhetorical questions on persuasion: A cognitive response analysis. _Journal of personality and social psychology_, 40(3):432.
* Porvatov et al. (2024) Vadim A Porvatov, Carlo Strapparava, and Marina Tiuleneva. 2024. Big-five backstage: A dramatic dataset for characters personality traits & gender analysis. In _Proceedings of the Workshop on Cognitive Aspects of the Lexicon@ LREC-COLING 2024_, pages 114–119.
* Qiu and Lan (2024) Huachuan Qiu and Zhenzhong Lan. 2024. [Interactive agents: Simulating counselor-client psychological counseling via role-playing llm-to-llm interactions](https://doi.org/10.48550/arXiv.2408.15787). _CoRR_, abs/2408.15787.
* Rashkin et al. (2019) Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. [Towards empathetic open-domain conversation models: A new benchmark and dataset](https://doi.org/10.18653/v1/P19-1534). In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_, pages 5370–5381, Florence, Italy. Association for Computational Linguistics.
* Safdari et al. (2023) Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. 2023. Personality traits in large language models. _arXiv preprint arXiv:2307.00184_.
* Sharma et al. (2020a) Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020a. A computational approach to understanding empathy expressed in text-based mental health support. In _EMNLP_.
* Sharma et al. (2020b) Ashish Sharma, Adam S Miner, David C Atkins, and Tim Althoff. 2020b. A computational approach to understanding empathy expressed in text-based mental health support. _arXiv preprint arXiv:2009.08441_.
* Shen et al. (2020) Siqi Shen, Charles Welch, Rada Mihalcea, and Verónica Pérez-Rosas. 2020. [Counseling-style reflection generation using generative pretrained transformers with augmented context](https://doi.org/10.18653/v1/2020.sigdial-1.2). In _Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue_, pages 10–20, 1st virtual meeting. Association for Computational Linguistics.
* Sun et al. (2021) Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021. [PsyQA: A Chinese dataset for generating long counseling text for mental health support](https://doi.org/10.18653/v1/2021.findings-acl.130). In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_, pages 1489–1503, Online. Association for Computational Linguistics.
* Tan et al. (2024) Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh Karami, Jundong Li, Lu Cheng, and Huan Liu. 2024. [Large language models for data annotation: A survey](https://doi.org/10.48550/arXiv.2402.13446). _CoRR_, abs/2402.13446.
* Tu et al. (2023) Quan Tu, Chuanqi Chen, Jinpeng Li, Yanran Li, Shuo Shang, Dongyan Zhao, Ran Wang, and Rui Yan. 2023. Characterchat: Learning towards conversational ai with personalized social support. _arXiv preprint arXiv:2308.10278_.
* Tu et al. (2022) Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, and Rui Yan. 2022. [MISC: A mixed strategy-aware model integrating COMET for emotional support conversation](https://doi.org/10.18653/v1/2022.acl-long.25). In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, pages 308–319, Dublin, Ireland. Association for Computational Linguistics.
* Turcan and McKeown (2019) Elsbeth Turcan and Kathy McKeown. 2019. [Dreaddit: A Reddit dataset for stress analysis in social media](https://doi.org/10.18653/v1/D19-6213). In _Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019)_, pages 97–107, Hong Kong. Association for Computational Linguistics.
* van Pinxteren et al. (2023) Michelle ME van Pinxteren, Mark Pluymaekers, Jos Lemmink, and Anna Krispin. 2023. Effects of communication style on relational outcomes in interactions between customers and embodied conversational agents. _Psychology & Marketing_, 40(5):938–953.
* Wang et al. (2023) Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Jian Yang, et al. 2023. Rolellm: Benchmarking, eliciting, and enhancing role-playing abilities of large language models. _arXiv preprint arXiv:2310.00746_.
* Wu et al. (2024) Shenghan Wu, Wynne Hsu, and Mong Li Lee. 2024. [EHDChat: A knowledge-grounded, empathy-enhanced language model for healthcare interactions](https://doi.org/10.18653/v1/2024.sicon-1.10). In _Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)_, pages 141–151, Miami, Florida, USA. Association for Computational Linguistics.
* Zhang et al. (2024) Tenggan Zhang, Xinjie Zhang, Jinming Zhao, Li Zhou, and Qin Jin. 2024. Escot: Towards interpretable emotional support dialogue systems. _arXiv preprint arXiv:2406.10960_.
* Zhao et al. (2024) Haiquan Zhao, Lingyu Li, Shisong Chen, Shuqi Kong, Jiaan Wang, Kexin Huang, Tianle Gu, Yixu Wang, Dandan Liang, Zhixu Li, et al. 2024. Esc-eval: Evaluating emotion support conversations in large language models. _arXiv preprint arXiv:2406.14952_.
* Zheng et al. (2023) Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, and Minlie Huang. 2023. [Augesc: Dialogue augmentation with large language models for emotional support conversation](https://doi.org/10.18653/v1/2023.findings-acl.99). In _Findings of the Association for Computational Linguistics: ACL 2023_, pages 1552–1568.
* Zheng et al. (2024) Zhonghua Zheng, Lizi Liao, Yang Deng, Libo Qin, and Liqiang Nie. 2024. [Self-chats from large language models make small emotional support chatbot better](https://doi.org/10.18653/v1/2024.acl-long.611). In _Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024_, pages 11325–11345. Association for Computational Linguistics.

Appendix A Prompts for Generating Persona Cards
-----------------------------------------------

This section presents the prompts for generating persona cards, including the basic persona and persona traits. The prompt in Figure [11](https://arxiv.org/html/2502.11451v2#A1.F11 "Figure 11 ‣ Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") is used to extract basic persona information from dialogues, including age, gender, occupation, and a socio-demographic description. Figure [12](https://arxiv.org/html/2502.11451v2#A1.F12 "Figure 12 ‣ Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") presents the prompt used to determine the most suitable description of the HEXACO or CSI indicators based on the extracted socio-demographic description. Figure [13](https://arxiv.org/html/2502.11451v2#A1.F13 "Figure 13 ‣ Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows the prompt used by LLMs to answer the HEXACO and CSI inventories based on the extracted personas. Figure [14](https://arxiv.org/html/2502.11451v2#A1.F14 "Figure 14 ‣ Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") provides the prompt used to filter out unclear personas and those that do not provide identifiable identity information.

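Once an LLM has answered an inventory item by item, the per-dimension scores follow standard Likert scoring. The sketch below uses illustrative item numbers, not the actual HEXACO-60 or CSI scoring keys:

```python
def score_dimension(responses, items, reversed_items, scale_max=5):
    """Average the 1-5 Likert answers for one trait dimension, flipping
    reverse-keyed items. `responses` maps item number -> answer."""
    values = []
    for item in items:
        answer = responses[item]
        values.append(scale_max + 1 - answer if item in reversed_items
                      else answer)
    return sum(values) / len(values)
```

For example, with hypothetical items 1 and 2 where item 2 is reverse-keyed, answers of 5 and 1 both contribute the maximal keyed value, giving a dimension score of 5.0.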
![Image 11: Refer to caption](https://arxiv.org/html/2502.11451v2/x11.png)

Figure 11: Prompt for extracting the basic persona from the dialogue.

![Image 12: Refer to caption](https://arxiv.org/html/2502.11451v2/x12.png)

Figure 12: Prompt for producing the best descriptions of the HEXACO/CSI indicators.

![Image 13: Refer to caption](https://arxiv.org/html/2502.11451v2/x13.png)

Figure 13: Prompt for answering HEXACO/CSI inventories based on persona.

![Image 14: Refer to caption](https://arxiv.org/html/2502.11451v2/x14.png)

Figure 14: Prompt for filtering personas.

Appendix B Prompts for Measuring the Stability of Inferring Persona Traits
--------------------------------------------------------------------------

![Image 15: Refer to caption](https://arxiv.org/html/2502.11451v2/x15.png)

Figure 15: Prompt for extending personas from Persona Hub.

295
+ In this section, we introduce the prompts used to measure whether LLMs can infer stable traits from persona in emotional support dialogues. Figure [15](https://arxiv.org/html/2502.11451v2#A2.F15 "Figure 15 ‣ Appendix B Prompts for Measuring the Stability of Inferring Persona Traits ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows the prompt used to extend the socio-demographic description of the persona. Figure [17](https://arxiv.org/html/2502.11451v2#A4.F17 "Figure 17 ‣ Appendix D Prompts for Synthesizing Dialogues w/o Persona Traits ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") displays the prompt to generate emotional support dialogues with strategies based on the given persona. Appendix [A](https://arxiv.org/html/2502.11451v2#A1 "Appendix A Prompts for Generating Persona Cards ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") lists the prompts used to obtain HEXACO and CSI scores from personas.
Appendix C Correlations on Other Datasets
-----------------------------------------

In this section, we report the correlations between HEXACO and CSI for personas from the CAMS (Tables 10, 11 and 12) and Dreaddit (Tables 13, 14 and 15) datasets. The findings are consistent with those in Section [4](https://arxiv.org/html/2502.11451v2#S4 "4 Measurement of Persona Traits (RQ1) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations").

Table 10: Correlations between CSI and HEXACO of CAMS personas measured by gpt-4o-mini.

Table 11: Correlations between CSI and HEXACO from CAMS measured by Claude-3.5-haiku.

Table 12: Correlations between CSI and HEXACO from CAMS measured by LLaMA-3.1-8B-Instruct.

Table 13: Correlations between CSI and HEXACO of Dreaddit personas measured by gpt-4o-mini.

Table 14: Correlations between CSI and HEXACO from Dreaddit measured by Claude-3.5-haiku.

Table 15: Correlations between CSI and HEXACO from Dreaddit measured by LLaMA-3.1-8B-Instruct.
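For reference, correlations of this kind can be computed directly from the per-persona trait scores. The sketch below uses hypothetical scores for one HEXACO dimension and one CSI dimension (the values and dimension names are illustrative, not data from the paper):

```python
import numpy as np

# Hypothetical per-persona scores for one HEXACO dimension and one CSI
# dimension; in practice these come from the LLM's answers to the inventories.
hexaco_emotionality = np.array([3.2, 4.1, 2.8, 3.9, 4.5, 2.5])
csi_expressiveness = np.array([2.9, 4.3, 2.6, 3.7, 4.8, 2.2])

# Pearson correlation between the two trait dimensions across personas.
r = np.corrcoef(hexaco_emotionality, csi_expressiveness)[0, 1]
print(round(r, 3))
```

Repeating this for every HEXACO/CSI dimension pair yields a correlation table of the shape reported above.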
Appendix D Prompts for Synthesizing Dialogues w/o Persona Traits
----------------------------------------------------------------

In this part, we give the prompts for synthesizing emotional support dialogues, with and without persona traits, by continuing ESConv dialogues. Figure [16](https://arxiv.org/html/2502.11451v2#A4.F16 "Figure 16 ‣ Appendix D Prompts for Synthesizing Dialogues w/o Persona Traits ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") illustrates the prompt used to generate dialogues with personas, while the prompt shown in Figure [18](https://arxiv.org/html/2502.11451v2#A4.F18 "Figure 18 ‣ Appendix D Prompts for Synthesizing Dialogues w/o Persona Traits ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") is used to generate dialogues without personas.

![Image 16: Refer to caption](https://arxiv.org/html/2502.11451v2/x16.png)

Figure 16: Prompt for synthesizing dialogues with persona traits.

![Image 17: Refer to caption](https://arxiv.org/html/2502.11451v2/x17.png)

Figure 17: Prompt for generating emotional support dialogue based on the given persona.

![Image 18: Refer to caption](https://arxiv.org/html/2502.11451v2/x18.png)

Figure 18: Prompt for synthesizing dialogues without persona traits.
Appendix E Results of Persona Consistency
-----------------------------------------

Figure [7](https://arxiv.org/html/2502.11451v2#S5.F7 "Figure 7 ‣ 5 Persona Consistency in LLM-simulated Emotional Support Dialogues (RQ2) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") and Figures 19, 20 and [21](https://arxiv.org/html/2502.11451v2#A5.F21 "Figure 21 ‣ Appendix E Results of Persona Consistency ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") show the results of measuring communication style consistency after synthesizing emotional support dialogues with GPT-4o-mini, Claude-3.5-Haiku, and LLaMA-3.1-8B-Instruct. The results are comparable to those of the other models discussed in Section [5.2](https://arxiv.org/html/2502.11451v2#S5.SS2 "5.2 Results and Discussions ‣ 5 Persona Consistency in LLM-simulated Emotional Support Dialogues (RQ2) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations").

![Image 19: Refer to caption](https://arxiv.org/html/2502.11451v2/x19.png)

Figure 19: Comparison of CSI scores between the original persona and the one extracted from the dialogue generated by gpt-4o-mini.

![Image 20: Refer to caption](https://arxiv.org/html/2502.11451v2/x20.png)

Figure 20: Comparison of CSI scores between the original persona and the one extracted from the dialogue generated by claude-3.5-haiku.

![Image 21: Refer to caption](https://arxiv.org/html/2502.11451v2/x21.png)

Figure 21: Comparison of CSI scores between the original persona and the one extracted from the dialogue generated by LLaMA-3.1-8B-Instruct.

![Image 22: Refer to caption](https://arxiv.org/html/2502.11451v2/x22.png)

Figure 22: The persona card of the case study.

![Image 23: Refer to caption](https://arxiv.org/html/2502.11451v2/x23.png)

Figure 23: The historical dialogue of the case study.

Appendix F Persona Card and Dialogue History of Case Study
----------------------------------------------------------

Figure [22](https://arxiv.org/html/2502.11451v2#A5.F22 "Figure 22 ‣ Appendix E Results of Persona Consistency ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows the persona card injected into the generation, while Figure [23](https://arxiv.org/html/2502.11451v2#A5.F23 "Figure 23 ‣ Appendix E Results of Persona Consistency ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows the historical dialogue of the case from ESConv.
Appendix G Definitions of Emotional Support Strategies
------------------------------------------------------

In our experiments, we use the emotional support strategies defined in the ESConv dataset (Liu et al., [2021](https://arxiv.org/html/2502.11451v2#bib.bib31)). The definitions are as follows:

Question: Asking open-ended or specific questions related to the problem to help the seeker articulate their issues and provide clarity.

Restatement or Paraphrasing: Concisely rephrasing the seeker’s statements to help them better understand their situation.

Reflection of feelings: Expressing and clarifying the seeker’s emotions to acknowledge their feelings.

Self-disclosure: Sharing similar experiences or emotions to build empathy and connection.

Affirmation and Reassurance: Affirming the seeker’s strengths, motivation, and capabilities, and providing reassurance and encouragement.

Providing Suggestions: Offering possible ways forward while respecting the seeker’s autonomy.

Information: Providing useful information to the seeker.

Others: Exchanging pleasantries and using strategies beyond the defined categories.
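When analyzing how persona injection shifts the strategy distribution, each supporter turn is labeled with one of the strategies above and the labels are tallied. A minimal counting sketch, with illustrative turn labels rather than actual ESConv annotations:

```python
from collections import Counter

# The eight ESConv strategy labels defined above.
STRATEGIES = [
    "Question", "Restatement or Paraphrasing", "Reflection of feelings",
    "Self-disclosure", "Affirmation and Reassurance",
    "Providing Suggestions", "Information", "Others",
]

# Illustrative supporter-turn labels from one generated dialogue.
turns = ["Question", "Reflection of feelings", "Question",
         "Affirmation and Reassurance", "Providing Suggestions"]

counts = Counter(turns)
# Normalized strategy distribution over all defined strategies.
distribution = {s: counts.get(s, 0) / len(turns) for s in STRATEGIES}
print(distribution["Question"])  # 2 of 5 turns -> 0.4
```

Comparing such distributions for dialogues generated with and without personas surfaces the kind of strategy shift discussed in Section 6.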
Appendix H Dialogue Generation Statistics
-----------------------------------------

Table 16: Statistics of generated dialogues.

Table [16](https://arxiv.org/html/2502.11451v2#A8.T16 "Table 16 ‣ Appendix H Dialogue Generation Statistics ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations") shows the statistics of dialogues generated w/ and w/o personas. We can observe that dialogues generated with personas have fewer turns but substantially longer utterances on average, for both the seeker and the supporter. This aligns with our qualitative findings (case study, Figure [10](https://arxiv.org/html/2502.11451v2#S6.F10 "Figure 10 ‣ 6.1 Analysis of Emotional Support Strategies ‣ 6 Impact of Persona on LLM-simulated Emotional Support Dialogues (RQ3) ‣ From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations")): the persona-guided supporter asks more targeted questions, leading to more substantive replies from the seeker and a more efficient, in-depth conversation.
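Statistics such as turn counts and average utterance length can be computed with a short script. The sketch below assumes a simple role/text representation of a generated dialogue (the format and example utterances are illustrative, not the paper's actual data schema):

```python
# Sketch of how per-dialogue statistics could be computed from generated
# dialogues; each turn is a dict with a "role" and the utterance "text".
dialogue = [
    {"role": "seeker", "text": "I feel overwhelmed by my new job."},
    {"role": "supporter",
     "text": "That sounds stressful. What part of the job weighs on you most?"},
    {"role": "seeker",
     "text": "Mostly the deadlines and the fear of disappointing my team."},
]

def avg_words_per_turn(dialogue, role):
    """Average number of words per utterance for the given role."""
    lengths = [len(t["text"].split()) for t in dialogue if t["role"] == role]
    return sum(lengths) / len(lengths) if lengths else 0.0

num_turns = len(dialogue)
print(num_turns,
      avg_words_per_turn(dialogue, "seeker"),
      avg_words_per_turn(dialogue, "supporter"))
```

Averaging these per-dialogue values over the whole generated corpus gives corpus-level statistics of the kind reported in Table 16.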