diff --git "a/2510/2510.16829.md" "b/2510/2510.16829.md" new file mode 100644--- /dev/null +++ "b/2510/2510.16829.md" @@ -0,0 +1,591 @@ +Title: Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation + +URL Source: https://arxiv.org/html/2510.16829 + +Published Time: Tue, 21 Oct 2025 00:52:23 GMT + +Markdown Content: +Navreet Kaur 1 Hoda Ayad 1 Hayoung Jung 2 1 1 footnotemark: 1 + +Shravika Mittal 3 Munmun De Choudhury 3 Tanushree Mitra 1 + +1 University of Washington 2 Princeton University 3 Georgia Institute of Technology + +kanavr@uw.edu, tmitra@uw.edu + +###### Abstract + +Language model users often embed personal and social context in their questions. The asker’s role—implicit in how the question is framed—creates specific needs for an appropriate response. However, most evaluations, while capturing the model’s capability to respond, often ignore who is asking. This gap is especially critical in stigmatized domains such as opioid use disorder (OUD), where accounting for users’ contexts is essential to provide accessible, stigma-free responses. We propose CoRUS (CO mmunity-driven R oles for U ser-centric Question S imulation), a framework for simulating role-based questions. Drawing on role theory and posts from an online OUD recovery community (r/OpiatesRecovery), we first build a taxonomy of asker roles—patients, caregivers, practitioners. Next, we use it to simulate 15,321 15{,}321 questions that embed each role’s goals, behaviors, and experiences. Our evaluations show that these questions are both highly believable and comparable to real-world data. When used to evaluate five LLMs, for the same question but differing roles, we find systematic differences: vulnerable roles, such as patients and caregivers, elicit more supportive responses (+17%+17\%) and reduced knowledge content (−19%-19\%) in comparison to practitioners. Our work demonstrates how implicitly signaling a user’s role shapes model responses, and provides a methodology for role-informed evaluation of conversational AI.1 1 1 We will release the datasets and code upon publication. + +Who’s Asking? Simulating Role-Based Questions + +for Conversational AI Evaluation + +Navreet Kaur 1 Hoda Ayad 1††thanks: Equal contribution. Hayoung Jung 2 1 1 footnotemark: 1††thanks: Work done while at the University of Washington.Shravika Mittal 3 Munmun De Choudhury 3 Tanushree Mitra 1 1 University of Washington 2 Princeton University 3 Georgia Institute of Technology kanavr@uw.edu, tmitra@uw.edu + +1 Introduction +-------------- + +![Image 1: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/framework.001.png) + +Figure 1: CoRUS framework. Users’ framing of a question often reflects the social context in which it is asked, yet most evaluations ignore this. We propose CoRUS to (1) derive a taxonomy of askers’ roles, grounded in role theory and posts from an online recovery community (§[3](https://arxiv.org/html/2510.16829v1#S3 "3 Taxonomy of Information-seeking Roles ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), (2) simulate role-based questions embedded with social context (§[4](https://arxiv.org/html/2510.16829v1#S4 "4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), and (3) evaluate models to assess how role-based context influences response (§[5](https://arxiv.org/html/2510.16829v1#S5 "5 Application: Role-based Evaluations ‣ Who’s Asking? 
> User 1 (patient): I am on day 3 of detoxing from opiates. I can’t sleep and keep throwing up. I don’t wanna give up. How to make this bearable?
> 
> 
> User 2 (caregiver): My son is withdrawing from opiates. I feel so helpless watching him suffer. What can I do to help him through this?

In the examples above, both users pose the same question—“How to manage withdrawal symptoms?”—yet from different perspectives: the first is from a patient going through withdrawal, while the second is from a caregiver caring for a loved one. These perspectives reflect users’ roles (§[2](https://arxiv.org/html/2510.16829v1#S2)), which capture the goals, behaviors, and experiences that shape how they frame questions (§[3](https://arxiv.org/html/2510.16829v1#S3)). Users often share such personal and social context with large language model (LLM) powered conversational AI systems Mireshghallah et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib59)) to receive tailored advice Zhang et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib105)).

Yet, most evaluations of such systems strip away this context Malaviya et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib54)), ignoring the social framing that shapes how people actually ask (§[2](https://arxiv.org/html/2510.16829v1#S2)). This limitation is especially consequential in stigmatized domains like opioid use disorder (OUD)—a leading cause of death in the U.S. NIDA ([2023](https://arxiv.org/html/2510.16829v1#bib.bib61))—where stigma and mistrust limit access to care Cernasev et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib12)); Woo et al. ([2017](https://arxiv.org/html/2510.16829v1#bib.bib101)). As a result, people increasingly turn to online communities Balsamo et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib5)), and now to LLMs Choy et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib20)), for treatment planning and even for seeking hope McCain et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib56)). Thus, the same question posed to an LLM may carry different expectations depending on _who_ is asking (as seen in our opening examples). While LLMs invite role-specific disclosure due to perceived anonymity and judgment-free interaction, we still lack a systematic understanding of how an asker’s role and their social context—implicitly signaled in their queries—influence responses in stigmatized domains. Simply put, current LLM evaluations are role-agnostic and fail to capture the nuances of _who’s asking_.

In this work, we propose CoRUS—COmmunity-driven Roles for User-centric Question Simulation—a framework that embeds role-based context implicitly into a given question for LLM evaluation, drawing on how people share personal and social context in their queries in online communities (Figure [1](https://arxiv.org/html/2510.16829v1#S1.F1)). By applying this framework to OUD recovery, we investigate three main research questions.
First, we investigate _who_ brings their questions into an OUD recovery community (§[3](https://arxiv.org/html/2510.16829v1#S3)). Drawing on role theory Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)); Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)), which conceptualizes roles as socially structured expectations shaping how individuals behave and how others respond, we derive a taxonomy of roles using posts on r/OpiatesRecovery—an online community dedicated to OUD recovery discussions. We identify three primary roles defined by their goals, behaviors, and experiences (see Table [1](https://arxiv.org/html/2510.16829v1#S3.T1)).

Second, we study whether believable questions—those that are natural, realistic, or close proxies of questions asked by humans and aligned with the asker’s social context Park et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib69)); Zhou et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib109))—can be simulated for different roles (§[4](https://arxiv.org/html/2510.16829v1#S4)). Questions must be believable to mimic how people actually ask; otherwise, they remain detached from real use and lose ecological validity Liu ([2025](https://arxiv.org/html/2510.16829v1#bib.bib53)). We simulate role-specific variants of questions extracted from r/OpiatesRecovery by leveraging our taxonomy to embed each role’s goals, behaviors, and experiences. Our evaluations show that 90% of these questions are as believable as real-user queries from WildChat Zhao et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib106)) and Reddit, and 99% faithfully reflect each role’s perspective (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1)). We contribute 15,321 role-specific questions, a large-scale resource for evaluating LLMs beyond context-free questions.

Third, we evaluate five user-facing LLMs to examine how role-based contexts change responses relative to role-agnostic queries (§[5](https://arxiv.org/html/2510.16829v1#S5)). Unlike existing evaluations that ignore who is asking, CoRUS reveals systematic shifts (§[5.3](https://arxiv.org/html/2510.16829v1#S5.SS3)): patient and caregiver contexts reduce knowledge (≥15%) but increase support (≥15%) and readability (≥24%); practitioner context leaves knowledge unchanged, and decreases support (9%) and readability (15%).
While models provide supportive responses, they are less informative for those most in need (Eslami-Jahromi et al., [2021](https://arxiv.org/html/2510.16829v1#bib.bib25); Mardani et al., [2023](https://arxiv.org/html/2510.16829v1#bib.bib55)), posing a challenge to the usefulness of LLMs in sensitive domains.

Overall, we show that role-agnostic evaluations, which ignore the asker’s role, miss systematic differences in how LLMs respond to the same question across users. Overlooking such role-based variation risks designing systems that fail to meet distinct user needs. While we focus on OUD recovery, CoRUS is extendable to other domains, and we hope it inspires more role-aware evaluation of conversational AI, where social context shapes human-AI interaction Liao and Xiao ([2023](https://arxiv.org/html/2510.16829v1#bib.bib51)).

2 Why Role-based Evaluations?
-----------------------------

What users share, and how they phrase questions, implicitly represent their needs, expertise, and lived experiences Chan et al. ([2010](https://arxiv.org/html/2510.16829v1#bib.bib13)); Yang et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib103)); Saxena and Reddy ([2022](https://arxiv.org/html/2510.16829v1#bib.bib78)); Tseng et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib89))—that is, their role. Role theory conceptualizes a role as the socially structured expectations that shape how an individual behaves and how others respond Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)), which varies with circumstances Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)). Asking a question is thus a performance of role, enacted through framing and disclosure Goffman ([1949](https://arxiv.org/html/2510.16829v1#bib.bib31)). Questions are therefore not role-agnostic requests, but social acts that encode the asker’s role and expectations.

Such role-based questions are especially critical in stigmatized, high-stakes domains, where users seek not just information but also support. In OUD recovery, individuals turn to online communities Balsamo et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib5)), and increasingly to LLMs for guidance Choy et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib20)), as stigma limits access to care Centers for Disease Control and Prevention ([2023](https://arxiv.org/html/2510.16829v1#bib.bib11)). Crucially, what counts as an appropriate response to the same question depends on who asks it—a distinction that current role-agnostic evaluations fail to capture (see Figure [1](https://arxiv.org/html/2510.16829v1#S1.F1)). For instance, a generic response to “How to manage withdrawal symptoms?” that suggests seeking medication and tapering opioids, and warns about relapse, may seem reasonable; however, it can fail to comfort a distressed patient, be impractical for a caregiver, or lack rigor for a practitioner. Current role-agnostic evaluations fail to identify such nuanced failures, a gap we address in this work.

3 Taxonomy of Information-seeking Roles
---------------------------------------

Table 1: Taxonomy of information-seeking roles. Roles are characterized by their goals, behaviors, and experiences, derived from r/OpiatesRecovery posts, with each illustrated by a paraphrased example post.
We develop a taxonomy of information-seeking roles by combining a top-down framework grounded in role theory Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)); Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)); Yang et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib103)) (§[3.1](https://arxiv.org/html/2510.16829v1#S3.SS1)) with a large-scale, bottom-up analysis of Reddit posts (§[3.2](https://arxiv.org/html/2510.16829v1#S3.SS2)).

### 3.1 Operationalizing Role

Informed by role theory Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)); Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)), we adapt Yang et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib103))’s framework to characterize information-seeking roles in OUD recovery through the following facets: (i) Goals: intents such as seeking information or emotional support, or disclosing struggles; (ii) Behaviors: observable linguistic features, including emotional expressions, mentions of medical terms, or references to external resources; and (iii) Experiences: descriptions of lived experiences, reflecting comfort with disclosing sensitive information.

### 3.2 Taxonomy Construction

Recent work shows that LLM-based summarization and clustering pipelines can support large-scale data analysis Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)); Lam et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib47)); Wan et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib94)); Rao et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib72)). We build on this methodology to construct a taxonomy of information-seeking roles in the OUD recovery community, grounding the analysis in the role-specific facets, i.e., goal, behavior, and experience (§[3.1](https://arxiv.org/html/2510.16829v1#S3.SS1)). Our method blends inductive coding with deductive guidance from prior work Yang et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib103)); Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)); Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)): while LLMs surface preliminary codes, human judgments remain central in refining the taxonomy. Below we outline the key steps for constructing the taxonomy.

(1) Summarization. We collect 10,017 posts between January 1, 2023, and December 31, 2024, from r/OpiatesRecovery, a Reddit community “dedicated to helping each other stop and stay stopped” from opioid use. Following Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)), we scale qualitative analysis by using Claude 3 Haiku (claude-3-haiku-20240307) to generate facet-based summaries (goal, behavior, experience) per post (examples in Table [4](https://arxiv.org/html/2510.16829v1#A1.T4)), mirroring the initial coding phase of thematic analysis Clarke and Braun ([2017](https://arxiv.org/html/2510.16829v1#bib.bib21)). (2) Clustering. We embed these summaries using a sentence transformer (all-mpnet-base-v2) Reimers and Gurevych ([2019](https://arxiv.org/html/2510.16829v1#bib.bib73)); Song et al. ([2020](https://arxiv.org/html/2510.16829v1#bib.bib85)), and cluster them with k-means, selecting four clusters based on UMAP visualization McInnes et al. ([2018](https://arxiv.org/html/2510.16829v1#bib.bib57)) and Silhouette analysis Rousseeuw ([1987](https://arxiv.org/html/2510.16829v1#bib.bib75)) (see Appendix [A](https://arxiv.org/html/2510.16829v1#A1) for details). We assign preliminary cluster descriptions using Claude 3.7 Sonnet (claude-3-7-sonnet-20250219) and refine each by manually reviewing 40 posts per cluster (see Appendix [A](https://arxiv.org/html/2510.16829v1#A1) for details).
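As a rough illustration of this clustering step, the sketch below embeds facet-based summaries with the same sentence transformer and sweeps candidate cluster counts with silhouette scores. The toy summaries and the sweep range are placeholders, not the paper’s data or exact procedure.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Facet-based summaries (goal, behavior, experience) from step (1).
# A few toy strings stand in for the ~10k real summaries.
summaries = [
    "Goal: advice on tapering; Behavior: mentions methadone; Experience: two years in recovery",
    "Goal: reassurance during detox; Behavior: emotional language; Experience: day 3 of withdrawal",
    "Goal: help a family member; Behavior: expresses worry; Experience: no personal opioid use",
    "Goal: support a spouse in recovery; Behavior: asks practical questions; Experience: caregiver",
    "Goal: clinical dosing guidance; Behavior: medical terminology; Experience: treats OUD patients",
    "Goal: share a milestone; Behavior: celebratory tone; Experience: one year abstinent",
]

# Step (2): embed the summaries and cluster them with k-means.
encoder = SentenceTransformer("all-mpnet-base-v2")
embeddings = encoder.encode(summaries, normalize_embeddings=True)

# The paper selects four clusters via UMAP inspection and silhouette analysis;
# here we simply report silhouette scores for a few candidate k values.
for k in range(2, min(6, len(summaries))):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    print(f"k={k}  silhouette={silhouette_score(embeddings, labels):.3f}")
```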
(3) Human Validation. Two expert annotators with prior experience in health NLP research validate both the generated facet-based summaries and the cluster labels, showing high inter-rater agreement and close alignment with model outputs (Cohen’s κ 0.71–0.93, accuracy 0.70–0.88; see Tables [5](https://arxiv.org/html/2510.16829v1#A1.T5)–[6](https://arxiv.org/html/2510.16829v1#A1.T6)). Further details are in Appendix [A](https://arxiv.org/html/2510.16829v1#A1).

#### Taxonomy Structure.

Through this process, we construct a taxonomy of three information-seeking roles in the OUD recovery community: Patient, Caregiver, and Practitioner, each characterized by specific goals, behaviors, and experiences (Table [1](https://arxiv.org/html/2510.16829v1#S3.T1)). (We use the term information-seeking broadly to include requests for information, support, or guidance.) We also identify a Community Participant role, but focus on the former three—most relevant to conversational AI use Chatterji et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib15))—and discuss the rest in Appendix [A](https://arxiv.org/html/2510.16829v1#A1) (Table [7](https://arxiv.org/html/2510.16829v1#A1.T7)).

4 Simulating Role-based Questions
---------------------------------

Datasets that capture how different user roles frame questions do not exist (§[2](https://arxiv.org/html/2510.16829v1#S2)), and simply prompting LLMs to generate synthetic data often drifts from real-world style and distribution Veselovsky et al.
([2023](https://arxiv.org/html/2510.16829v1#bib.bib93)). Moreover, explicitly specifying demographic or identity attributes in prompts risks stereotyping Cheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib17)); Dammu et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib22)). To address these challenges, we embed roles implicitly in queries. We first extract information-seeking questions from r/OpiatesRecovery, and then use our taxonomy of information-seeking roles (§[3](https://arxiv.org/html/2510.16829v1#S3)) to generate role-framed variants.

Table 2: Metrics for evaluating the believability of simulated role-based questions (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1)).

#### Curating Information-Seeking Questions.

Following Kumar et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib46)), we filter 10,017 posts to retain only English, non-deleted, text-only posts, excluding moderator posts. We detect questions by checking whether the title, or the first or last two body sentences, end with a question mark (86% precision by manual check). We then prompt GPT-4o-mini to rewrite each as a role-agnostic question, removing any personal details and role-revealing cues. Two authors validate 50 samples, confirming that 94% of rewrites preserve the original question and that only 0.06% of questions have role leakage (Cohen’s κ = 0.82). GPT-4.1, validated against these human labels (72–78% accuracy), provides additional filtering, resulting in 5,107 information-seeking questions (see Appendix [B](https://arxiv.org/html/2510.16829v1#A2.SS0.SSS0.Px1) for details).

#### Role-Based Simulation Design.

We simulate role-based questions by combining: (i) information-seeking questions, which capture topical diversity, and (ii) the taxonomy of information-seeking roles (§[3](https://arxiv.org/html/2510.16829v1#S3)), which encodes role-based attributes. For each seed question, we condition Claude-3.7-Sonnet on these attributes, along with sampled facet-based summaries (Table [4](https://arxiv.org/html/2510.16829v1#A1.T4)), enriching simulated questions with behavioral and experiential nuances (details are in Appendix [B](https://arxiv.org/html/2510.16829v1#A2)). Applying CoRUS to 5,107 information-seeking questions across three roles—Patients, Caregivers, and Practitioners—yields 15,321 role-based questions (examples in Table [9](https://arxiv.org/html/2510.16829v1#A2.T9)).
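To make the conditioning step concrete, here is a minimal sketch of how a role-framed variant could be generated from a role-agnostic seed question. The role-attribute strings and prompt wording are illustrative assumptions (the paper’s actual prompts are in its Appendix B); the client call uses the Anthropic SDK with the Claude 3.7 Sonnet model named earlier.

```python
import anthropic

# Role attributes distilled from the taxonomy (Table 1); the wording here is
# an illustrative assumption, not the paper's prompt.
ROLE_ATTRIBUTES = {
    "patient": "goal: seek guidance and reassurance; behavior: informal, emotional language; "
               "experience: first-person account of opioid use and withdrawal",
    "caregiver": "goal: help a loved one; behavior: expresses worry, asks practical questions; "
                 "experience: observing a family member's recovery",
    "practitioner": "goal: obtain clinically precise guidance; behavior: uses medical terminology; "
                    "experience: treats patients with OUD",
}

def simulate_role_question(seed_question: str, role: str, facet_summary: str) -> str:
    """Rewrite a role-agnostic seed question as an implicitly role-framed question."""
    prompt = (
        f"Rewrite the question below as it might be asked by a {role} in an opioid use "
        f"disorder recovery setting. Embed the role implicitly through framing and "
        f"disclosure; do not state the role explicitly.\n"
        f"Role attributes: {ROLE_ATTRIBUTES[role]}\n"
        f"Context from a sampled facet-based summary: {facet_summary}\n"
        f"Question: {seed_question}\n"
        f"Rewritten question:"
    )
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text.strip()
```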
Next, we demonstrate the usefulness of the taxonomy in simulating these realistic role-based queries (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1)).

### 4.1 Evaluating Simulated Questions

#### Metrics.

To ensure ecological validity Liu ([2025](https://arxiv.org/html/2510.16829v1#bib.bib53)), simulated questions should mimic how people actually ask. We therefore evaluate whether CoRUS generates believable role-based questions. Prior work defines believability as behavior perceived as natural, realistic, and role-aligned Zhou et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib109)); Park et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib69)). We operationalize this through five metrics grounded in the style transfer, role-playing, and user-modeling literature, outlined in Table [2](https://arxiv.org/html/2510.16829v1#S4.T2). Examples of questions that satisfy or do not satisfy these metrics are in Table [11](https://arxiv.org/html/2510.16829v1#A3.T11).

#### Evaluation Setup.

We use the following evaluation setup: (1) Automatic: GPT-4.1 judges Zheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib107)) all questions on all metrics (Table [2](https://arxiv.org/html/2510.16829v1#S4.T2)) using binary labels; for content preservation, we collapse five rating levels to binary following Briakou et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib10)) (prompts are in Appendix [C.2](https://arxiv.org/html/2510.16829v1#A3.SS2)). (2) Human: We validate each GPT-4.1 judge with 7,365 human annotations across 491 queries (Table [10](https://arxiv.org/html/2510.16829v1#A3.T10)). Three annotators, recruited via Prolific, independently perform the same task as the GPT-4.1 rater (Gwet’s AC1 0.63–0.99; GPT-4.1 accuracy 0.70–0.90). Full details are in Appendix [C.3](https://arxiv.org/html/2510.16829v1#A3.SS3).
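The binary judging step can be pictured as follows. The prompt text and metric description are illustrative assumptions (the paper’s judge prompts are in its Appendix C.2); the call uses the OpenAI SDK with the GPT-4.1 model named above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_binary(question: str, metric_description: str) -> bool:
    """Ask the judge model for a YES/NO label on one believability metric."""
    prompt = (
        "You are evaluating a question submitted to a conversational AI system.\n"
        f"Criterion: {metric_description}\n"
        f"Question: {question}\n"
        "Answer with exactly one word, YES or NO."
    )
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# Example: human-likeness, one of the believability metrics in Table 2
# (the criterion wording is a placeholder, not the paper's definition).
label = judge_binary(
    "My son is withdrawing from opiates and I feel helpless. What can I do for him?",
    "The question reads as if written by a real person rather than generated text.",
)
```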
We evaluate CoRUS-simulated questions against two sets (Table [10](https://arxiv.org/html/2510.16829v1#A3.T10)): (i) real-user queries from r/OpiatesRecovery and WildChat Zhao et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib106)), as a reference to test believability (WildChat lacks OUD-specific queries, so we use health- and clinical-related questions from the dataset, identified using task and domain labels from Mireshghallah et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib59))), and (ii) prompting variants—R, RG, RGB, and RGBE (role + goal + behavior + experience)—as ablation baselines against the full CoRUS pipeline, which also incorporates behavior- and experience-based summaries for greater nuance.

### 4.2 Results

![Image 2: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/corus-eval.jpeg)

Figure 2: Evaluation of Simulated Role-based Questions. Top: CoRUS produces questions with believability comparable to real-world ones. Bottom: CoRUS produces significantly more human-like and contextually plausible queries than other prompting methods, while maintaining high interaction plausibility and other properties (Appendix [C](https://arxiv.org/html/2510.16829v1#A3), Figure [7](https://arxiv.org/html/2510.16829v1#A3.F7)). * denotes significant difference from CoRUS (chi-square test, p < 0.05); error bars show 95% CI.

![Image 3: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/response-score-change.png)

Figure 3: Percentage change in Knowledge, Support, and Readability scores of model responses when role-based context is added to the queries. Across all models, patient and caregiver context reduces knowledge, but increases support and readability. Practitioner context shows the inverse trend, with lower support, lower readability, and no significant change in knowledge. More details in Figures [10](https://arxiv.org/html/2510.16829v1#A5.F10) and [11](https://arxiv.org/html/2510.16829v1#A5.F11). Error bars represent 95% CI.

#### Comparison with Real-User Queries.

CoRUS-generated queries achieve believability comparable to real-user queries (Figure [2](https://arxiv.org/html/2510.16829v1#S4.F2), top). 93% of queries are judged human-like, closely matching WildChat (96%) and Reddit (100%). 94% are judged contextually plausible within r/OpiatesRecovery, similar to Reddit (98%); WildChat is not comparable here, as it lacks OUD-specific questions. For interaction plausibility, 89% are judged plausible as chatbot-directed questions, compared to 59% for Reddit and 97% for WildChat. The lower proportion for Reddit reflects that community posts are phrased for collective audiences rather than conversational AI systems. Role faithfulness holds in 99% of cases (99% patient, 98% caregiver, 100% practitioner), comparable to Reddit (98%). Content preservation requires paired inputs and is therefore not applicable in this comparison.
Overall, CoRUS produces questions that are human-like, contextually plausible, directed towards a chatbot, and role-faithful—achieving believability on par with real-world queries. (While some differences are statistically significant (Figure [2](https://arxiv.org/html/2510.16829v1#S4.F2), top), effect sizes are small to moderate (0.2–0.4), except for Reddit’s interaction plausibility (0.7); details are in Appendix [E](https://arxiv.org/html/2510.16829v1#A5).)

#### Comparison with Prompting Baselines.

CoRUS outperforms the prompting baselines (Figure [2](https://arxiv.org/html/2510.16829v1#S4.F2), bottom). 93% of CoRUS-generated queries are judged human-like, compared to 27%–67% for baselines. Contextual plausibility also increases to 95%, compared to 68%–90%. For interaction plausibility, 89% of CoRUS queries are judged plausible, lower than R (99%) and RG (96%), but higher than the more complex baselines (76%–78%). The higher proportions for R and RG reflect overly direct phrasing that looks plausible but lacks naturalness and community grounding. In contrast, CoRUS balances human-likeness, contextual plausibility, and interaction plausibility. Role faithfulness remains high across all methods (~99%), but 18%–72% of baseline queries rely on explicit role mentions (such as “I am a patient …”), while CoRUS achieves role faithfulness without such mentions (Table [13](https://arxiv.org/html/2510.16829v1#A3.T13)). Content preservation is also high, with 96% of CoRUS-generated questions retaining the intent of the source question while naturally adjusting framing and style. Overall, CoRUS produces more human-like and contextually plausible queries than baselines, while maintaining high interaction plausibility, role faithfulness, and content preservation.

5 Application: Role-based Evaluations
-------------------------------------

We demonstrate the utility of CoRUS by using the simulated questions to evaluate how an asker’s role-based context influences model responses.

### 5.1 Models Evaluated

We generate 15,321 role-specific questions across patients, caregivers, and practitioners, and use them to evaluate five LLMs deployed in user-facing applications: GPT-4o Hurst et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib38)), Gemini-2.5-Flash DeepMind ([2025](https://arxiv.org/html/2510.16829v1#bib.bib24)), Llama-3.1-8B, Llama-3.1-70B Grattafiori et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib32)), and Llama3-OpenBioLLM-70B, a specialized medical LLM Pal and Sankarasubbu ([2024](https://arxiv.org/html/2510.16829v1#bib.bib67)). Experimental details are in Appendix [D](https://arxiv.org/html/2510.16829v1#A4).
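The evaluation harness can be sketched as a simple loop pairing each role-agnostic query with its three role-based variants. Here `generate_response` is a hypothetical placeholder for each provider’s API, and the model identifiers and record layout are assumptions for illustration, not the paper’s implementation.

```python
# Model names and record layout are illustrative assumptions.
MODELS = ["gpt-4o", "gemini-2.5-flash", "llama-3.1-8b", "llama-3.1-70b", "openbiollm-70b"]
ROLES = ["role_agnostic", "patient", "caregiver", "practitioner"]

def generate_response(model_name: str, question: str) -> str:
    """Hypothetical helper: call the given model through its provider API."""
    raise NotImplementedError

def collect_responses(questions: dict[str, dict[str, str]]) -> list[dict]:
    """questions maps a question id to its role-agnostic and role-based variants."""
    records = []
    for qid, variants in questions.items():
        for role in ROLES:
            for model in MODELS:
                records.append({
                    "question_id": qid,
                    "role": role,
                    "model": model,
                    "response": generate_response(model, variants[role]),
                })
    return records
```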
+ +### 5.2 Response Evaluation Metrics + +Grounded in users’ goals from our taxonomy (Table [1](https://arxiv.org/html/2510.16829v1#S3.T1 "Table 1 ‣ 3 Taxonomy of Information-seeking Roles ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), sociology and health communication literature, we evaluate responses along the following: + +* •Knowledge Fiske et al. ([2007](https://arxiv.org/html/2510.16829v1#bib.bib26)); Levin and Cross ([2004](https://arxiv.org/html/2510.16829v1#bib.bib48)): Presence of guidance-oriented content, indicating the amount of information conveyed. Since people seeking OUD recovery information often lack access to medical guidance, information provision is critical Eslami-Jahromi et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib25)); Basak et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib6)); Cernasev et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib12)). +* •Support Baumeister and Leary ([2017](https://arxiv.org/html/2510.16829v1#bib.bib8)); Fiske et al. ([2007](https://arxiv.org/html/2510.16829v1#bib.bib26)); Vaux ([1988](https://arxiv.org/html/2510.16829v1#bib.bib92)): Expressions of empathy, reassurance or emotional aid. Supportive communication is especially important in OUD recovery, as stigma discourages seeking information Cernasev et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib12)). +* •Readability: Ease with which responses can be understood. Low health literacy is correlated with worse health outcomes Wolf et al. ([2005](https://arxiv.org/html/2510.16829v1#bib.bib99)), and inaccessible language can further exacerbate barriers to care in OUD contexts Cernasev et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib12)). We measure readability using the Flesch Reading Ease score Flesch ([1948](https://arxiv.org/html/2510.16829v1#bib.bib27)), normalized to lie in 0−1 0{-}1. + +We adopt the classifiers from Choi et al. ([2020](https://arxiv.org/html/2510.16829v1#bib.bib19)) to measure knowledge and support (scored between 0 and 1 1), validating them for our setting through 160 160 human annotations (Cohen’s κ\kappa 0.7 0.7, accuracy 0.9 0.9; details are in Appendix [D](https://arxiv.org/html/2510.16829v1#A4 "Appendix D Experimental Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). + +### 5.3 How does the asker’s role-based context shape LLM responses? + +![Image 4: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/abstention.png) + +Figure 4: Abstention. Proportion of role-agnostic and role-based queries that models refuse to answer. + +We show how role-based context shifts responses across knowledge, support, and readability (§[5.2](https://arxiv.org/html/2510.16829v1#S5.SS2 "5.2 Response Evaluation Metrics ‣ 5 Application: Role-based Evaluations ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), reporting percentage changes relative to role-agnostic queries (Figure [3](https://arxiv.org/html/2510.16829v1#S4.F3 "Figure 3 ‣ 4.2 Results ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). We also analyze cases where queries are refused (Figure [4](https://arxiv.org/html/2510.16829v1#S5.F4 "Figure 4 ‣ 5.3 How does the asker’s role-based context shape LLM responses? ‣ 5 Application: Role-based Evaluations ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). + +#### Patient. 
### 5.3 How does the asker’s role-based context shape LLM responses?

![Image 4: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/abstention.png)

Figure 4: Abstention. Proportion of role-agnostic and role-based queries that models refuse to answer.

We show how role-based context shifts responses across knowledge, support, and readability (§[5.2](https://arxiv.org/html/2510.16829v1#S5.SS2)), reporting percentage changes relative to role-agnostic queries (Figure [3](https://arxiv.org/html/2510.16829v1#S4.F3)). We also analyze cases where queries are refused (Figure [4](https://arxiv.org/html/2510.16829v1#S5.F4)).

#### Patient.

Embedding patient context consistently reduces knowledge, with an average drop of 15% across models, largest in Llama-3.1-8B (23%) and smallest in Gemini-2.5-Flash (7%). In contrast, support increases by 15% on average, from 6% in Llama-3.1-70B to 22% in OpenBioLLM-70B. Readability improves substantially, by 27% to 43%, highest in Gemini-2.5-Flash.

#### Caregiver.

For caregiver context, models again trade off knowledge for support and readability. Knowledge scores show the largest drop across roles (23% on average), led by OpenBioLLM-70B (26%) and GPT-4o (27%). In contrast, support is highest for this role, with a 19% increase on average and the largest increase in OpenBioLLM-70B (25%). Readability also improves by 24% on average, with the largest gains in Gemini-2.5-Flash (34%) and GPT-4o (29%).

#### Practitioner.

Practitioner context shows the opposite trend. There is no significant change in knowledge (1% on average), except for Llama-3.1-8B, which shows an increase of 5%. Support decreases across all models by 9% on average, with Llama-3.1-70B showing the largest drop (14%). Readability also decreases, with an average drop of 15%, again most pronounced in Llama-3.1-70B (22%). Scores of responses to questions with practitioner context embedded in them are often comparable to role-agnostic queries, suggesting that explicit expert framing does not necessarily elicit higher knowledge content.

#### Abstentions.

We also examine model abstentions (Figure [4](https://arxiv.org/html/2510.16829v1#S5.F4); method details in Appendix [D](https://arxiv.org/html/2510.16829v1#A4)): while 8% of role-agnostic questions are refused, rates vary with role-based context—15% for patients, 9% for caregivers, and 5% for practitioners. Llama-3.1-8B abstains most (38% of patient queries), whereas GPT-4o and OpenBioLLM-70B abstain rarely (1–3%). This suggests that vulnerability cues in role-based queries may trigger conservative guardrails against sensitive queries (see Table [16](https://arxiv.org/html/2510.16829v1#A4.T16) for refused queries).

6 Discussion and Future Work
----------------------------

#### Applications of the Taxonomy of Information-seeking Roles.

Our taxonomy, grounded in social science theories Biddle ([1979](https://arxiv.org/html/2510.16829v1#bib.bib9)); Turner ([1990](https://arxiv.org/html/2510.16829v1#bib.bib90)); Yang et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib103)), captures how roles shape question-asking in OUD recovery. Through this taxonomy, we hope to inspire directions for future work.

_Ecologically valid evaluations_: By identifying who asks (roles), how they frame questions, and what they need, role taxonomies can enable evaluations that account for underrepresented roles and support role-specific metrics.
CoRUS offers a starting point; future work can extend taxonomy construction and role-framed question simulation to other domains where role-specific framing matters, such as personal advice Cheng et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib18)), mental health Rousmaniere et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib74)), and maternal health Antoniak et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib2)). Developing such extensions is essential, as people increasingly turn to conversational AI for sensitive needs Chatterji et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib15)); Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)).

_User modeling_: We demonstrate the use of the taxonomy to simulate different users’ questions. Future work can broaden this to user simulation, while (1) _avoiding caricatures_ by grounding in behaviors rather than demographics or identities Cheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib17)); Wang et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib95)), and (2) _refraining from using real-user data_, which raises privacy and ethical concerns.

_Assessing LLM responses_: Prior work shows that giving evaluators the context behind under-specified queries improves their assessment of response quality Malaviya et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib54)). Future work can similarly explore the use of role-based taxonomies for providing user-specific context.

#### Alignment with User Needs.

Sharing personal context helps tailor responses but trades privacy for utility Zhang et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib105)); Guo et al. ([2012](https://arxiv.org/html/2510.16829v1#bib.bib34)). In OUD recovery, this trade-off is not always justified: role-based context can suppress knowledge for certain roles, amplify support, and increase refusals, potentially exacerbating information gaps in stigmatized domains. Future work could build models that balance support and knowledge, study role-dependent refusals, and capture long-tail, underrepresented users’ needs beyond this work. Crucially, post-training and evaluations must involve real users, not just to see which response is preferred, but by whom.

#### Downstream Model Selection.

We show that models balance providing information, providing support, and refusing the query differently by asker role (§[5.3](https://arxiv.org/html/2510.16829v1#S5.SS3)). For instance, caregiver-framed queries to Gemini-2.5-Flash receive stronger support and higher readability with smaller knowledge drops than other models, but also face 11% refusals. These differences do not imply a single best model, but instead highlight variations that can inform model selection in downstream applications. In the absence of such nuanced role-based evaluations, users are left to run ad-hoc tests to identify suitable models Liu ([2025](https://arxiv.org/html/2510.16829v1#bib.bib53)). CoRUS provides a blueprint for such evaluations, guiding model selection for different user contexts.

#### Design Implications for Conversational AI.

We analyze the correlation between queries and model responses, finding that supportive cues in queries moderately correlate with supportive responses (Pearson’s r = 0.3). Such mirroring Jain et al.
([2025](https://arxiv.org/html/2510.16829v1#bib.bib39)) can aid personalization, trust, and emotional support Sun and Wang ([2025](https://arxiv.org/html/2510.16829v1#bib.bib87)), but can also reinforce (potentially harmful) beliefs Jones et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib42)); Sharma et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib82)), and lead to over-reliance Jones et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib42)); Cheng et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib18)). Designing for this trade-off requires dynamically adjusting mirroring across roles, contexts and topics to provide support without compromising the utility and needs of particular roles, especially in sensitive domains. Beyond query framing, conversational AI systems may also mirror users through stored ‘memories’ i.e. preferences, interests or personal details from past conversations Siliski ([2025](https://arxiv.org/html/2510.16829v1#bib.bib83)); OpenAI ([2024](https://arxiv.org/html/2510.16829v1#bib.bib65)). Future work is needed to understand which details are stored as ‘memories’, how they form role-like representations of users, and how these representations shape responses. + +7 Related Work +-------------- + +#### Simulating Users. + +Simulated users are used to test LLM behaviors, from generative agents in sandbox environments Park et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib69)) and online community personas Huh et al. ([2016](https://arxiv.org/html/2510.16829v1#bib.bib37)) to role-playing benchmarks Wang et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib97)); Shao et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib81)); Salemi et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib77)); Dammu et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib22)) and therapy-patient simulators Wang et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib96)). Existing approaches have two limitations, especially pronounced in stigmatized domains: (1) demographic or psychometric-based simulations Safdari et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib76)); Huang et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib36)); Chawla et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib16)) risk stereotyping Cheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib17)); Wang et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib95)), and (2) using real user data to reconstruct individuals Wang et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib97)); Li et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib49)); Gao et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib29)); Park et al. ([2024b](https://arxiv.org/html/2510.16829v1#bib.bib70)) has privacy concerns Staab et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib86)); Kim et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib45)). CoRUS addresses these issues by shifting to _roles_, centered on behaviors and topics rather than demographics or individuals, balancing believability with privacy and ethical concerns. + +#### User-centric Evaluation. + +Roles shape how users ask questions and what support they seek (§[2](https://arxiv.org/html/2510.16829v1#S2 "2 Why Role-based Evaluations? ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). Prior work on personas and user simulations has explored personalization Nolte et al. ([2022](https://arxiv.org/html/2510.16829v1#bib.bib62)); Xiang et al. 
([2024](https://arxiv.org/html/2510.16829v1#bib.bib102)); Davidson et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib23)), but rarely tests whether LLMs adapt to role-specific needs. While human–AI logs show that users disclose personal and sensitive context Zhao et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib106)); Mireshghallah et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib59)), health-related LLM evaluations focus mainly on factuality Kaur et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib44)); Giorgi et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib30)). Responding to calls for more contextual realism in AI evaluation Liao and Xiao ([2023](https://arxiv.org/html/2510.16829v1#bib.bib51)), we foreground roles by generating role-specific queries that capture differences in goals, behaviors, and experiences, enabling systematic evaluation of how conversational AI shifts responses across roles.

8 Conclusion
------------

We presented CoRUS, a framework for embedding social context into queries to foreground who is asking. Applied to the OUD recovery community, we showed that LLM responses shift systematically by asker role, with supportive responses often lacking knowledge content for vulnerable roles. Our findings highlight the limits of role-agnostic evaluation, and the need for ecologically valid, socially grounded methods. CoRUS provides a blueprint for such methods, extendable to other domains.

Limitations
-----------

#### Context Scope.

Our taxonomy and dataset are derived from a single online community (r/OpiatesRecovery), which provides diverse questions and social contexts, but we acknowledge that it may not capture the full range of roles in real-world human–AI conversations. The taxonomy in Table [1](https://arxiv.org/html/2510.16829v1#S3.T1) therefore may not be exhaustive, as additional or intersectional roles (such as caregiver–community participant) are likely. Although we focus on OUD recovery, our approach of constructing role taxonomies and socially framed queries systematically from community-derived roles is potentially reusable across other domains, including online communities on personal advice, mental health, or sexual and reproductive health.

#### Reliance on Automatic Evaluators.

We rely on LLM-based evaluators to assess the quality of simulated queries, which improves efficiency and reduces annotation cost given the dataset size. To ensure reliability, we validated them against human annotations. Because provider-side updates and the stochastic nature of LLMs can affect results, LLM-judge annotations should be interpreted as indicative of broader trends rather than exact measurements. To mitigate variance, we use large models as judges, provide detailed task descriptions rather than brief instructions, avoid claims when p-values are near the 0.05 threshold, and report error bars and statistical significance throughout, following recommendations by Baumann et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib7)).

#### Evaluation Scope.

We analyze the amount of knowledge content in LLM responses but not their factual accuracy. Most health-domain LLM evaluations focus on factuality Singhal et al. ([2022](https://arxiv.org/html/2510.16829v1#bib.bib84)); Kaur et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib44)); Nori et al.
([2023](https://arxiv.org/html/2510.16829v1#bib.bib63)); Giorgi et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib30)); Liévin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib52)), but not role-based variation. We therefore prioritize role-based needs here, and leave factuality, particularly challenging in OUD recovery Jung et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib43)), for future work. + +#### User Context Beyond Roles. + +Our analysis is limited to single-turn interactions where all context is provided in the initial query. Future work should extend to multi-turn and temporally evolving settings, where social context accumulates across turns and roles can shift over time (e.g., a user who was once a patient may later become a caregiver). While we assume roles are implicitly embedded in queries through users’ goals, behaviors, and experiences, this is only one way to model social context, and query structures or disclosure patterns may vary across users and topics in practice. For instance, roles may also be signaled through users’ “memories” (details stored from past conversations, such as preferences, interests, or instructions) Siliski ([2025](https://arxiv.org/html/2510.16829v1#bib.bib83)); OpenAI ([2024](https://arxiv.org/html/2510.16829v1#bib.bib65)). Future work is needed to understand how such signals of social context shape model behavior. + +Ethical Considerations +---------------------- + +#### Privacy Concerns. + +We de-identified all data and constructed the taxonomy from privacy-preserving summaries Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)) rather than original posts. The released dataset will exclude original text or questions, and any quotes referenced in this work are paraphrased to minimize re-identification risks. As our study relied on existing public data without direct interaction with original posters, it was deemed exempt from Institutional Review Board (IRB) approval at the authors’ institution. + +#### LLM-based simulations. + +LLM-based simulations pose ethical concerns, including impersonation, misrepresentation, or inappropriate use. In this work, they are used solely to generate queries for evaluation in contexts where collecting real-world data is infeasible, not as substitutes for human participants, and without using personally identifiable information or verbatim Reddit posts. Following recommendations from recent work Cheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib17)); Wang et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib95)), our design grounds simulations in topical specificity and information-seeking behaviors via the role taxonomy, rather than demographic or surface-level features. For a broader discussion of ethical considerations and best practices in LLM-based simulations, we refer readers to Cheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib17)). + +#### Human Annotation. + +All Prolific annotators provided informed consent and were shown disclaimers about the sensitive nature of the queries being assessed. They were compensated at an hourly rate of USD12, in line with fair-pay standards and above the U.S. federal minimum wage. + +#### Researcher Well-being. + +Conducting research on sensitive topics such as substance use recovery can be emotionally demanding. To mitigate potential strain, we took measures such as scheduling breaks during annotation, sharing workload across authors, and creating space for debriefing. 
We encourage others working in this area to likewise prioritize their well-being and seek support when needed. + +References +---------- + +* Agarwal et al. (2024) Vibhor Agarwal, Yiqiao Jin, Mohit Chandra, Munmun De Choudhury, Srijan Kumar, and Nishanth Sastry. 2024. Medhalu: Hallucinations in responses to healthcare queries by large language models. _arXiv preprint arXiv:2409.19492_. +* Antoniak et al. (2024) Maria Antoniak, Aakanksha Naik, Carla S Alvarado, Lucy Lu Wang, and Irene Y Chen. 2024. Nlp for maternal healthcare: Perspectives and guiding principles in the age of llms. In _Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency_, pages 1446–1463. +* Arditi et al. (2024) Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. 2024. Refusal in language models is mediated by a single direction. _Advances in Neural Information Processing Systems_, 37:136037–136083. +* Balog and Zhai (2024) Krisztian Balog and ChengXiang Zhai. 2024. [User simulation for evaluating information access systems](https://arxiv.org/abs/2306.08550). _Preprint_, arXiv:2306.08550. +* Balsamo et al. (2023) Duilio Balsamo, Paolo Bajardi, Gianmarco De Francisci Morales, Corrado Monti, and Rossano Schifanella. 2023. The pursuit of peer support for opioid use recovery on reddit. In _Proceedings of the international AAAI conference on web and social media_, volume 17, pages 12–23. +* Basak et al. (2025) Madhusudan Basak, Omar Sharif, Sarah E Lord, Jacob T Borodovsky, Lisa A Marsch, Sandra A Springer, Edward V Nunes, Charles D Brackett, Luke J Archibald, and Sarah M Preum. 2025. Information needs for opioid use disorder treatment using buprenorphine product: Qualitative analysis of suboxone-focused reddit data. _Journal of Medical Internet Research_, 27:e68886. +* Baumann et al. (2025) Joachim Baumann, Paul Röttger, Aleksandra Urman, Albert Wendsjö, Flor Miriam Plaza-del Arco, Johannes B Gruber, and Dirk Hovy. 2025. Large language model hacking: Quantifying the hidden risks of using llms for text annotation. _arXiv preprint arXiv:2509.08825_. +* Baumeister and Leary (2017) Roy F Baumeister and Mark R Leary. 2017. The need to belong: Desire for interpersonal attachments as a fundamental human motivation. _Interpersonal development_, pages 57–89. +* Biddle (1979) Bruce Jesse Biddle. 1979. _Role Theory: Expectations, Identities, and Behaviors_. Academic Press, New York. +* Briakou et al. (2021) Eleftheria Briakou, Sweta Agrawal, Joel Tetreault, and Marine Carpuat. 2021. Evaluating the evaluation metrics for style transfer: A case study in multilingual formality transfer. _arXiv preprint arXiv:2110.10668_. +* Centers for Disease Control and Prevention (2023) Centers for Disease Control and Prevention. 2023. [Reducing stigma to prevent opioid overdose](https://www.cdc.gov/stop-overdose/stigma-reduction/index.html). Accessed: 2024-06-15. +* Cernasev et al. (2021) Alina Cernasev, Kenneth C Hohmeier, Kelsey Frederick, Hilary Jasmin, and Justin Gatwood. 2021. A systematic literature review of patient perspectives of barriers and facilitators to access, adherence, stigma, and persistence to treatment for substance use disorder. _Exploratory research in clinical and social pharmacy_, 2:100029. +* Chan et al. (2010) Jeffrey Chan, Conor Hayes, and Elizabeth M. Daly. 2010. [Decomposing discussion forums and boards using user roles](https://api.semanticscholar.org/CorpusID:15990380). _Proceedings of the International AAAI Conference on Web and Social Media_. 
+* Chandra et al. (2024) Mohit Chandra, Siddharth Sriraman, Gaurav Verma, Harneet Singh Khanuja, Jose Suarez Campayo, Zihang Li, Michael L Birnbaum, and Munmun De Choudhury. 2024. Lived experience not found: Llms struggle to align with experts on addressing adverse drug reactions from psychiatric medication use. _arXiv preprint arXiv:2410.19155_. +* Chatterji et al. (2025) Aaron Chatterji, Thomas Cunningham, David J Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman. 2025. [How people use chatgpt](https://doi.org/10.3386/w34255). Working Paper 34255, National Bureau of Economic Research. +* Chawla et al. (2023) Kushal Chawla, Ian Wu, Yu Rong, Gale M Lucas, and Jonathan Gratch. 2023. Be selfish, but wisely: Investigating the impact of agent personality in mixed-motive human-agent interactions. _arXiv preprint arXiv:2310.14404_. +* Cheng et al. (2023) Myra Cheng, Tiziano Piccardi, and Diyi Yang. 2023. Compost: Characterizing and evaluating caricature in llm simulations. _arXiv preprint arXiv:2310.11501_. +* Cheng et al. (2025) Myra Cheng, Sunny Yu, Cinoo Lee, Pranav Khadpe, Lujain Ibrahim, and Dan Jurafsky. 2025. Social sycophancy: A broader understanding of llm sycophancy. _arXiv preprint arXiv:2505.13995_. +* Choi et al. (2020) Minje Choi, Luca Maria Aiello, Krisztián Zsolt Varga, and Daniele Quercia. 2020. Ten social dimensions of conversations and relationships. In _Proceedings of The Web Conference 2020_, pages 1514–1525. +* Choy et al. (2024) Vanessa Choy, Sara Martin, and Ashley Lumpkin. 2024. Can we rely on generative ai for healthcare information?| ipsos. +* Clarke and Braun (2017) Victoria Clarke and Virginia Braun. 2017. Thematic analysis. _The journal of positive psychology_, 12(3):297–298. +* Dammu et al. (2024) Preetam Prabhu Srikar Dammu, Hayoung Jung, Anjali Singh, Monojit Choudhury, and Tanu Mitra. 2024. [“they are uncultured”: Unveiling covert harms and social threats in LLM generated conversations](https://doi.org/10.18653/v1/2024.emnlp-main.1134). In _Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing_, pages 20339–20369, Miami, Florida, USA. Association for Computational Linguistics. +* Davidson et al. (2023) Sam Davidson, Salvatore Romeo, Raphael Shu, James Gung, Arshit Gupta, Saab Mansour, and Yi Zhang. 2023. User simulation with large language models for evaluating task-oriented dialogue. _arXiv preprint arXiv:2309.13233_. +* DeepMind (2025) Google DeepMind. 2025. [Gemini 2.5 flash](https://deepmind.google/models/gemini/flash/). Accessed: 2025-09-04. +* Eslami-Jahromi et al. (2021) Maryam Eslami-Jahromi, Sareh Keshvardoost, Roghayeh Ershad-Sarabi, and Kambiz Bahaadinbeigy. 2021. Information needs of addicted individuals: A qualitative case study. _Addiction & Health_, 13(3):138. +* Fiske et al. (2007) Susan T Fiske, Amy JC Cuddy, and Peter Glick. 2007. Universal dimensions of social cognition: Warmth and competence. _Trends in cognitive sciences_, 11(2):77–83. +* Flesch (1948) Rudolph Flesch. 1948. A new readability yardstick. _Journal of applied psychology_, 32(3):221. +* Fu et al. (2018) Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In _Proceedings of the AAAI conference on artificial intelligence_, volume 32. +* Gao et al. (2023) Jingsheng Gao, Yixin Lian, Ziyi Zhou, Yuzhuo Fu, and Baoyuan Wang. 2023. Livechat: A large-scale personalized dialogue dataset automatically constructed from live streaming. _arXiv preprint arXiv:2306.08401_. 
+* Giorgi et al. (2024) Salvatore Giorgi, Kelsey Isman, Tingting Liu, Zachary Fried, João Sedoc, and Brenda Curtis. 2024. Evaluating generative ai responses to real-world drug-related questions. _Psychiatry Research_, 339:116058. +* Goffman (1949) Erving Goffman. 1949. Presentation of self in everyday life. _American Journal of Sociology_, 55(1):6–7. +* Grattafiori et al. (2024) Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The llama 3 herd of models. _arXiv preprint arXiv:2407.21783_. +* Günther and Hagen (2021) Sebastian Günther and Matthias Hagen. 2021. Assessing query suggestions for search session simulation. In _Causality in Search and Recommendation (CSR) and Simulation of Information Retrieval Evaluation (Sim4IR) workshops at SIGIR 2021 (CEUR Workshop Proceedings, Vol. 2911). CEUR-WS. org_. +* Guo et al. (2012) Xitong Guo, Yongqiang Sun, Ziyu Yan, and Nan Wang. 2012. [Privacy-personalization paradox in adoption of mobile health service: The mediating role of trust](https://api.semanticscholar.org/CorpusID:268385996). In _Pacific Asia Conference on Information Systems_. +* Henriksson et al. (2024) Erik Henriksson, Amanda Myntti, Saara Hellström, Selcen Erten-Johansson, Anni Eskelinen, Liina Repo, and Veronika Laippala. 2024. From discrete to continuous classes: A situational analysis of multilingual web registers with llm annotations. In _Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities_, pages 308–318. +* Huang et al. (2023) Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. 2023. Who is chatgpt? benchmarking llms’ psychological portrayal using psychobench. _arXiv preprint arXiv:2310.01386_. +* Huh et al. (2016) Jina Huh, Bum Chul Kwon, Sung-Hee Kim, Sukwon Lee, Jaegul Choo, Jihoon Kim, Min-Je Choi, and Ji Soo Yi. 2016. Personas in online health communities. _Journal of biomedical informatics_, 63:212–225. +* Hurst et al. (2024) Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. _arXiv preprint arXiv:2410.21276_. +* Jain et al. (2025) Shomik Jain, Charlotte Park, Matheus Mesquita Viana, Ashia Wilson, and Dana Calacci. 2025. Extended ai interactions shape sycophancy and perspective mimesis. _arXiv preprint arXiv:2509.12517_. +* Jakesch et al. (2023) Maurice Jakesch, Jeffrey T Hancock, and Mor Naaman. 2023. Human heuristics for ai-generated language are flawed. _Proceedings of the National Academy of Sciences_, 120(11):e2208839120. +* Ji et al. (2025) Ke Ji, Yixin Lian, Linxu Li, Jingsheng Gao, Weiyuan Li, and Bin Dai. 2025. Enhancing persona consistency for llms’ role-playing using persona-aware contrastive learning. _arXiv preprint arXiv:2503.17662_. +* Jones et al. (2025) Mirabelle Jones, Nastasia Griffioen, Christina Neumayer, and Irina Shklovski. 2025. Artificial intimacy: Exploring normativity and personalization through fine-tuning llm chatbots. In _Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems_, pages 1–16. +* Jung et al. (2025) Hayoung Jung, Shravika Mittal, Ananya Aatreya, Navreet Kaur, Munmun De Choudhury, and Tanushree Mitra. 2025. Mythtriage: Scalable detection of opioid use disorder myths on a video-sharing platform. _arXiv preprint arXiv:2506.00308_. +* Kaur et al. 
(2024) Navreet Kaur, Monojit Choudhury, and Danish Pruthi. 2024. [Evaluating large language models for health-related queries with presuppositions](https://doi.org/10.18653/v1/2024.findings-acl.850). In _Findings of the Association for Computational Linguistics: ACL 2024_, pages 14308–14331, Bangkok, Thailand. Association for Computational Linguistics. +* Kim et al. (2023) Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, and Seong Joon Oh. 2023. Propile: Probing privacy leakage in large language models. _Advances in Neural Information Processing Systems_, 36:20750–20762. +* Kumar et al. (2024) Sachin Kumar, Chan Young Park, Yulia Tsvetkov, Noah A. Smith, and Hannaneh Hajishirzi. 2024. [ComPO: Community Preferences for Language Model Personalization](https://doi.org/10.48550/arXiv.2410.16027). _arXiv preprint_. ArXiv:2410.16027 [cs]. +* Lam et al. (2024) Michelle S Lam, Janice Teoh, James A Landay, Jeffrey Heer, and Michael S Bernstein. 2024. Concept induction: Analyzing unstructured text with high-level concepts using lloom. In _Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems_, pages 1–28. +* Levin and Cross (2004) Daniel Z Levin and Rob Cross. 2004. The strength of weak ties you can trust: The mediating role of trust in effective knowledge transfer. _Management science_, 50(11):1477–1490. +* Li et al. (2023) Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi Mi, Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, et al. 2023. Chatharuhi: Reviving anime character in reality via large language model. _arXiv preprint arXiv:2308.09597_. +* Li et al. (2017) Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. _arXiv preprint arXiv:1701.06547_. +* Liao and Xiao (2023) Q Vera Liao and Ziang Xiao. 2023. [Rethinking model evaluation as narrowing the socio-technical gap](https://arxiv.org/abs/2306.03100). _arXiv preprint arXiv:2306.03100_. +* Liévin et al. (2024) Valentin Liévin, Christoffer Egeberg Hother, Andreas Geert Motzfeldt, and Ole Winther. 2024. Can large language models reason about medical questions? _Patterns_, 5(3). +* Liu (2025) Nelson F Liu. 2025. _Understanding and Improving Ecological Validity in Natural Language Processing Evaluations_. Stanford University. +* Malaviya et al. (2024) Chaitanya Malaviya, Joseph Chee Chang, Dan Roth, Mohit Iyyer, Mark Yatskar, and Kyle Lo. 2024. Contextualized evaluations: Judging language model responses to underspecified queries. _arXiv preprint arXiv:2411.07237_. +* Mardani et al. (2023) Mostafa Mardani, Fardin Alipour, Hassan Rafiey, Masoud Fallahi-Khoshknab, and Maliheh Arshi. 2023. Challenges in addiction-affected families: a systematic review of qualitative studies. _BMC psychiatry_, 23(1):439. +* McCain et al. (2025) Miles McCain, Ryn Linthicum, Chloe Lubinski, Alex Tamkin, Saffron Huang, Michael Stern, Kunal Handa, Esin Durmus, Tyler Neylon, Stuart Ritchie, Kamya Jagadish, Paruul Maheshwary, Sarah Heck, Alexandra Sanderford, and Deep Ganguli. 2025. [How people use claude for support, advice, and companionship](https://www.anthropic.com/news/how-people-use-claude-for-support-advice-and-companionship). +* McInnes et al. (2018) Leland McInnes, John Healy, and James Melville. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. _arXiv preprint arXiv:1802.03426_. +* Mir et al. (2019) Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. 
Evaluating style transfer for text. _arXiv preprint arXiv:1904.02295_. +* Mireshghallah et al. (2024) Niloofar Mireshghallah, Maria Antoniak, Yash More, Yejin Choi, and Golnoosh Farnadi. 2024. Trust no bot: Discovering personal disclosures in human-llm conversations in the wild. _arXiv preprint arXiv:2407.11438_. +* Mittal et al. (2025) Shravika Mittal, Hayoung Jung, Mai ElSherief, Tanushree Mitra, and Munmun De Choudhury. 2025. Online myths on opioid use disorder: A comparison of reddit and large language model. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 19, pages 1224–1245. +* NIDA (2023) NIDA. 2023. Drug overdose death rates. [https://nida.nih.gov/research-topics/trends-statistics/overdose-death-rates](https://nida.nih.gov/research-topics/trends-statistics/overdose-death-rates). +* Nolte et al. (2022) Amelie Nolte, Karolin Lueneburg, Dieter P Wallach, and Nicole Jochems. 2022. Creating personas for signing user populations: An ability-based approach to user modelling in hci. In _Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility_, pages 1–6. +* Nori et al. (2023) Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of gpt-4 on medical challenge problems. _arXiv preprint arXiv:2303.13375_. +* Ohyama (2021) Tetsuji Ohyama. 2021. Statistical inference of gwet’s ac1 coefficient for multiple raters and binary outcomes. _Communications in Statistics-Theory and Methods_, 50(15):3564–3572. +* OpenAI (2024) OpenAI. 2024. [Memory and new controls for chatgpt](https://openai.com/index/memory-and-new-controls-for-chatgpt/). Accessed: 2025-08-20. +* Owoicho et al. (2023) Paul Owoicho, Ivan Sekulic, Mohammad Aliannejadi, Jeffrey Dalton, and Fabio Crestani. 2023. Exploiting simulated user feedback for conversational search: Ranking, rewriting, and beyond. In _Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval_, pages 632–642. +* Pal and Sankarasubbu (2024) Ankit Pal and Malaikannan Sankarasubbu. 2024. Openbiollms: Advancing open-source large language models for healthcare and life sciences. +* Park et al. (2024a) Chan Young Park, Shuyue Stella Li, Hayoung Jung, Svitlana Volkova, Tanushree Mitra, David Jurgens, and Yulia Tsvetkov. 2024a. Valuescope: Unveiling implicit norms and values via return potential model of social interactions. _arXiv preprint arXiv:2407.02472_. +* Park et al. (2023) Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In _Proceedings of the 36th annual acm symposium on user interface software and technology_, pages 1–22. +* Park et al. (2024b) Joon Sung Park, Carolyn Q Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, and Michael S Bernstein. 2024b. Generative agent simulations of 1,000 people. _arXiv preprint arXiv:2411.10109_. +* Peng and Shang (2024) Letian Peng and Jingbo Shang. 2024. Quantifying and optimizing global faithfulness in persona-driven role-playing. _arXiv preprint arXiv:2405.07726_. +* Rao et al. (2024) Varun Nagaraj Rao, Eesha Agarwal, Samantha Dalal, Dan Calacci, and Andrés Monroy-Hernández. 2024. Quallm: An llm-based framework to extract quantitative insights from online forums. _arXiv preprint arXiv:2405.05345_. +* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. 
Sentence-bert: Sentence embeddings using siamese bert-networks. _arXiv preprint arXiv:1908.10084_.
+* Rousmaniere et al. (2025) Tony Rousmaniere, Xu Li, Yimeng Zhang, and Siddharth Shah. 2025. Large language models as mental health resources: Patterns of use in the united states.
+* Rousseeuw (1987) Peter J Rousseeuw. 1987. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. _Journal of computational and applied mathematics_, 20:53–65.
+* Safdari et al. (2023) Mustafa Safdari, Gregory Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić. 2023. [Personality traits in large language models](https://api.semanticscholar.org/CorpusID:259317218). _ArXiv_, abs/2307.00184.
+* Salemi et al. (2023) Alireza Salemi, Sheshera Mysore, Michael Bendersky, and Hamed Zamani. 2023. Lamp: When large language models meet personalization. _arXiv preprint arXiv:2304.11406_.
+* Saxena and Reddy (2022) Akrati Saxena and Harita Reddy. 2022. Users roles identification on online crowdsourced q&a platforms and encyclopedias: a survey. _Journal of Computational Social Science_, 5(1):285–317.
+* Sekulić et al. (2022) Ivan Sekulić, Mohammad Aliannejadi, and Fabio Crestani. 2022. Evaluating mixed-initiative conversational search systems via user simulation. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_, pages 888–896.
+* Shahapure and Nicholas (2020) Ketan Rajshekhar Shahapure and Charles Nicholas. 2020. Cluster quality analysis using silhouette score. In _2020 IEEE 7th international conference on data science and advanced analytics (DSAA)_, pages 747–748. IEEE.
+* Shao et al. (2023) Yunfan Shao, Linyang Li, Junqi Dai, and Xipeng Qiu. 2023. Character-llm: A trainable agent for role-playing. _arXiv preprint arXiv:2310.10158_.
+* Sharma et al. (2024) Nikhil Sharma, Q Vera Liao, and Ziang Xiao. 2024. Generative echo chamber? effect of llm-powered search systems on diverse information seeking. In _Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems_, pages 1–17.
+* Siliski (2025) Michael Siliski. 2025. [Gemini adds temporary chats and new personalization features](https://blog.google/products/gemini/temporary-chats-privacy-controls/). Accessed: 2025-08-20.
+* Singhal et al. (2022) Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2022. Large language models encode clinical knowledge. _arXiv preprint arXiv:2212.13138_.
+* Song et al. (2020) Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. [Mpnet: Masked and permuted pre-training for language understanding](https://proceedings.neurips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-Paper.pdf). In _Advances in Neural Information Processing Systems_, volume 33, pages 16857–16867. Curran Associates, Inc.
+* Staab et al. (2023) Robin Staab, Mark Vero, Mislav Balunović, and Martin Vechev. 2023. Beyond memorization: Violating privacy via inference with large language models. _arXiv preprint arXiv:2310.07298_.
+* Sun and Wang (2025) Yuan Sun and Ting Wang. 2025. Be friendly, not friends: How llm sycophancy shapes user trust. _arXiv preprint arXiv:2502.10844_.
+* Tamkin et al. (2024) Alex Tamkin, Miles McCain, Kunal Handa, Esin Durmus, Liane Lovitt, Ankur Rathi, Saffron Huang, Alfred Mountfield, Jerry Hong, Stuart Ritchie, et al. 2024.
Clio: Privacy-preserving insights into real-world ai use. _arXiv preprint arXiv:2412.13678_.
+* Tseng et al. (2024) Yu-Min Tseng, Yu-Chao Huang, Teng-Yun Hsiao, Wei-Lin Chen, Chao-Wei Huang, Yu Meng, and Yun-Nung Chen. 2024. [Two tales of persona in LLMs: A survey of role-playing and personalization](https://doi.org/10.18653/v1/2024.findings-emnlp.969). In _Findings of the Association for Computational Linguistics: EMNLP 2024_, pages 16612–16631, Miami, Florida, USA. Association for Computational Linguistics.
+* Turner (1990) Ralph H Turner. 1990. Role change. _Annual review of Sociology_, 16(1):87–110.
+* Vasisht et al. (2024) Kinshuk Vasisht, Navreet Kaur, and Danish Pruthi. 2024. Knowledge graph guided evaluation of abstention techniques. _arXiv preprint arXiv:2412.07430_.
+* Vaux (1988) Alan Vaux. 1988. _Social support: Theory, research, and intervention._ Praeger publishers.
+* Veselovsky et al. (2023) Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin Josifoski, Ashton Anderson, and Robert West. 2023. Generating faithful synthetic data with large language models: A case study in computational social science. _arXiv preprint arXiv:2305.15041_.
+* Wan et al. (2024) Mengting Wan, Tara Safavi, Sujay Kumar Jauhar, Yujin Kim, Scott Counts, Jennifer Neville, Siddharth Suri, Chirag Shah, Ryen W White, Longqi Yang, et al. 2024. Tnt-llm: Text mining at scale with large language models. In _Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, pages 5836–5847.
+* Wang et al. (2025) Angelina Wang, Jamie Morgenstern, and John P Dickerson. 2025. Large language models that replace human participants can harmfully misportray and flatten identity groups. _Nature Machine Intelligence_, pages 1–12.
+* Wang et al. (2024) Ruiyi Wang, Stephanie Milani, Jamie C Chiu, Jiayin Zhi, Shaun M Eack, Travis Labrum, Samuel M Murphy, Nev Jones, Kate Hardy, Hong Shen, et al. 2024. Patient-Ψ: Using large language models to simulate patients for training mental health professionals. _arXiv preprint arXiv:2405.19660_.
+* Wang et al. (2023) Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Jian Yang, et al. 2023. Rolellm: Benchmarking, eliciting, and enhancing role-playing abilities of large language models. _arXiv preprint arXiv:2310.00746_.
+* Wataoka et al. (2024) Koki Wataoka, Tsubasa Takahashi, and Ryokan Ri. 2024. Self-preference bias in llm-as-a-judge. _arXiv preprint arXiv:2410.21819_.
+* Wolf et al. (2005) Michael S Wolf, Julie A Gazmararian, and David W Baker. 2005. Health literacy and functional health status among older adults. _Archives of internal medicine_, 165(17):1946–1952.
+* Wongpakaran et al. (2013) Nahathai Wongpakaran, Tinakon Wongpakaran, Danny Wedding, and Kilem L Gwet. 2013. A comparison of cohen’s kappa and gwet’s ac1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples. _BMC medical research methodology_, 13(1):61.
+* Woo et al. (2017) Julia Woo, Anuja Bhalerao, Monica Bawor, Meha Bhatt, Brittany Dennis, Natalia Mouravska, Laura Zielinski, and Zainab Samaan. 2017. “don’t judge a book by its cover”: A qualitative study of methadone patients’ experiences of stigma. _Substance abuse: research and treatment_, 11:1178221816685087.
+* Xiang et al. (2024) Wei Xiang, Hanfei Zhu, Suqi Lou, Xinli Chen, Zhenghua Pan, Yuping Jin, Shi Chen, and Lingyun Sun. 2024.
Simuser: Generating usability feedback by simulating various users interacting with mobile applications. In _Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems_, pages 1–17.
+* Yang et al. (2019) Diyi Yang, Robert E Kraut, Tenbroeck Smith, Elijah Mayfield, and Dan Jurafsky. 2019. Seekers, providers, welcomers, and storytellers: Modeling social roles in online health communities. In _Proceedings of the 2019 CHI conference on human factors in computing systems_, pages 1–14.
+* Zhang et al. (2022) Shuo Zhang, Mu-Chun Wang, and Krisztian Balog. 2022. Analyzing and simulating user utterance reformulation in conversational recommender systems. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_, pages 133–143.
+* Zhang et al. (2024) Zhiping Zhang, Michelle Jia, Hao-Ping Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, and Tianshi Li. 2024. “it’s a fair game”, or is it? examining how users navigate disclosure risks and benefits when using llm-based conversational agents. In _Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems_, pages 1–26.
+* Zhao et al. (2024) Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. 2024. [Wildchat: 1m chatgpt interaction logs in the wild](https://arxiv.org/abs/2405.01470). _arXiv preprint arXiv:2405.01470_.
+* Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. _Advances in neural information processing systems_, 36:46595–46623.
+* Zhou et al. (2021) Ke Zhou, Marios Constantinides, Luca Maria Aiello, Sagar Joglekar, and Daniele Quercia. 2021. The role of different types of conversations for meeting success. _IEEE Pervasive Computing_, 20(4):35–42.
+* Zhou et al. (2024) Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, and Maarten Sap. 2024. [SOTOPIA: Interactive evaluation for social intelligence in language agents](https://openreview.net/forum?id=mM7VurbA4r). In _The Twelfth International Conference on Learning Representations_. +

Appendix A Taxonomy Construction Details
----------------------------------------

#### Data

We collect posts between January 1, 2023, and December 31, 2024, from r/OpiatesRecovery, a Reddit community “dedicated to helping each other stop and stay stopped” from opioid use. We filter out deleted or title-only posts, resulting in 10,017 posts, which we use as the target dataset for deriving the taxonomy.

#### Method

We follow the steps below for constructing the taxonomy:

(1) Facet-based summarization: For each post, we summarize its goals, behaviors, and experiences, using the prompt described below (the facet-specific instructions and examples vary by facet). We include few-shot examples, tailored to each facet, drawn from our manual annotations after qualitatively coding a small set of posts.

(2) Clustering: The facet-based summaries are then embedded using a sentence transformer (all-mpnet-base-v2) Reimers and Gurevych ([2019](https://arxiv.org/html/2510.16829v1#bib.bib73)) and clustered using k-means.
To select the number of clusters, we used Silhouette analysis Rousseeuw ([1987](https://arxiv.org/html/2510.16829v1#bib.bib75)); Shahapure and Nicholas ([2020](https://arxiv.org/html/2510.16829v1#bib.bib80)), which evaluates clustering quality by comparing how well points fit within their own cluster relative to the closest points in other clusters. The silhouette score lies between −1 and 1, with higher values indicating better cluster quality. We measured silhouette scores for k = 3 to k = 10, and found that four clusters gave the highest score of 0.05 (see Figure [5](https://arxiv.org/html/2510.16829v1#A1.F5 "Figure 5 ‣ Method ‣ Appendix A Taxonomy Constructions Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). We note that while this may be a low score, such scores are common in noisy and overlapping text data Henriksson et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib35)). To validate coherence, we visualized the embeddings using UMAP McInnes et al. ([2018](https://arxiv.org/html/2510.16829v1#bib.bib57)) (n_neighbors = 15, min_dist = 0, cosine metric) (Figure [6](https://arxiv.org/html/2510.16829v1#A1.F6 "Figure 6 ‣ Method ‣ Appendix A Taxonomy Constructions Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")) and found the clusters to be interpretable and meaningful during the labeling process (described in the next step).

(3) Cluster Labelling: Clusters are then labeled with LLM-generated labels and descriptions based on sample posts, and refined through human annotation. To interpret each cluster, we sample 50 posts within and 50 nearest-neighbors outside each cluster. Using Claude 3.7 Sonnet (claude-3-7-sonnet-20250219), we generate preliminary cluster descriptions based on these sampled posts Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)). Two expert annotators with prior experience in health NLP research refine them through manual annotation by reviewing 25 posts per cluster for the role facet (reflecting its central importance), and 5 per cluster for goals, behaviors and experiences. We find high inter-rater reliability (Cohen’s κ = 0.71–0.93) and agreement between human and model-assigned cluster descriptions (accuracy 0.71–0.81) (Table [6](https://arxiv.org/html/2510.16829v1#A1.T6 "Table 6 ‣ Facet-based Inputs ‣ Appendix A Taxonomy Constructions Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). This process validates clustering quality and supports qualitative interpretation of the categories.

| Role | Patient | Caregiver | Practitioner |
| --- | --- | --- | --- |
| % posts | 91.2 | 5.2 | 2.7 |

Table 3: Distribution of posts on r/OpiatesRecovery

![Image 5: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/silhouette-analysis.png)

Figure 5: Evaluation of clusters using Silhouette Analysis.

![Image 6: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/umap.png)

Figure 6: UMAP visualization.

Table 4: Examples of r/OpiatesRecovery posts with facet-based summaries illustrating each role-defining facet, i.e., goal, behavior and experience. Posts are paraphrased for brevity and to protect privacy.

Prompts for taxonomy construction are inspired by Tamkin et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib88)) and adapted for our setting.
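For concreteness, the clustering and model-selection procedure of step (2) can be sketched as below, assuming the facet-based summaries are available as a list of strings. Only the embedding model (all-mpnet-base-v2) and the k = 3 to k = 10 sweep follow the text above; the function name, the cosine silhouette metric, and the remaining defaults are illustrative assumptions rather than our exact implementation.

```python
# Minimal sketch of step (2): embed facet-based summaries, cluster with k-means,
# and select k by silhouette analysis. Names and defaults are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_facet_summaries(summaries: list[str], k_min: int = 3, k_max: int = 10):
    """Return (best silhouette score, best k, cluster labels) for the summaries."""
    encoder = SentenceTransformer("all-mpnet-base-v2")
    embeddings = encoder.encode(summaries, normalize_embeddings=True)

    best = None
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
        score = silhouette_score(embeddings, labels, metric="cosine")
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best

# Usage on the full set of facet summaries derived from the 10,017 posts:
# score, k, labels = cluster_facet_summaries(goal_summaries)
```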
+

### Prompt for Facet-based Summarization

### Prompt for Cluster Labelling

### Facet-based Inputs

Table 5: Human validation of facet-based summaries

Table 6: Human validation of cluster descriptions

Table 7: Full Taxonomy of Roles. We define roles by their goals, behaviors and experiences observed in r/OpiatesRecovery (§[3](https://arxiv.org/html/2510.16829v1#S3 "3 Taxonomy of Information-seeking Roles ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")).

Appendix B Simulating Role-based Questions
------------------------------------------

Table 8: Validating role-agnostic seed questions. GPT-4.1 judgments and their agreement with majority vote.

#### Validating Role-agnostic Information-seeking Questions

We use GPT-4o-mini to rewrite each verbatim question extracted from an r/OpiatesRecovery post into a role-agnostic form by removing any personal details and role-revealing cues. We select GPT-4o-mini for this step as it is efficient and cost-effective, making it well-suited for processing our large dataset (10,017 queries). To validate the extracted and rewritten questions, two expert annotators evaluate 50 sampled questions for: (i) Completeness – whether the rewritten question preserves original intent, and (ii) Role Disclosure – whether the text leaks the poster’s role or personal details. We observe strong agreement (Cohen’s κ = 0.82); completeness holds in 94% of cases, and role disclosure occurs in only 0.06%. As an additional check, we use GPT-4.1 as a judge Zheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib107)) to filter out questions that are incomplete or reveal a role, with judgments closely aligned with human annotations (78% agreement for correctness, 72% for role disclosure; see Table [8](https://arxiv.org/html/2510.16829v1#A2.T8 "Table 8 ‣ Appendix B Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). The final dataset contains 5,107 role-agnostic, information-seeking questions, which we use to generate role-based variants.

#### Simulating Role-based Questions

In r/OpiatesRecovery, patients account for most posts, while caregivers—who also face substantial emotional and informational burdens Mardani et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib55))—remain underrepresented (Table [3](https://arxiv.org/html/2510.16829v1#A1.T3 "Table 3 ‣ Method ‣ Appendix A Taxonomy Constructions Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). To evaluate models across such high-stakes roles, we ground simulation in the taxonomy of information-seeking roles.

Given a seed role-agnostic, information-seeking question $Q_{0}$, we simulate role-based variants by conditioning a language model (Claude-3.7-Sonnet) on a structured set of attributes from the taxonomy:

$$\left[Q_{0};\ R;\ G;\ B;\ E;\ \{B_{i}^{(R)}\}_{i=1}^{n};\ \{E_{j}^{(R)}\}_{j=1}^{n}\right]$$

* $Q_{0}$ is a role-agnostic information-seeking question.
* $R$ is the role, drawn from a predefined set $\mathcal{R}=\{\text{Patient},\text{Caregiver},\text{Practitioner}\}$.
* $G$, $B$, and $E$ are descriptions of the role’s goal, behavior, and experience, respectively.
* $\{B_{i}^{(R)}\}_{i=1}^{n}$ and $\{E_{j}^{(R)}\}_{j=1}^{n}$ are sets of behavior- and experience-based summaries associated with role $R$.

We use Claude-3.7-Sonnet to simulate role-based questions. We also experimented with GPT-4o on a subset of 50 role-agnostic questions, generating three role-based variants for each. A qualitative comparison showed that Claude-3.7-Sonnet followed instructions more reliably and produced questions that were consistently more role-faithful and contextually appropriate. We therefore adopt Claude-3.7-Sonnet for generating the full dataset.
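To make the conditioning input concrete, the sketch below assembles these attributes into a single rewriting prompt. The data structure, the helper name (`build_simulation_prompt`), and the prompt wording are illustrative assumptions; the exact prompts used for extraction and simulation are reproduced below.

```python
# Minimal sketch of assembling [Q0; R; G; B; E; {B_i}; {E_j}] into a generation
# prompt for the simulation model. Wording and helper names are illustrative.
from dataclasses import dataclass, field

@dataclass
class RoleSpec:
    name: str                                                      # R, e.g. "Caregiver"
    goal: str                                                      # G
    behavior: str                                                  # B
    experience: str                                                # E
    behavior_summaries: list[str] = field(default_factory=list)    # {B_i^(R)}
    experience_summaries: list[str] = field(default_factory=list)  # {E_j^(R)}

def build_simulation_prompt(q0: str, role: RoleSpec) -> str:
    """Compose a role-conditioned rewriting prompt from the taxonomy attributes."""
    bullet = lambda items: "\n".join(f"- {x}" for x in items)
    return (
        f"Rewrite the question below as it might be asked by a {role.name}.\n"
        f"Goal: {role.goal}\nBehavior: {role.behavior}\nExperience: {role.experience}\n"
        f"Example behaviors:\n{bullet(role.behavior_summaries)}\n"
        f"Example experiences:\n{bullet(role.experience_summaries)}\n"
        "Embed the role implicitly (do not state it outright), preserve the original "
        "information need, and phrase it as a single question to a chatbot.\n\n"
        f"Question: {q0}"
    )

# The resulting prompt is then sent to the generation model
# (Claude-3.7-Sonnet in our setup) to obtain a role-based variant.
```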
### Prompt for Extracting Role-agnostic Information-seeking Questions

### Prompt for Generating Role-based Questions

### Evaluating Completeness

### Evaluating Role-Disclosure

Table 9: Examples of simulated role-based questions. Role-based questions reflect the goals, behaviors and experiences of patients, caregivers, and practitioners (§[3](https://arxiv.org/html/2510.16829v1#S3 "3 Taxonomy of Information-seeking Roles ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). We start with role-agnostic seed questions extracted from r/OpiatesRecovery posts, and use them along with the taxonomy (Table [1](https://arxiv.org/html/2510.16829v1#S3.T1 "Table 1 ‣ 3 Taxonomy of Information-seeking Roles ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")) to simulate role-based questions (§[4](https://arxiv.org/html/2510.16829v1#S4 "4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")).

Appendix C Evaluating Believability of Simulated Role-Based Questions
---------------------------------------------------------------------

### C.1 Metric Details

| Question Source | Automatic Evaluation | Human Evaluation |
| --- | --- | --- |
| Wildchat | 75 | 10 |
| Reddit | 944 | 31 |
| R | 2832 | 90 |
| RG | 2832 | 90 |
| RGB | 2832 | 90 |
| RGBE | 2832 | 90 |
| CoRUS | 2832 | 90 |
| Total | 15179 | 491 |

Table 10: Number of queries from each source setting (real-user queries and prompting variants) for automatic and human evaluation

Table 11: Examples of queries that satisfy or do not satisfy metrics for assessing believable questions (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1 "4.1 Evaluating Simulated Questions ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"))

#### Human-Likeness

Whether a question is indistinguishable from a human-written one. Based on adversarial and Turing-test-style methods Li et al. ([2017](https://arxiv.org/html/2510.16829v1#bib.bib50)); Mir et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib58)), this captures fluency and naturalness Ji et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib41)); Balog and Zhai ([2024](https://arxiv.org/html/2510.16829v1#bib.bib4)); Owoicho et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib66)); Sekulić et al. ([2022](https://arxiv.org/html/2510.16829v1#bib.bib79)); Zhang et al. ([2022](https://arxiv.org/html/2510.16829v1#bib.bib104)).

#### Context Plausibility

Whether the scenario or personal context embedded in a question is plausible within the discourse of a relevant online community. Grounded in user simulation and synthetic data generation literature Günther and Hagen ([2021](https://arxiv.org/html/2510.16829v1#bib.bib33)); Balog and Zhai ([2024](https://arxiv.org/html/2510.16829v1#bib.bib4)); Park et al.
([2024a](https://arxiv.org/html/2510.16829v1#bib.bib68)), this metric captures whether simulated questions reflect the authentic experiences and narratives shared in online communities.

#### Interaction Plausibility

Whether a question is likely to be part of a human-AI conversation, i.e., whether it is phrased as a query that an individual could plausibly pose to a chatbot. Unlike community-oriented posts, which address a collective audience, questions to a chatbot resemble dyadic interactions between a single user and an AI system. This reflects the realism of simulated questions within the setting in which they are posed Balog and Zhai ([2024](https://arxiv.org/html/2510.16829v1#bib.bib4)).

#### Content Preservation

Measures the content fidelity to the original question. Drawing from style transfer literature Fu et al. ([2018](https://arxiv.org/html/2510.16829v1#bib.bib28)); Mir et al. ([2019](https://arxiv.org/html/2510.16829v1#bib.bib58)); Briakou et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib10)); Park et al. ([2024a](https://arxiv.org/html/2510.16829v1#bib.bib68)), this ensures simulations reframe rather than distort the original question.

#### Role Faithfulness

Whether the question reflects the language, perspective, and lived experience of the target role. This metric is inspired by work in role-playing and persona-consistent generation Peng and Shang ([2024](https://arxiv.org/html/2510.16829v1#bib.bib71)); Ji et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib41)).

![Image 7: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/corus-eval-appx.jpeg)

Figure 7: Evaluation of Simulated Role-based Questions. While there is no significant difference between prompting methods when considering content preservation and role faithfulness, CoRUS produces significantly more human-like and contextually plausible queries than other prompting methods, and maintains high interaction plausibility (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1 "4.1 Evaluating Simulated Questions ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"); Figure [2](https://arxiv.org/html/2510.16829v1#S4.F2 "Figure 2 ‣ 4.2 Results ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). Error bars show 95% CI.

### C.2 Automatic Evaluation

We use GPT-4.1 as the LLM-judge Zheng et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib107)) to evaluate role-based questions, avoiding self-preference bias Wataoka et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib98)) since Claude-3.7-Sonnet was used to generate them.

For each evaluation item (Table [10](https://arxiv.org/html/2510.16829v1#A3.T10 "Table 10 ‣ C.1 Metric Details ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), GPT-4.1 is shown a pair of a role-agnostic question and a corresponding role-based simulation, and asked to rate it on the five metrics described in §[C.1](https://arxiv.org/html/2510.16829v1#A3.SS1 "C.1 Metric Details ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"). All judgments are collected as binary labels (whether the characteristic is present or not). For content preservation, we collapse the original five labels (Completely dissimilar; Not equivalent, but share some details; Roughly equivalent; Mostly equivalent; Completely equivalent) into a binary decision following Briakou et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib10)). We apply this evaluation across all input variants (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1.SSS0.Px1 "Metrics. ‣ 4.1 Evaluating Simulated Questions ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), including simulations from CoRUS, its prompting variants, and real-user queries (Reddit posts and Wildchat examples). Prompts are provided below.
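A minimal sketch of this judging loop is shown below, assuming an OpenAI-compatible client. The rubric stand-ins, the `judge` helper, and the YES/NO parsing are illustrative assumptions; the exact judge prompts for each metric are provided below.

```python
# Minimal sketch of the automatic evaluation loop: the judge model rates each
# (role-agnostic, role-based) pair on one metric and returns a binary label.
# Rubric wording and parsing are illustrative; exact prompts appear below.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRICS = {  # short stand-ins for the full per-metric rubrics
    "human_likeness": "Does the simulated question read as if written by a human?",
    "context_plausibility": "Is the embedded personal context plausible for an online recovery community?",
}

def judge(metric: str, role_agnostic_q: str, role_based_q: str) -> bool:
    prompt = (
        f"{RUBRICS[metric]}\n\n"
        f"Original question: {role_agnostic_q}\n"
        f"Simulated question: {role_based_q}\n"
        "Answer with YES or NO only."
    )
    resp = client.chat.completions.create(
        model="gpt-4.1",
        temperature=0,  # deterministic judging, as in Appendix D.1
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```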
### Prompt for Evaluating Human-Likeness

### Prompt for Evaluating Context Plausibility

### Prompt for Evaluating Interaction Plausibility

### Prompt for Evaluating Role Faithfulness

### Prompt for Evaluating Content Preservation

### C.3 Human Annotation

![Image 8: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/task-guidelines.png)

Figure 8: Task Guidelines for human evaluation of believability of simulated role-based questions.

![Image 9: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/full-ui.png)

Figure 9: User Interface for Human Annotations.

As part of our evaluation setup, we validate GPT-4.1 judgments with 7,365 human annotations on 491 queries (Table [10](https://arxiv.org/html/2510.16829v1#A3.T10 "Table 10 ‣ C.1 Metric Details ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), with three annotators independently performing the same task as the GPT-4.1 judge. This section provides full details of the human evaluation process.

#### Evaluation Dataset

The human evaluation dataset draws from three sources: (i) 31 Reddit posts stratified across roles (7 each for patient, caregiver, and practitioner), using post-level role annotations and the extracted role-agnostic information-seeking questions described in §[4](https://arxiv.org/html/2510.16829v1#S4 "4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"); (ii) 450 simulated questions, obtained by sampling 30 role-agnostic questions and their corresponding 3 role-based simulated questions for each role, across CoRUS and four prompting variants; and (iii) 10 health- or clinical-related queries from the Wildchat dataset Zhao et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib106)), identified using task and domain labels from Mireshghallah et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib59)). While OUD-specific Wildchat queries would have been preferable, none were present, likely due to topic sensitivity and the public nature of the dataset.

#### Annotator Selection

While moderators or community members of r/OpiatesRecovery familiar with AI tools would be the ideal evaluators, the sensitivity of the domain makes this infeasible. Instead, we recruit annotators via Prolific, filtered to (i) U.S. residents (where OUD discussions are most contextually relevant), (ii) active Reddit users, and (iii) English as their primary language.
+

Annotators first review detailed guidelines, including sample r/OpiatesRecovery posts to understand tone and norms, Wildchat examples to contextualize chatbot queries, and an overview of the role taxonomy and evaluation criteria (see Figures [8](https://arxiv.org/html/2510.16829v1#A3.F8 "Figure 8 ‣ C.3 Human Annotation ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation") and [9](https://arxiv.org/html/2510.16829v1#A3.F9 "Figure 9 ‣ C.3 Human Annotation ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). To ensure task comprehension, we include four qualification questions developed and validated in a pilot study by two expert (author) annotators; only participants who answer all correctly can proceed.

#### Task Setup

Annotators are shown pairs of questions—a role-agnostic information-seeking question and its corresponding role-based simulated question—and asked to evaluate them across the five metrics described in §[C.1](https://arxiv.org/html/2510.16829v1#A3.SS1 "C.1 Metric Details ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"). We show task guidelines and user interface in Figures [8](https://arxiv.org/html/2510.16829v1#A3.F8 "Figure 8 ‣ C.3 Human Annotation ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation") and [9](https://arxiv.org/html/2510.16829v1#A3.F9 "Figure 9 ‣ C.3 Human Annotation ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"). Each annotation task consists of 17 question pairs sampled randomly from the full pool (Table [10](https://arxiv.org/html/2510.16829v1#A3.T10 "Table 10 ‣ C.1 Metric Details ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")). A pilot study established an average completion time of 25 minutes, and compensation is set to align with an hourly rate of USD 12. In the full study, annotator completion times ranged from 10–70 minutes.

Each question pair was independently rated by three annotators. To assess reliability, we compute inter-annotator agreement using Gwet’s AC1, which is more robust than Fleiss’ κ under skewed label distributions Ohyama ([2021](https://arxiv.org/html/2510.16829v1#bib.bib64)); Wongpakaran et al. ([2013](https://arxiv.org/html/2510.16829v1#bib.bib100)). This is appropriate for our setting, as many simulated questions are expected to exhibit the target property, leading to imbalanced distributions. Metric-wise agreement scores are in Table [12](https://arxiv.org/html/2510.16829v1#A3.T12 "Table 12 ‣ Subjectivity in Human Annotations ‣ C.3 Human Annotation ‣ Appendix C Evaluating Believability of Simulated Role-Based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation").
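For reference, Gwet’s AC1 for binary labels with multiple raters per item can be computed as in the sketch below, using AC1 = (Pa − Pe)/(1 − Pe) with chance agreement Pe = 2π(1 − π), where π is the mean proportion of positive ratings across items. The function name and the example ratings are illustrative.

```python
# Minimal sketch of Gwet's AC1 for binary labels with multiple raters per item.
# AC1 = (Pa - Pe) / (1 - Pe), where Pe = 2 * pi * (1 - pi) and pi is the mean
# proportion of positive ratings across items.
def gwet_ac1(ratings: list[list[int]]) -> float:
    """ratings[i] holds the 0/1 labels assigned to item i by its raters."""
    agreements, pos_props = [], []
    for item in ratings:
        r, ones = len(item), sum(item)
        if r >= 2:  # pairwise within-item agreement
            agreements.append((ones * (ones - 1) + (r - ones) * (r - ones - 1)) / (r * (r - 1)))
        pos_props.append(ones / r)
    pa = sum(agreements) / len(agreements)
    pi = sum(pos_props) / len(pos_props)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# e.g. three annotators per question pair, as in our setup:
print(gwet_ac1([[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]]))  # ~0.70
```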
#### Subjectivity in Human Annotations

Because judgments of human-likeness are inherently subjective and prior work shows that humans are not always reliable judges of AI-generated text Jakesch et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib40)), we further familiarized annotators with examples of both human- and AI-written text and required free-text rationales for their labels. These steps improved consistency but could not fully eliminate subjectivity in evaluations.

Table 12: Inter-annotator agreement between human annotators (Gwet’s AC1), and agreement between majority vote and GPT-4.1 rater (Accuracy).

Table 13: Number of role-based questions with explicit role mentions (such as “I am a patient …”), which may inflate the role faithfulness judgments.

Table 14: Evaluating believability of CoRUS generated questions (§[4.1](https://arxiv.org/html/2510.16829v1#S4.SS1 "4.1 Evaluating Simulated Questions ‣ 4 Simulating Role-based Questions ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation"))

Appendix D Experimental Details
-------------------------------

### D.1 Model Details

Table [15](https://arxiv.org/html/2510.16829v1#A4.T15 "Table 15 ‣ D.1 Model Details ‣ Appendix D Experimental Details ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation") lists the identifiers of all models used in our methodology and experiments. Models were accessed via their organization’s official APIs, except for Llama and OpenBioLLM-70B, which we accessed via the Nebius API. For LLM judges, we set the temperature to 0 to minimize output randomness. For the evaluated LLMs (§[5.3](https://arxiv.org/html/2510.16829v1#S5.SS3 "5.3 How does the asker’s role-based context shape LLM responses? ‣ 5 Application: Role-based Evaluations ‣ Who’s Asking? Simulating Role-Based Questions for Conversational AI Evaluation")), we used a temperature of 0.6 and max_tokens of 512, following prior work Chandra et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib14)); Agarwal et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib1)).

Table 15: List of models used in our methodology and experiments, and their official identifiers.

### D.2 Response Evaluation

#### Validating Knowledge and Support Classifiers

Prior work has used the classifiers of Choi et al. ([2020](https://arxiv.org/html/2510.16829v1#bib.bib19)) to study peer support Balsamo et al. ([2023](https://arxiv.org/html/2510.16829v1#bib.bib5)), conversation types Zhou et al. ([2021](https://arxiv.org/html/2510.16829v1#bib.bib108)), and framing strategies Mittal et al. ([2025](https://arxiv.org/html/2510.16829v1#bib.bib60)). We use them to measure knowledge and support, and validate these classifiers for our setting to ensure they reliably capture these dimensions in our domain. Two authors independently annotated the 20 highest- and 20 lowest-scoring responses per dimension (80 total) with a binary label (high or low), after reaching a shared understanding of the criteria. We observe high inter-annotator agreement (Cohen’s κ = 0.7) and strong alignment with classifier scores (accuracy = 0.9).

#### Identifying Abstention in Responses

We define abstention as cases where the model refuses to answer a query. To measure abstention rates, we detect such responses using a phrase-matching method with heuristics adapted from prior work Arditi et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib3)); Vasisht et al. ([2024](https://arxiv.org/html/2510.16829v1#bib.bib91)). The method expands contractions (e.g., ‘can’t’ to ‘cannot’), looks for refusal markers (e.g., ‘I am unable’, ‘I cannot support’), and applies a length-based heuristic to avoid false positives when the response later shifts into providing an answer. To validate, we manually checked 50 flagged responses and found the method to be reasonably reliable (F1 score of 0.73).
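A minimal sketch of this detector is given below. The contraction map, the refusal-marker list (seeded with the markers quoted above), and the length threshold are illustrative assumptions rather than the exact heuristics we use.

```python
# Minimal sketch of the abstention detector: expand contractions, match refusal
# markers, and use a length heuristic so responses that go on to answer the
# question are not flagged. Marker list and threshold are illustrative.
import re

CONTRACTIONS = {"can't": "cannot", "won't": "will not", "i'm": "i am"}
REFUSAL_MARKERS = ["i am unable", "i cannot support", "i cannot provide", "i must decline"]

def expand_contractions(text: str) -> str:
    for short, full in CONTRACTIONS.items():
        text = re.sub(rf"\b{re.escape(short)}\b", full, text, flags=re.IGNORECASE)
    return text

def is_abstention(response: str, max_words: int = 80) -> bool:
    text = expand_contractions(response.lower())
    has_marker = any(marker in text for marker in REFUSAL_MARKERS)
    # Length heuristic: a long response containing a marker likely pivots into
    # an actual answer, so it is not counted as an abstention.
    return has_marker and len(text.split()) <= max_words

print(is_abstention("I can't support that request."))  # True
```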
Table 16: Examples of queries which are refused

Appendix E Extended Results
---------------------------

![Image 10: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/knowledge-scores.png)

(a) Knowledge Scores

![Image 11: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/support-scores.png)

(b) Support Scores

![Image 12: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/readability-scores.png)

(c) Readability Scores

Figure 10: Scores of model responses across (a) Knowledge, (b) Support, and (c) Readability when given a query without context (No Role) and with role-based context of Patient, Caregiver and Practitioner.

![Image 13: Refer to caption](https://arxiv.org/html/2510.16829v1/figures/box-plot-all-models.png)

Figure 11: Distribution of Knowledge, Support and Readability scores of model responses across role-agnostic and role-based questions.

### E.1 Comparison between CoRUS and Real-User Queries

Compared to CoRUS-generated queries, Reddit queries are rated as more human-like (93% vs. 100%) and more contextually plausible (94% vs. 98%), but the effect sizes are small to moderate (h = 0.4 and h = 0.2, respectively), indicating that these differences are not practically large. For human-likeness, CoRUS and WildChat perform similarly, with no significant difference, confirming that the generated queries are judged as human-like. For interaction plausibility, CoRUS performs strongly (89%), surpassing Reddit (59%, h = 0.7, large effect) and approaching WildChat (97%, h = 0.3, small effect). Finally, CoRUS achieves high role faithfulness (95%), only slightly less than Reddit (98%, h = 0.2, small effect). Importantly, CoRUS avoids the limitations of Reddit (low interaction plausibility) and WildChat (no OUD- or role-specific framing), making it the only setting that performs consistently well across all metrics.

Table 17: GPT-4o responses to role-based queries, corresponding to the role-agnostic query ‘How do individuals deal with intense full body cravings during recovery from opiates?’

Table 18: Gemini-2.5-Flash responses to role-based queries, corresponding to the role-agnostic query ‘How do individuals deal with intense full body cravings during recovery from opiates?’

Table 19: Llama-3.1-8B responses to role-based queries, corresponding to the role-agnostic query ‘How do individuals deal with intense full body cravings during recovery from opiates?’

Table 20: Llama-3.1-70B responses to role-based queries, corresponding to the role-agnostic query ‘How do individuals deal with intense full body cravings during recovery from opiates?’

Table 21: OpenBioLLM-70B responses to role-based queries, corresponding to the role-agnostic query ‘How do individuals deal with intense full body cravings during recovery from opiates?’