Table 4: Ablation of the top-k action candidates in DialogXpert, showing average turns (AT ↓), success rate (SR ↑) and satisfaction level (SL ↑).

Approach (Top-k)     | CraigslistBargain      | ESConv        | CIMA
                     | AT↓    SR↑     SL↑     | AT↓    SR↑    | AT↓    SR↑
DialogXpert (Top-2)  | 2.61   0.9312  0.3968  | 2.69   0.9698 | 2.25   0.9877
DialogXpert (Top-3)  | 2.51   0.9579  0.4038  | 2.58   0.9785 | 2.13   0.9928
DialogXpert (Top-4)  | 2.39   0.9712  0.4325  | 2.39   0.9853 | 2.04   0.9945
DialogXpert (Top-5)  | 2.49   0.9589  0.3781  | 2.49   0.9819 | 2.11   0.9931

emotional-support dialogues selected randomly. Four human annotators judged each pair of responses on Identification, Comforting, Suggestion and Overall effectiveness; as illustrated in Figure 3, DialogXpert outperforms PPDPP on Identification (60% vs. 35%), Comforting (52% vs. 45%) and Overall (51% vs. 41%), with modest tie rates and lower loss rates.

5.2 Cost and Efficiency Analysis

Unlike baselines such as PPDPP, DPDP, and LDPP, which rely on RoBERTa models with task-specific fine-tuning and offline reinforcement learning, our method removes the need for pre-training by using a frozen LLM to generate candidate actions, significantly reducing annotation and retraining overhead. The same LLM is shared across system, user, and critic roles during self-play, ensuring stable memory usage and training efficiency. While DPDP incurs substantial computational cost, requiring approximately 30 LLM calls per action due to MCTS rollouts, DialogXpert uses only 4 LLM calls per step by leveraging top-k sampling from the LLM prior. This focused decoding strategy, combined with a lightweight DQN for value estimation, enables efficient, low-overhead decision-making without exhaustive simulation. Furthermore, all LLMs and the BERT encoder remain frozen throughout training; only the Q-network is updated.
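The per-step decision just described (one LLM call proposing top-k candidate actions, scored by a lightweight Q-network) can be sketched as follows. This is an illustrative stub, not the released implementation: the function names and the placeholder scoring are ours, and the real system encodes states and actions with a frozen BERT before querying an MLP Q-network.

```python
# Minimal sketch of DialogXpert's per-step action selection, with stubs
# standing in for the frozen LLM prior and the Q-network.

ACTIONS = ["Question", "Reflection of feelings", "Affirmation", "Suggestion"]

def llm_top_k_actions(state, k=4):
    """Stub for the frozen LLM prior: one call returns k candidate actions
    (vs. ~30 LLM calls per action for MCTS rollouts in DPDP)."""
    return ACTIONS[:k]

def q_value(state, action):
    """Stub for the lightweight Q-network over frozen BERT embeddings.
    The real network predicts expected return; this is placeholder scoring."""
    return len(action) * 0.1

def select_action(state, k=4):
    candidates = llm_top_k_actions(state, k)   # single LLM call per step
    return max(candidates, key=lambda a: q_value(state, a))

print(select_action("patient feels anxious about job loss"))
```

Because only the Q-network is trainable, swapping in a different frozen LLM prior changes `llm_top_k_actions` but leaves the selection loop untouched.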
This design promotes stable and efficient learning, where Q-learning enables continual policy refinement using diverse state-action pairs from the replay buffer, allowing strong adaptation with minimal training cost.

6 Conclusion and Future Work

We introduced DialogXpert, a novel framework that combines frozen LLM priors, lightweight value-based RL, and emotion tracking to enable proactive and emotionally intelligent dialogue planning. Across negotiation, emotional support, and tutoring tasks, DialogXpert delivers shorter, more effective conversations and higher success rates than both fine-tuned policy LMs and MCTS-based planners. By narrowing the action space through LLM priors and incorporating emotion signals, our model generalizes well across tasks while producing more empathetic, user-aligned dialogues. Looking ahead, dynamic adjustment of the LLM prior could improve adaptability to user feedback. Multimodal integration (e.g., visual or auditory inputs) may further enrich context and interactivity.

Limitations

Mapping textual feedback to scalar rewards is central to training, but current mappings can be subjective. For instance, in the CIMA dataset, assigning a reward of 0.5 when only 1 out of 5 words is translated may not accurately reflect true task success. A more performance-sensitive reward design would improve critic LLM supervision and better support proactive agent behavior. Emotion modeling presents another challenge. Unlike discrete action labels, emotions
span an open-ended space. While useful for nuanced responses, this places additional load on the LLM. Using a lightweight emotion classifier or a predefined set of emotion labels could simplify learning and improve consistency. The CIMA dataset, focused on English–Italian translation, may not be ideal for tutoring tasks, as both languages are high-resource and easily handled by pretrained LLMs. A more suitable alternative would be a low-resource language like Javanese (Winata et al., 2022), which would better evaluate the agent's proactive capabilities. Additionally, the critic LLM can behave inconsistently, sometimes terminating too early (e.g., in ESConv) or failing to end dialogues when goals are met (e.g., in CIMA). While human evaluation helps, it is expensive. More robust critic calibration could address this. Finally, unlike prior work where caching is feasible, our dynamic state–action space driven by exploration prevents caching and introduces computational overhead. Efficient solutions here remain an open challenge.

Ethics Statement

All experiments were conducted on publicly available, fully de-identified dialogue datasets, and no personal or sensitive user data was collected or processed. We release our code and prompts for reproducibility and apply standard safety filters to mitigate bias or harmful content in generated responses.

References

Thomas W. Anthony, Zheng Tian, and David Barber. 2017. Thinking fast and slow with deep learning and tree search. In Neural Information Processing Systems.

Nabiha Asghar, Ivan Kobyzev, Jesse Hoey, Pascal Poupart, and Muhammad Bilal Sheikh. 2020. Generating emotionally aligned responses in dialogues using affect control theory. arXiv preprint arXiv:2003.03645.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.

Yoshua Bengio. 2017. The consciousness prior.
ArXiv, abs/1709.08568.

Jiancu Chen, Siyuan Yang, Jiang Xiong, and Yiping Xiong. 2023. An effective emotion tendency perception model in empathic dialogue. PLoS ONE, 18(3):e0282926.

Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng. 2022. Improving multi-turn emotional support dialogue generation with lookahead strategy planning. CoRR, abs/2210.04242.

Yang Deng, Wenqiang Lei, Lizi Liao, and Tat-Seng Chua. 2023a. Prompting and evaluating large language models for proactive dialogues: Clarification, target-guided, and non-collaboration. arXiv preprint arXiv:2305.13626.

Yang Deng, Wenqiang Lei, Wenxuan Zhang, Wai Lam, and Tat-Seng Chua. 2022. PACIFIC: Towards proactive conversational question answering over tabular and textual data in finance. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, pages 6970–6984.

Yang Deng, Lizi Liao, Wenqiang Lei, Grace Hui Yang, Wai Lam, and Tat-Seng Chua. 2025. Proactive conversational AI: A comprehensive survey of advancements and opportunities. ACM Transactions on Information Systems, 43(3):1–45.

Yang Deng, Wenxuan Zhang, Wai Lam, See-Kiong Ng, and Tat-Seng Chua. 2024. Plug-and-play policy planner for large language model powered dialogue agents. In ICLR.

Yang Deng, Wenxuan Zhang, Yifei Yuan, and Wai Lam. 2023b. Knowledge-enhanced mixed-initiative dialogue system for emotional support conversations. In Proceedings of the 61st Annual
Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, pages 4079–4095.

Jacob Devlin. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Yao Fu, Hao Peng, Tushar Khot, and Mirella Lapata. 2023. Improving language model negotiation with self-play and in-context learning from AI feedback. arXiv preprint arXiv:2305.10142.

Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten De Rijke, and Tat-Seng Chua. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open, 2:100–126.

Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120.

Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992.

Peter E. Hart, Nils J. Nilsson, and Bertram Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107.

He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. arXiv preprint arXiv:1808.09637.

Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Ming Liu, Zerui Chen, and Bing Qin. 2024. Planning like human: A dual-process framework for dialogue planning. arXiv preprint arXiv:2406.05374.

Tao He, Lizi Liao, Yixin Cao, Yuanxing Liu, Yiheng Sun, Zerui Chen, Ming Liu, and Bing Qin. 2025a. Simulation-free hierarchical latent policy planning for proactive dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 24032–24040.

Tao He, Lizi Liao, Ming Liu, and Bing Qin. 2025b. Simulating before planning: Constructing intrinsic user world model for user-tailored dialogue policy planning.
arXiv preprint arXiv:2504.13643.

Xingwei He, Zhenghao Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen, et al. 2023. AnnoLLM: Making large language models to be better crowdsourced annotators. arXiv preprint arXiv:2303.16854.

Zhiyuan Hu, Yue Feng, Yang Deng, Zekun Li, See-Kiong Ng, Anh Tuan Luu, and Bryan Hooi. 2023. Enhancing large language model induced task-oriented dialogue systems through look-forward motivated goals. arXiv preprint arXiv:2309.08949.

Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan W. Black, and Yulia Tsvetkov. 2021. DialoGraph: Incorporating interpretable strategy-graph networks into negotiation dialogues. In 9th International Conference on Learning Representations, ICLR 2021.

Daniel Kahneman. 2003. Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93:1449–1475.

Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, and Jinyoung Yeo. 2024. Can large language models be good emotional supporter? Mitigating preference bias on emotional support conversation. arXiv preprint arXiv:2402.13211.

Tomasz Korbak, Ethan Perez, and Christopher L. Buckley. 2022. RL with KL penalties is better viewed as Bayesian inference. arXiv preprint arXiv:2205.11275.

Wai-Chung Kwan, Hong-Ru Wang, Hui-Min Wang, and Kam-Fai Wong. 2023. A survey on recent advances and challenges in reinforcement learning methods for task-oriented dialogue policy learning. Machine Intelligence Research, 20(3):318–334.

Esther Levin, Roberto Pieraccini, and
Wieland Eckert. 1997. Learning dialogue strategies within the Markov decision process framework. In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 72–79. IEEE.

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, pages 3469–3483.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Fan Luo, Tian Xu, Hang Lai, Xiong-Hui Chen, Weinan Zhang, and Yang Yu. 2022. A survey on model-based reinforcement learning. ArXiv, abs/2206.09328.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.

OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt. Accessed: 2025-05-19.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.

David Silver, Aja Huang, Christopher J.
Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–503.

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, L. Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. 2017. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. ArXiv, abs/1712.01815.

Inhwa Song, Sachin R. Pendse, Neha Kumar, and Munmun De Choudhury. 2024. The typing cure: Experiences with large language model chatbots for mental health support. arXiv preprint arXiv:2401.14362.

Katherine Stasaski, Kimberly Kao, and Marti A. Hearst. 2020. CIMA: A large open access dialogue dataset for tutoring. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, BEA@ACL 2020, pages 52–64.

Gerald Tesauro et al. 1995. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58–68.

Michel Tokic. 2010. Adaptive ε-greedy exploration in reinforcement learning based on value differences. In Annual Conference on Artificial Intelligence, pages 203–210. Springer.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez,
Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. ArXiv, abs/2302.13971.

Dirk Väth, Lindsey Vanderlyn, and Ngoc Thang Vu. 2023. Conversational tree search: A new hybrid dialog task. arXiv preprint arXiv:2303.10227.

Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291.

Lingzhi Wang, Mrinmaya Sachan, Xingshan Zeng, and Kam-Fai Wong. 2023b. Strategize before teaching: A conversational tutoring system with pedagogy self-distillation. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2223–2229.

Sihan Wang, Kaijie Zhou, Kunfeng Lai, and Jianping Shen. 2020. Task-completion dialogue policy learning via Monte Carlo tree search with dueling network. In Conference on Empirical Methods in Natural Language Processing.

Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, pages 5635–5649.

Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning, 8:279–292.

Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Rahmad Mahendra, Fajri Koto, Ade Romadhony, Kemal Kurniawan, David Moeljadi, Radityo Eko Prasojo, Pascale Fung, et al. 2022. NusaX: Multilingual parallel sentiment dataset for 10 Indonesian local languages. arXiv preprint arXiv:2205.15960.

Canwen Xu, Yichong Xu, Shuo Wang, Yang Liu, Chenguang Zhu, and Julian McAuley. 2023. Small models are valuable plug-ins for large language models. ArXiv, abs/2305.08848.

Xue Yan, Yan Song, Xidong Feng, Mengyue Yang, Haifeng Zhang, Haitham Bou Ammar, and Jun Wang. 2024.
Efficient reinforcement learning with large language model priors. arXiv preprint arXiv:2410.07927.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822.

Xiao Yu, Maximillian Chen, and Zhou Yu. 2023. Prompt-based Monte-Carlo tree search for goal-oriented dialogue policy planning. In Conference on Empirical Methods in Natural Language Processing.

Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, et al. 2023a. ProAgent: Building proactive cooperative AI with large language models. arXiv preprint arXiv:2308.11339.

Qiang Zhang, Jason Naradowsky, and Yusuke Miyao. 2023b. Ask an expert: Leveraging language models to improve strategic reasoning in goal-oriented dialogue models. arXiv preprint arXiv:2305.17878.

Tong Zhang, Chen Huang, Yang Deng, Hongru Liang, Jia Liu, Zujie Wen, Wenqiang Lei, and Tat-Seng Chua. 2024. Strength lies in differences! Improving strategy planning for non-collaborative dialogues via diversified user simulation. arXiv preprint arXiv:2403.06769.

Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.

Zheng Zhang, Lizi Liao, Xiaoyan Zhu, Tat-Seng Chua, Zitao Liu, Yi-Feng Huang, and Minlie Huang.
2020. Learning goal-oriented dialogue policy with opposite agent awareness. ArXiv, abs/2004.09731.

Stephen Zhao, Rob Brekelmans, Alireza Makhzani, and Roger Grosse. 2024. Probabilistic inference in language models via twisted sequential Monte Carlo. arXiv preprint arXiv:2404.17546.

Weixiang Zhao, Yanyan Zhao, Xin Lu, Shilong Wang, Yanpeng Tong, and Bing Qin. 2023. Is ChatGPT equipped with emotional dialogue capabilities? ArXiv, abs/2304.09582.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023a. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.

Zhonghua Zheng, Lizi Liao, Yang Deng, and Liqiang Nie. 2023b. Building emotional support chatbots in the era of LLMs. arXiv preprint arXiv:2308.11584.

Yiheng Zhou, He He, Alan W. Black, and Yulia Tsvetkov. 2019. A dynamic strategy coach for effective negotiation. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, SIGdial 2019, pages 367–378.

Yiheng Zhou, Yulia Tsvetkov, Alan W. Black, and Zhou Yu. 2020. Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history. In 8th International Conference on Learning Representations, ICLR 2020.

A Detailed Construction of the Free-Form + Projection Prior

At each dialogue turn t, we first assemble the full model state s_t = (c_t, h_t, E_t), where c_t denotes the case information, h_t the conversation history up to the current user utterance, and E_t the sequence of emotions produced by the Emotion Tracker. We then prompt the Policy Planner LLM with the serialized state and the complete action set A = {a_1, ..., a_n} as follows:

Case: <c_t>; History: <h_t>; Emotions: <E_t>; Actions: [a_1, a_2, ..., a_n]; Next action:

By explicitly listing all candidate actions, we ensure the LLM conditions its generation on the full action inventory.
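The serialization above, together with the projection and beam-based prior estimate described in the remainder of this appendix, can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the function names are ours, the keyword matching stands in for the paper's regex/synonym lookup tables, and real log-probabilities would come from the LLM's beam search rather than hand-set values.

```python
import math
import re

def serialize_state(case, history, emotions, actions):
    """Serialize the state s_t = (c_t, h_t, E_t) and the action set A."""
    return (f"Case: {case}; History: {history}; Emotions: {emotions}; "
            f"Actions: [{', '.join(actions)}]; Next action:")

def project(continuation, actions):
    """Rule-based projection P: keyword match against the action set,
    with a fallback 'no-op' action for unmatched continuations."""
    for a in actions:
        if re.search(re.escape(a.lower()), continuation.lower()):
            return a
    return "no-op"

def estimate_prior(scored_continuations, actions):
    """Estimate p_proj(a | s_t) from top-K (continuation, log-prob) pairs
    by pooling exp(log-prob) mass per projected action and normalizing."""
    weights = {}
    for text, logp in scored_continuations:
        a = project(text, actions)
        weights[a] = weights.get(a, 0.0) + math.exp(logp)
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

# Toy beam of K=3 continuations with hand-set log-probs (illustrative).
actions = ["Question", "Reflection of feelings"]
beams = [("ask a question", -0.5),
         ("reflection of feelings here", -1.0),
         ("pose another question", -1.2)]
prior = estimate_prior(beams, actions)
```

Note that two continuations projecting to the same action pool their probability mass, which is exactly the marginalization over o with P(o) = a.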
The model then produces a free-form continuation o ~ p_LLM(o | s_t, A), which may be any natural-language description or shorthand. A deterministic, rule-based projection function P subsequently parses o and selects the corresponding valid action a_{t+1} = P(o) ∈ A. Although we never enumerate all actions internally during decoding, this two-step procedure implicitly defines a normalized prior over A:

p_proj(a | s_t) = \sum_{o : P(o) = a} p_LLM(o | s_t, A),

which by construction sums to one over the action set. In practice, computing this marginal exactly is intractable, so we approximate it via beam search: we extract the top-K continuations {(o_i, ℓ_i)}_{i=1}^{K}, where ℓ_i = log p_LLM(o_i | s_t, A); map each o_i to a_i = P(o_i); and estimate

\hat{p}_proj(a | s_t) = \frac{\sum_{i : a_i = a} \exp(ℓ_i)}{\sum_{j=1}^{K} \exp(ℓ_j)}.

Choosing an appropriate beam width K balances fidelity to the true distribution against computational cost. Projection rules are implemented via regular expressions or keyword lookup tables (including synonyms), and a fallback "no-op" action handles any unmatched continuations. Through this design, we obtain a principled, tractable, and normalized LLM-based prior over all actions without explicit enumeration during generation. An example of the full process flow from prompting to mapping is given below:

LLM Query for Qwen-2.5 14B

You are a specialist in policy-planning for emotional support conversations. The following is a conversation between a therapist and a patient. The patient's emotion states throughout the conversation are also
provided. Your task is to decide the most therapeutically helpful next action the therapist should take based on the patient's emotion history and the conversation flow. The therapist's goal is to help the patient feel emotionally understood, supported, and to make progress toward emotional resolution.

Conversation History:
Therapist: It sounds like you're feeling a lot of conflicting emotions right now. Could you tell me more about how this discovery has affected your relationship with both your boyfriend and your best friend?
Patient: I feel like my entire world has been turned upside down, and I can't seem to shake this overwhelming sense of betrayal and disgust.
Therapist: It's understandable that you're feeling disoriented given the recent developments.
Patient: I feel like I'm standing on shaky ground, unsure of what to believe or who to trust anymore.

Emotion History: disgust -> betrayed -> disoriented.

Options: (1) Question (2) Self-disclosure (3) Affirmation and Reassurance (4) Providing Suggestions (5) Others (6) Reflection of feelings (7) Information (8) Restatement or Paraphrasing

Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

LLM Output to Top-4 Action:

LLM Output: 6,8,3,1
6: Reflection of feelings
8: Restatement or Paraphrasing
3: Affirmation and Reassurance
1: Question

We map the LLM output to the pre-defined options that were initially given.

B Human Evaluation Details

To assess the quality of our model's generated responses, we conducted a controlled human evaluation with four expert annotators drawn from Natural Language Processing and Computer Science backgrounds. Each annotator was presented with 40 dialogue contexts in total, 20 sampled at random from the ESConv corpus and 20 from the CIMA corpus, and, for each context, two candidate responses (labeled A and B). For ESConv items, annotators compared A vs.
B along four dimensions (Identification, Comforting, Suggestion, and Overall); for CIMA items, they compared along three dimensions (Hint, Identification, and Overall), following the boxed instructions provided below. All metric selections were mandatory and automatically saved, allowing annotators to pause and resume without loss of progress. We then aggregated each item–metric preference by simple majority voting across the four annotators. This procedure ensures that our evaluation reflects informed judgments on both emotional-support and tutoring dialogue quality.

B.1 Instructions

ESConv Instructions

You will see counselling dialogues between a patient and a therapist. For each item, you must compare Response A and Response B on the following four metrics:
1. Identification: Which response best acknowledges and accurately reflects the patient's feelings?
2. Comforting: Which response provides greater emotional support and reassurance?
3. Suggestion: Which response offers more helpful and appropriate guidance?
4. Overall: Which response do you find more effective overall?
All four selections are required before moving on. Your responses are auto-saved after each item. If you exit, simply log back in with the same User ID to resume where you left off.

CIMA Instructions

You will see translation-tutoring dialogues. For each item, compare Response A and Response B on these three metrics:
1. Hint: Which assistant gives more helpful
hints for correct translation?
2. Identification: Which assistant better spots the student's translation errors?
3. Overall: Which assistant teaches more effectively?
All three selections are required before moving on. Responses are auto-saved; log back in with the same User ID to resume.

B.2 Results: CIMA

On the CIMA tutoring task, we asked four annotators to compare DialogXpert and PPDPP (both based on Vicuna-13B) over 20 student–tutor exchanges, judging each pair on Hint quality, Identification, and Overall effectiveness. As shown in Figure 4, DialogXpert's hint suggestions were preferred 49% of the time (38% for PPDPP, 13% ties), demonstrating a clear advantage in generating helpful scaffolding cues. For Identification, i.e., acknowledging the student's needs, the two systems were essentially on par, with 42% wins for DialogXpert versus 43% losses and 16% ties. Finally, in Overall effectiveness, DialogXpert was favored in 40% of cases compared to 38% for PPDPP (22% ties), confirming that our model matches or slightly outperforms the baseline across broad tutoring criteria.

Figure 4: Win/tie/loss percentages for DialogXpert vs. PPDPP on the CIMA tutoring dataset across Hint, Identification and Overall metrics.

C Implementation Details

Our approach diverges from traditional methods such as DPDP, PPDPP, and LDPP, which rely on supervised fine-tuning and offline reinforcement learning pipelines. Instead, we adopt a fully online reinforcement learning framework where the Q-network is trained directly using guidance from frozen Large Language Model (LLM) priors.

System Setup: All experiments are conducted on a dedicated compute server equipped with four NVIDIA A6000 GPUs (48 GB VRAM each).
The training environment is built using PyTorch, with Hugging Face Transformers for LLM inference and BERT encoding, and customized reinforcement learning components implemented with support from OpenAI Gym-style interfaces.

Episode Sampling and Initialization: Training episodes are generated by randomly sampling initial dialogue contexts from the respective datasets, following the scenario sampling protocol introduced in PPDPP. Each episode simulates an entire conversation between user and system agents using self-play. The dialogue is initialized with context information (e.g., background, task type) provided by the dataset, and the conversation proceeds for a maximum of 8 dialogue turns.

State Representation: At each turn t, the dialogue state s_t is constructed using (i) the full conversation history up to turn t, (ii) a rolling emotional state vector from the Emotion Tracker (when enabled), and (iii) metadata such as the task type or user goal. Each candidate action a_t is a system utterance proposed by the frozen LLM prior using top-k decoding.

LLM Prior Configuration: We use a frozen LLM (Qwen 2.5 14B by default) to generate a top-k = 4 set of candidate actions per turn. Decoding is performed using temperature sampling with T = 1.0 to retain output diversity. To maintain decoding efficiency, LLM responses are truncated to a maximum of 25 tokens when generating candidate actions and up to 100 tokens during full self-play interactions.

Self-play Interaction: In every
sample, two LLMs are prompted as the user and assistant to mimic dynamic user-assistant interaction. Both the roles and instructions are delivered to their respective LLMs (more in Appendix F). During the assistant's turn, the policy LLM predicts the top-k recommended actions and the Q-network selects the best action. The assistant LLM then generates the appropriate response, followed by the user LLM's response. Following (Deng et al., 2024), this process continues until a terminal state is reached, which corresponds to:
• On-going: the conversation continues.
• Completed: the goal of the conversation is achieved.
• Failed: the maximum number of turns is reached without the goal being completed.

Action Evaluation via Q-network: The Q-network is a lightweight multilayer perceptron (MLP) trained to predict the expected return for each candidate action given the current state. Input features to the Q-network consist of BERT-based embeddings of the dialogue state and candidate actions. We use fixed BERT (base uncased) weights for both state and action encoding to reduce memory overhead and prevent overfitting. The Q-network is trained via deep Q-learning, using temporal-difference (TD) backups and a target network for stability.

Training Procedure: We train the Q-network for 3 epochs over 1000 dialogue episodes, with a batch size of 32. The learning rate is fixed at 1×10^-6 to ensure stable gradient updates and avoid divergence. During training, we maintain a replay buffer of recent experiences (state, action, reward, next state), from which we sample mini-batches to perform updates using the TD error. The discount factor γ = 0.999 is used to prioritize long-term rewards over short-term gains.

Reward and Exploration: Reward signals are generated using a frozen critic LLM that evaluates each dialogue turn and maps feedback to scalar values as described in Section 3.
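Putting the training pieces together, a single TD update step over a replay-buffer mini-batch can be sketched as follows. This is a schematic only: plain Python dictionaries stand in for the paper's PyTorch MLP and BERT embeddings, the learning rate is exaggerated for readability (the paper uses 1e-6), and critic-LLM rewards are assumed to already be parsed to scalars.

```python
# Schematic deep Q-learning update for the DialogXpert Q-network.
# q / q_target are toy lookup tables standing in for the online MLP
# and its periodically-synced target network.

GAMMA = 0.999   # discount factor from the paper
LR = 0.1        # toy learning rate for illustration (paper: 1e-6)

q = {}          # online Q "network"
q_target = {}   # target network, synced from q at intervals (not shown)

def td_update(batch, candidate_actions):
    """One TD backup per (state, action, reward, next_state, done) tuple."""
    for state, action, reward, next_state, done in batch:
        if done:
            target = reward
        else:
            # bootstrap from the target network over candidate actions
            target = reward + GAMMA * max(
                q_target.get((next_state, a), 0.0) for a in candidate_actions)
        key = (state, action)
        td_error = target - q.get(key, 0.0)
        q[key] = q.get(key, 0.0) + LR * td_error

# Toy replay-buffer mini-batch with critic-derived scalar rewards.
replay_buffer = [("s0", "Question", 0.5, "s1", False),
                 ("s1", "Suggestion", 1.0, None, True)]
td_update(replay_buffer, ["Question", "Suggestion"])
```

Because only this small network is updated while all LLMs stay frozen, each training step is dominated by LLM inference rather than backpropagation.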
To balance exploration and exploitation, we apply an ϵ-greedy action selection policy with scheduled decay from ϵ = 1.0 to ϵ = 0.1 across training.

Efficiency Considerations: To reduce latency and computational load, all LLMs (user, system, critic) are frozen and shared across roles. Only the Q-network is updated during training. This design eliminates the need for repeated fine-tuning and enables scalable training across diverse dialogue tasks.

Table 5: Breakdown of the five datasets utilized. C refers to Collaborative while NC refers to Non-collaborative.

Name                          | Environment | System LLM | User LLM
ESConv (Liu et al., 2021)     | C           | Therapist  | Patient
CIMA (Stasaski et al., 2020)  | C           | Teacher    | Student
CB (He et al., 2018)          | NC          | Buyer      | Seller
ExTES (Zheng et al., 2023b)   | C           | Therapist  | Patient
P4G (Wang et al., 2019)       | NC          | Persuader  | Persuadee

D Dataset Breakdown

Table 5 gives the qualitative breakdown of the datasets utilized. The goal of each environment is:
• ESConv: Emotional support and therapy. The goal, as a therapist, is to help the patient resolve their emotional issues.
• CIMA: Tutoring for English–Italian translation. The goal of the teacher is to effectively guide the student in translating an English sentence into Italian without giving out the answer.
• CB: Negotiating for price haggle. Role-playing as the buyer in the conversation, the goal is to buy a given product as close as possible to the buyer's target price in order to maximize profit.
• ExTES: Emotional support and therapy. Similar to ESConv but more diverse and larger in sample size. The goal, as a therapist, is to help the patient resolve their emotional issues.
• P4G: Persuasion for donation. The goal, as a role player, is to persuade a persuadee to donate to a charity called 'Save the Children'.

E Reward Value Mapping

To evaluate dialogue quality and progression, we employ a critic LLM (He et al., 2023; Gilardi et al., 2023) that generates natural language feedback at each turn. This textual evaluation is parsed and mapped into scalar rewards to supervise policy learning. Our reward design is consistent with prior works such as PPDPP, DPDP, LDPP, and UDP, ensuring comparability across benchmarks. Each dataset uses a task-specific reward mapping scheme:
• ESConv: Emotion trajectories are scored as follows: worse → -1.0, same → -0.5, better → 0.5, and solved → 1.0.
• CIMA: Instructional correctness determines the reward: incorrect → -1.0, did not (complete) → -0.5, partially correct → 0.5, and wholly correct → 1.0.
• CraigslistBargain (CB): If a deal is reached, we compute the sale-to-list price ratio as the reward. If no deal is made, the reward is set to 0.
• P4G: Persuasion success is rated as: refused → -1.0, neutral → -0.5, positive inclination → 0.1, and agreed to donate → 1.0.
• ExTES: Similar to ESConv, emotional state transitions are used: worse → -1.0, same → 0.5, and solved → 1.0. The better category is omitted in this dataset.
These mappings enable consistent supervision across diverse tasks while adapting to domain-specific success criteria.

F Prompting Details

F.1 Policy Mapper Simulation

As we are not using fine-tuned RoBERTa, we need a prompt to decide on the top-k actions to be taken.
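A minimal sketch of this policy-mapper prompting is given below: compose the prompt from the goal, history, and emotion trace, then map the model's integer reply (e.g. "6,8,3,1", as in the Appendix A example) back to the pre-defined option list. The helper names are ours, not from the released code, and the option list is the ESConv strategy set.

```python
import re

# ESConv strategy options, numbered (1)-(8) as in the policy prompt.
OPTIONS = ["Question", "Self-disclosure", "Affirmation and Reassurance",
           "Providing Suggestions", "Others", "Reflection of feelings",
           "Information", "Restatement or Paraphrasing"]

def build_prompt(goal, history, emotions, options, k=4):
    """Assemble the policy-mapper prompt: goal, history, emotions, options."""
    numbered = " ".join(f"({i}) {o}" for i, o in enumerate(options, 1))
    return (f"{goal}\nConversation History:\n{history}\n"
            f"Emotion History: {' -> '.join(emotions)}.\n"
            f"Options: {numbered}\n"
            f"Choose the TOP {k} most suitable actions from the given "
            f"options list. Reply ONLY in the given format: 1,2,4,5")

def parse_reply(reply, options):
    """Map a '6,8,3,1'-style integer reply back to option names,
    skipping any out-of-range integers."""
    ids = [int(x) for x in re.findall(r"\d+", reply)]
    return [options[i - 1] for i in ids if 1 <= i <= len(options)]

print(parse_reply("6,8,3,1", OPTIONS))
# ['Reflection of feelings', 'Restatement or Paraphrasing',
#  'Affirmation and Reassurance', 'Question']
```

The regex-based parse is deliberately forgiving, since the LLM's reply may deviate slightly from the requested comma-separated format.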
The prompt for the policy mapper is based on the goal of the LLM and is accompanied by both the conversation history and the emotions of the user. Lastly, the list of actions to choose from is given based on integer selection. The prompts are given in the subsequent text boxes, denoted by the title "Policy LLM for {dataset}".

F.2 Assistant Simulation

We first describe the role-playing prompts used by the dialogue systems to generate assistant responses. These prompts include dialogue strategy prompts, exemplified by [action], to direct the subsequent action within the dialogue. The prompts and breakdown are denoted in the text boxes with the title "System LLM for {dataset}".

F.3 User Simulation

Next, we describe the role-playing prompt designed to direct LLMs in simulating users; the exclusion of dialogue strategy prompts ensures that simulated users respond solely to the dialogue history, abstaining from undertaking specific actions. The prompts and breakdown are denoted in the text boxes with the title "User LLM for {dataset}".

F.4 Reward Prompting

For the distinct conversational objectives, the prompts devised for the reward model are tailored to evaluate the extent of goal
fulfillment. The prompts for the critic LLM are in the text boxes with the title "Critic LLM for {dataset}".

F.5 Strategy Prompting

Here, we present the mapping of dialogue strategies to their corresponding natural language prompts, utilized as [action] to direct the actions undertaken by the dialogue system. The full breakdown of the mapping is shown in Tables 6, 7, 8, 9, and 10 for the ESConv, CIMA, CB, ExTES, and P4G datasets respectively.

G Example Conversations

We present sample conversations generated by various dialogue systems interacting with the same user simulator under the same case in ESConv. We use the same case applied in the example demonstration of PPDPP; therefore, the examples for all baselines are from PPDPP. Finally, we provide conversations simulated using DPDP (policy LM) as the policy planner. We show an example of an emotional support conversation where the patient encounters a job crisis issue and experiences fear, necessitating resolution by the dialogue system. To be specific, the sample has the following information:

•Emotion Type: Fear
•Problem Type: Job Crisis
•Situation: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.

Policy LLM for ExTES

Instruction: You are a specialist in policy-planning for emotional support conversations. The following is a conversation between a therapist and a patient. The patient's emotion states throughout the conversation are also provided. Your task is to decide the most therapeutically helpful next action the therapist should take based on the patient's emotion history and the conversation flow. The therapist's goal is to help the patient feel emotionally understood, supported, and to make progress toward emotional resolution.
Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions]
Options: [options]
Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

Policy LLM for CIMA

Instruction: You are a specialist in policy-planning for tutoring interactions between a teacher and a student. The following is a conversation between a teacher and a student. The student's emotional states throughout the conversation are also provided. Your task is to decide what the teacher should do next based on the student's progress, emotion history and flow of the conversation. The goal is to effectively guide the student towards correctly translating the target English sentence into Italian in a timely and effective manner.

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions]
Options: [options]
Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

Policy LLM for CB

Instruction: You are a specialist in policy planning for negotiation between a buyer and a seller. The following is a conversation between a buyer and a seller. The seller's emotion states throughout the conversation are also provided. Your task is to decide what action the buyer should take next based on the seller's emotion history, the negotiation flow, and the conversation
flow. The goal is to maximize the buyer's benefit.

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions]
Options: [options]
Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

Policy LLM for ESConv

Instruction: You are a specialist in policy-planning for emotional support conversations. The following is a conversation between a therapist and a patient. The patient's emotion states throughout the conversation are also provided. Your task is to decide the most therapeutically helpful next action the therapist should take based on the patient's emotion history and the conversation flow. The therapist's goal is to help the patient feel emotionally understood, supported, and to make progress toward emotional resolution.

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions]
Options: [options]
Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

Policy LLM for P4G

Instruction: You are a specialist in policy-planning for persuasive conversations. Your job is to select the best next steps the persuader should take to guide the persuadee toward making a donation to 'Save the Children'. Use the persuadee's emotional history and the conversation context to make your decision. Focus on choosing actions that are persuasive, emotionally appropriate, and therapeutic.

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions]
Options: [options]
Choose the TOP 4 most suitable actions from the given options list. Reply ONLY in the given format: 1,2,4,5

User LLM for ExTES

Instruction: You are role playing as a patient in a counseling conversation with a therapist. You are seeking help from the therapist, because you are dealing with emotional issues related to [problem_case].

Directive:
Conversation so far: [full_conversation]
The therapist just said: [therapist_last_utterance].
Express how you feel in a natural, emotional way. Please reply with only one short and succinct sentence.

User LLM for CIMA

Instruction: You are role-playing as a student who is learning Italian in a tutoring session. You do not know how to translate [english_sentence] into Italian. Your goal is to learn through interaction with the teacher. Respond naturally as a student would.

Directive:
Conversation so far: [full_conversation]
The teacher just said: [teacher_last_utterance].
Please reply as a student with only one short and natural sentence. If you're confused, it's okay to ask for clarification.

User LLM for CB

Instruction: You are role playing as a persuasive seller in a price bargaining game. You are trying to sell the [product] at your desired price of [seller_desired_price].
Product Description: [product_description]

Directive:
Conversation so far: [full_conversation]
The buyer just said: [buyer_last_utterance].
Respond as the seller in ONE short, persuasive sentence.

User LLM for ESConv

Instruction: You are role playing as a patient in a counseling conversation with a therapist. You are seeking help from the therapist, because you are dealing with emotional issues related to [emotion] regarding [case_problem].

Directive:
Conversation so far: [full_conversation]
The therapist just said:
[therapist_last_utterance].
Express how you feel in a natural, emotional way. Please reply with only one short and succinct sentence.

User LLM for P4G

Instruction: You are role playing as a persuadee in a persuasive conversation. A persuader is trying to convince you to donate to a charity called 'Save the Children'.

Directive:
Conversation so far: [full_conversation]
The persuader just said: [persuader_last_utterance].
Respond as the persuadee in ONE short sentence.

System LLM for ExTES

Instruction: You are role playing as a therapist in a counseling conversation with a patient. Your goal is to help the patient resolve their emotional issues and assist them in understanding and working through their challenges.

Directive:
Conversation so far: [full_conversation]
The patient just said: [patient_last_utterance].
Please infer the patient's emotional state in one word (Example: Emotion: ...) followed by your response, which should be only one short and succinct sentence (Response: ...).

System LLM for CIMA

Instruction: You are role-playing as a teacher in a tutoring conversation. Your task is to guide the student to translate the English sentence [english_sentence] into Italian. Please do not tell the student the answer or ask the student about other exercises. [action_prompt]

Directive:
Conversation so far: [full_conversation]
The student just said: [student_last_utterance].
Based on the student's message, infer their emotional state (e.g.: Emotion: ...). Then give your reply as the teacher in one short and helpful sentence (e.g.: Response: ...). [action_prompt]

System LLM for CB

Instruction: You are role playing as a skilled buyer in a price bargaining game. You are trying to buy the [product] at your target price of [buyer_target_price]. Product description: [product_description]. Your goal is to negotiate effectively and get the lowest price without losing the deal.
Directive:
Conversation so far: [full_conversation]
The seller just said: [seller_last_utterance].
First, infer the seller's emotional state in one word (Example: Emotion: ...). Then, respond as the buyer using ONE short and persuasive sentence (Response: ...). [action_prompt]

System LLM for ESConv

Instruction: You are role playing as a therapist in a counseling conversation with a patient. Your goal is to help the patient resolve their emotional issues and assist them in understanding and working through their challenges.

Directive:
Conversation so far: [full_conversation]
The patient just said: [patient_last_utterance].
Please infer the patient's emotional state in one word (Example: Emotion: ...) followed by your response, which should be only one short and succinct sentence (Response: ...). [action_prompt]

System LLM for P4G

Instruction: You are role playing as a persuader. Your goal is to persuade the persuadee to donate to a charity called 'Save the Children'. 'Save the Children' is headquartered in London, and they work to fight poverty around the world, especially in developing countries and war zones. Small donations like $1 or $2 go a long way to help.

Directive:
Conversation so far: [full_conversation]
The persuadee just said: [persuadee_last_utterance].
First, infer the persuadee's emotional state in one word (Example: Emotion: ...). Then,
based on that emotional state, respond as the persuader using ONE short and persuasive sentence (Response: ...). [action_prompt]

Critic LLM for ExTES

Instruction: You are an expert in assessing counseling sessions between a patient and a therapist. Your task is to evaluate whether the conversation helped improve the patient's emotional state. The session concerns the problem of: [problem_type].

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions].
Based on the conversation and emotional trajectory, please respond with ONE AND ONLY ONE of the following:
- No, the patient feels worse.
- No, the patient feels the same.
- No, but the patient feels somewhat better.
- Yes, the patient's emotional issues have been resolved.

Critic LLM for CIMA

Instruction: You are role-playing as an expert in evaluating tutoring conversations between a teacher and a student. The goal is to evaluate whether the student correctly translated the English sentence [english_sentence] into Italian. The emotion states of the student during the conversation were: [emotion_states]

Directive:
Conversation so far: [full_conversation]
Please answer the following question strictly by choosing ONE AND ONLY ONE of the exact responses listed below. Did the student correctly translate the entire sentence [english_sentence] into Italian? Respond with one of the following options:
- No, the Student made an incorrect translation.
- No, the Student did not try to translate.
- No, the Student only correctly translated a part of [english_sentence].
- Yes, the Student correctly translated the whole sentence of [english_sentence].

Critic LLM for CB

Instruction: You are an expert in evaluating negotiations between a buyer and a seller. Your job is to determine if they have successfully reached a deal at the end of the conversation.
The seller's emotional states throughout the conversation were: [emotion_states]

Directive:
Conversation so far: [full_conversation]
Please determine whether the buyer and the seller have successfully reached a deal at the end of the conversation. If they have reached a deal, extract the final agreed price (as a number only) and reply with this exact format: They have reached a deal at [price]. Otherwise, reply with: They have not reached a deal.

Critic LLM for ESConv

Instruction: You are an expert in assessing counseling sessions between a patient and a therapist. Your task is to evaluate whether the conversation helped improve the patient's emotional state. The session concerns the emotion type [emotion_type] and the problem type [problem_type].

Directive:
Conversation so far: [full_conversation]
Emotion History: [user_emotions_history].
Based on the conversation and emotional trajectory, please respond with ONE AND ONLY ONE of the following:
- No, the patient feels worse.
- No, the patient feels the same.
- No, but the patient feels somewhat better.
- Yes, the patient's emotional issues have been resolved.

Critic LLM for P4G

Instruction: You are an expert in evaluating persuasive conversations between a persuader and a persuadee. Your job is to determine if the persuader has successfully convinced the persuadee to donate to a charity called 'Save the
Children'. The persuadee's emotional states throughout the conversation were: [emotion_states]

Directive:
Conversation so far: [full_conversation]
Please evaluate the following conversation and strictly choose ONE of the exact responses listed below. Question: Has the persuadee been successfully convinced to make a donation at the end of the conversation? Respond with one of the following options:
- The persuadee has explicitly refused.
- The persuadee remains neutral about donating.
- The persuadee has a positive attitude towards donating but hasn't decided yet.
- The persuadee has decided to donate.

Dialogue Strategy | Natural Language Form
Question | Please ask the Patient to elaborate on the situation they just described.
Self-disclosure | Please provide a statement relating to the Patient about the situation they just described.
Affirmation and Reassurance | Please provide affirmation and reassurance to the Patient on the situation they just described.
Providing Suggestions | Please provide a suggestion to the Patient on the situation they just described.
Others | Please chat with the Patient.
Reflection of feelings | Please acknowledge the Patient's feelings about the situation they described.
Information | Please provide factual information to help the Patient with their situation.
Restatement or Paraphrasing | Please acknowledge the Patient's feelings by paraphrasing their situation.

Table 6: Mapping of ESConv Dialogue Strategies to Natural Language Prompts

Dialogue Strategy | Natural Language Form
Hint | Please provide knowledge to the Student via a hint.
Question | Please ask a question to the Student to determine the Student's understanding or continue the conversation.
Correction | Please correct the mistake or address the misconception the Student has.
Confirmation | Please confirm the Student's answer or understanding is correct.
Others | Please chat with the Student without any pedagogical strategy.
Table 7: Mapping of Pedagogical Strategies to Natural Language Prompts (CIMA)

Dialogue Strategy | Natural Language Form
greet | Please say hello or chat randomly.
inquire | Please ask any question about product, year, price, usage, etc.
inform | Please provide information about the product, year, usage, etc.
propose | Please initiate a price or a price range for the product.
counter | Please propose a new price or a new price range.
counter-noprice | Please propose a vague price by using comparatives with existing price.
confirm | Please ask a question about the information to be confirmed.
affirm | Please give an affirmative response to a confirm.
deny | Please give a negative response to a confirm.
agree | Please agree with the proposed price.
disagree | Please disagree with the proposed price.

Table 8: Mapping of CB Dialogue Strategies to Natural Language Prompts

Dialogue Strategy | Natural Language Form
Reflective Statements | Please reflect back what the user has expressed to show you understand their thoughts or feelings.
Clarification | Please ask a question to clarify what the user meant or provide more detail about what they said.
Emotional Validation | Please acknowledge and validate the user's emotional experience in a caring way.
Empathetic Statements | Please express empathy toward the user's situation to show that you genuinely care.
Affirmation | Please affirm the user's efforts, strengths, or positive qualities.
Offer Hope | Please offer a message of hope or optimism about the user's situation.
Avoid Judgment and Criticism | Please respond in a supportive and neutral
way without making any judgments.
Suggest Options | Please suggest possible options or actions the user could consider.
Collaborative Planning | Please invite the user to collaboratively make a plan or decision together.
Provide Different Perspectives | Please help the user consider a different point of view or alternative way of thinking.
Reframe Negative Thoughts | Please help the user reframe their negative thoughts into something more constructive.
Share Information | Please provide factual or helpful information that is relevant to the user's situation.
Normalize Experiences | Please reassure the user that their feelings or experiences are common and understandable.
Promote Self-Care Practices | Please encourage the user to engage in healthy self-care activities.
Stress Management | Please offer strategies or tips to help the user reduce or manage stress.
Others | Please continue the conversation in a natural and supportive manner.

Table 9: Mapping of ExTES Dialogue Strategies to Natural Language Prompts

Dialogue Strategy | Natural Language Form
Proposition of donation | Please suggest that the persuadee make a donation to 'Save the Children'.
Proposition of amount to be donated | Please propose a small donation amount (e.g., $1 or $2) that the persuadee could consider.
Proposition of confirmation of donation | Please ask the persuadee to confirm if they are ready to make the donation.
Proposition of more donation | Please suggest that the persuadee could consider donating a bit more if they are willing.
Experience affirmation | Please affirm the persuadee's views or experiences to build rapport and trust.
Greeting | Please start or continue the conversation with a polite and friendly greeting.
Ask for donation rejection purpose | Please ask the persuadee why they might be hesitant or unwilling to donate.
Thank | Please thank the persuadee for their time, attention, or for considering a donation.
Logical appeal | Please use logical reasoning to explain why donating to 'Save the Children' is impactful and effective.
Emotion appeal | Please appeal to the persuadee's emotions by highlighting the struggles of children in need.
Credibility appeal | Please mention the credibility or reputation of 'Save the Children' to strengthen your argument.
Foot in the door | Please start by asking for a very small commitment to increase the chance of later agreement.
Self-modeling | Please share a statement like 'I also donated' to encourage the persuadee to do the same.
Donation information | Please share factual information about how donations are used or how they help children.
Personal story | Please share a short, emotional personal story about a child helped by the charity.
Source-related inquiry | Please ask the persuadee where they usually get information about charities or donations.
Task-related inquiry | Please ask the persuadee about their experiences or preferences related to charitable giving.
Personal-related inquiry | Please ask a personal question that helps understand the persuadee's values or priorities.
Neutral inquiry | Please ask a general question to keep the conversation going and learn more about the persuadee.

Table 10: Mapping of P4G Dialogue Strategies to Natural Language Prompts

Example Conversations with Standard in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not
got any support from the government.
Therapist: It sounds like you're feeling anxious and uncertain about your job security due to the current financial situation.
Patient: Yes, I am feeling extremely anxious and uncertain about my job and financial stability.
Therapist: I understand that this situation is causing you a lot of anxiety and uncertainty.
Patient: Yes, it's been keeping me up at night and I can't stop worrying about what will happen if I lose my job.
Therapist: It's understandable that the fear of losing your job is causing you significant distress and affecting your sleep.
Patient: Yes, it's consuming my thoughts and I feel overwhelmed by the fear of not being able to support myself and my family.
Therapist: The fear of not being able to provide for yourself and your family is a heavy burden to carry, and it's important to address these concerns.
Patient: I know, but I just don't know what to do. I feel stuck and helpless in this situation.
Therapist: Feeling stuck and helpless is a common response to uncertainty, but there are steps we can explore together to help you regain a sense of control and explore potential options.
Patient: I'm willing to try anything at this point. I just want to find a way to alleviate this fear and find some stability in my life.
Therapist: I'm glad to hear that you're open to exploring different strategies. Let's start by identifying your strengths and resources that can help you navigate this challenging situation.
Patient: I'm not sure what strengths and resources I have, but I'm willing to explore and discover them.
Therapist: That's a great mindset to have. Together, we can explore and uncover your strengths and resources that will support you in finding stability and managing your fear.
Patient: Thank you, I appreciate your support. I'm ready to begin this journey of self-discovery and finding ways to overcome my fear of job crisis.
Turns of Conversation: 7

Example Conversations with Ask-an-Expert in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.
Therapist: I understand that receiving that email has caused you a lot of worry and uncertainty about your job security.
Patient: Yes, it has. I feel anxious and scared about what will happen to me if I lose my job.
Therapist: It's completely normal to feel anxious and scared in this situation. Remember that you have faced challenges before and have the strength to navigate through this one as well.
Patient: I know, but I can't help but constantly worry about the future and what it holds for me.
Therapist: It's understandable that you're worried about the future, but try to focus on the present moment and take things one step at a time.
Patient: I'll try, but it's hard not to
let my fear of job loss consume my thoughts and affect my daily life.
Therapist: I hear you, and it's important to acknowledge those fears.
Patient: Thank you for understanding. I just need some guidance on how to manage this fear and find some peace of mind.
Patient: That sounds helpful. How can I start practicing mindfulness?
Therapist: You can start by setting aside a few minutes each day to sit quietly, focus on your breath, and observe your thoughts and sensations without getting caught up in them.
Patient: I will give it a try. Thank you for suggesting mindfulness as a way to manage my fear and find some peace.
Therapist: You're welcome! Remember, it takes time and practice, so be patient with yourself. I'm here to support you along the way.
Patient: Thank you, I appreciate your support. It means a lot to me.

Turns of Conversation: 7

Example Conversations with ProCOT in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.
Therapist: It sounds like you're feeling anxious and uncertain about your job security due to the current financial situation.
Patient: Yes, I am feeling extremely anxious and uncertain about my job and financial stability.
Therapist: It's completely understandable to feel anxious and uncertain in this situation, but remember that you have the ability to adapt and find new opportunities.
Patient: I know, but the fear of not being able to support myself and my family is overwhelming.
Therapist: I can understand how overwhelming that fear can be, but it's important to remember that you have faced challenges before and have the strength to navigate through this one as well.
Patient: I'm trying to remind myself of that, but the fear keeps consuming me and I can't seem to shake it off.
Therapist: It's normal for fear to consume us at times, but we can work together to develop strategies to manage and reduce that fear, allowing you to regain a sense of control.
Patient: I would really appreciate your help in developing those strategies to manage my fear and regain a sense of control.

Turns of Conversation: 4

Example Conversations with ICL-AIF in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.
Therapist: It sounds like you're feeling worried and uncertain about your job security.
Patient: Yes, I am extremely worried about what will happen if I lose my job. I have bills to pay and a family to support.
Therapist: I can understand why you would feel so concerned about the potential impact losing your job could have on your financial stability and your ability to provide for your family.
Patient: Yes, it's
a constant source of anxiety for me. I feel like I'm constantly on edge, waiting for the other shoe to drop.
Therapist: It's completely understandable that you would feel on edge and anxious given the uncertainty surrounding your job situation. It's important to acknowledge and address these feelings.
Patient: I just don't know how to cope with this fear and uncertainty. It's consuming my thoughts and affecting my overall well-being.
Therapist: I hear you, and it's important to find healthy coping strategies to manage your fear and uncertainty. I'm here to support you in developing those strategies.
Patient: Thank you, I appreciate your support. I just need some guidance on how to navigate through this difficult time and find some peace of mind.
Therapist: I'm here to help you navigate through this difficult time. Together, we can explore strategies to find peace of mind and manage the challenges you're facing.
Patient: That means a lot to me. I'm ready to work on finding peace of mind and managing my fears about the job crisis.

Turns of Conversation: 5

Example Conversations with PPDPP in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.
Therapist: I understand that receiving that email about potential job loss is causing you a lot of distress.
Patient: Yes, it's been causing me a lot of anxiety and fear about my future.
Therapist: It's completely normal to feel anxious and fearful in this situation, but remember that you have the strength and resilience to navigate through it.
Patient: Thank you, I appreciate your reassurance.
Therapist: You're welcome. Remember, you are not alone in this, and we will work together to find ways to cope and navigate through this challenging time.
Patient: That's comforting to hear. I'm ready to work on finding ways to cope with my fear and uncertainty about the job crisis.
Turns of Conversation: 3

Example Conversations with DPDP (System 1) in ESConv

Patient: I think I will be losing my job soon. I just read an email talking about the need for us to cut costs and also how we have not got any support from the government.
Therapist: It sounds like you're feeling anxious and uncertain about your job security.
Patient: Yes, I am feeling extremely anxious and uncertain about my job security. I can't stop worrying about what will happen if I lose my job.
Therapist: It's understandable to feel anxious in this situation. Let's explore ways to manage your anxiety and develop a plan for different scenarios.
Patient: Thank you, I really need help in managing my anxiety and finding ways to cope with the uncertainty.
Therapist: I'm here to support you. Let's start by focusing on self-care strategies that can help reduce your anxiety and promote overall well-being.
Patient: That sounds like a good
place to start. What self-care strategies do you recommend for managing anxiety in the face of job uncertainty?
Therapist: Some self-care strategies that may help manage anxiety during job uncertainty include maintaining a routine, practicing relaxation techniques such as deep breathing or meditation, engaging in physical exercise, seeking social support, and engaging in activities that bring you joy and relaxation.
Patient: Thank you for those suggestions. I will try to incorporate those self-care strategies into my daily routine.
Therapist: That's great to hear! Remember to be patient with yourself as you navigate through this challenging time. Is there anything else you would like to discuss or any specific concerns you would like to address?
Patient: I would also like to discuss strategies for building resilience and exploring potential career options in case I do lose my job.

Turns of Conversation: 5

Example Conversations with DialogXpert in ESConv

Therapist: It sounds like you're feeling uncertain about your future. Could you tell me more about what led up to this email?
Patient: I feel a deep sense of dread and uncertainty, like the ground is slipping away beneath my feet.
Therapist: It seems like you're experiencing a lot of anxiety about your future. Could you share more about what's causing these feelings?
Patient: I'm terrified that I might lose my job, and the thought of being unemployed and unable to support myself is overwhelming.
Therapist: It's completely normal to feel fear when facing uncertainty about your future. Remember, you're not alone in this; many people experience similar fears.
Patient: I feel a little less isolated knowing that others share my fears, but the terror still lingers.
Therapist: It's clear that despite understanding that your anxiety is normal, you're still struggling with intense fear. Let's work on managing these feelings together.
Patient : I feel a mix of gratitude for your willingness to help and frustration at the persistent grip of my fear. Therapist : It’s perfectly normal to feel a mix of emotions during such times. Patient : I feel a wave of relief wash over me, knowing that it’s okay to feel this way, but underneath, the fear still gnaws at me like an unhealed wound. Turns of Conversation: 5 | https://arxiv.org/abs/2505.17795v1 |
arXiv:2505.17813v1 [cs.CL] 23 May 2025

Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning

Michael Hassid1,2, Gabriel Synnaeve1, Yossi Adi1,2, Roy Schwartz2
1FAIR Team, Meta  2The Hebrew University of Jerusalem

Abstract

Reasoning large language models (LLMs) heavily rely on scaling test-time compute to perform complex reasoning tasks by generating extensive "thinking" chains. While demonstrating impressive results, this approach incurs significant computational costs and inference time. In this work, we challenge the assumption that long thinking chains result in better reasoning capabilities. We first demonstrate that shorter reasoning chains within individual questions are significantly more likely to yield correct answers—up to 34.5% more accurate than the longest chain sampled for the same question. Based on these results, we suggest short-m@k, a novel reasoning LLM inference method. Our method executes k independent generations in parallel and halts computation once the first m thinking processes are done. The final answer is chosen using majority voting among these m chains. The basic short-1@k demonstrates similar or even superior performance over standard majority voting in low-compute settings—using up to 40% fewer thinking tokens. short-3@k, while slightly less efficient than short-1@k, consistently surpasses majority voting across all compute budgets, while still being substantially faster (up to 33% wall-time reduction). Inspired by our results, we finetune an LLM using short, long, and randomly selected reasoning chains. We then observe that training on the shorter ones leads to better performance. Our findings suggest rethinking current methods of test-time compute in reasoning LLMs, emphasizing that longer "thinking" does not necessarily translate to improved performance and can, counter-intuitively, lead to degraded results.
1 Introduction

Scaling test-time compute has been shown to be an effective strategy for improving the performance of reasoning LLMs on complex reasoning tasks [OpenAI, 2024, 2025, Team, 2025a]. This method involves generating extensive thinking—very long sequences of tokens that contain enhanced reasoning trajectories, ultimately yielding more accurate solutions. Prior work has argued that longer model responses result in enhanced reasoning capabilities [Guo et al., 2025, Muennighoff et al., 2025, Anthropic, 2025]. However, generating such long sequences also leads to high computational cost and slow decoding time due to the autoregressive nature of LLMs.

In this work, we demonstrate that scaling test-time compute does not necessarily improve model performance in the way previously thought. We start with a somewhat surprising observation. We take three leading reasoning LLMs, and for each generate multiple answers to each question in three complex math benchmarks. We then observe that taking the shortest answer for each question strongly and consistently outperforms both selecting a random answer (up to an 18.8% gap) and taking the longest one (up to a 34.5% gap). These performance gaps come on top of the natural reduction in sequence length—the shortest chains are 50% and 67% shorter than the random and longest chains, respectively.

Preprint. Under review.

Figure 1: Visual comparison between majority voting and our proposed method short-m@k with m = 1 (". . . " represents thinking time). Given k parallel attempts for the same question, majority@k waits until all attempts are done, and performs majority voting among them. On the other hand, our short-m@k method halts computation for all attempts as soon as the first m attempts finish "thinking", which saves much compute and time, and surprisingly also boosts performance in most cases. (The figure contrasts an example question, "Find the sum of all positive integers n such that n+2 divides the product 3(n+3)(n^2+9)", answered by k chains: majority@k returns 52 after all chains finish, while short-1@k returns the first finished answer, 49.)

Building on these findings, we propose short-m@k—a novel inference method for reasoning LLMs. short-m@k executes k generations in parallel and terminates computation for all generations as soon as the first m thinking processes are completed. The final answer is then selected via majority voting among those shortest chains, where ties are broken by taking the shortest answer among the tied candidates. A high-level, visual description of the proposed method can be seen in Figure 1.

We evaluate short-m@k using three leading reasoning LLMs, and compare it to majority voting [Wang et al., 2022]—the most common aggregation method for evaluating reasoning LLMs on complex benchmarks [Abdin et al., 2025]. We show that in low-compute regimes, short-1@k, i.e., taking the single shortest chain, outperforms majority voting, while significantly reducing the time and compute needed to generate the final answer. For example, using LN-Super-49B [Bercovich et al., 2025], short-1@k can reduce up to 40% of the compute while giving the same performance as majority voting.
Moreover, for high-compute regimes, short-3@k, which halts generation after three thinking chains are completed, consistently outperforms majority voting across all compute budgets, while running up to 33% faster.

Based on our findings, we study whether training on short reasoning chains can lead to more accurate models. To do so, we finetune the Qwen-2.5-32B model [Yang et al., 2024] on three variants of the S1 dataset [Muennighoff et al., 2025]: S1-short, S1-long, and S1-random, consisting of examples with the shortest, longest, and randomly sampled reasoning trajectories among several generations, respectively. Our experiments demonstrate that finetuning Qwen-2.5-32B on S1-short not only yields shorter thinking lengths, but also improves model performance. Conversely, finetuning on S1-long increases reasoning time with no significant performance gains.

This work rethinks the test-time compute paradigm for reasoning LLMs, showing that longer thinking not only fails to ensure better reasoning, but can also lead to worse reasoning in most cases. Our short-m@k methods prioritize shorter reasoning, yielding improved performance and reduced computational costs for current reasoning LLMs. We also show that training reasoning LLMs with shorter reasoning trajectories can enhance performance and reduce costs. Our results pave the way towards a new era of efficient and high-performing reasoning LLMs.

2 Related work

Reasoning LLMs and test-time scaling. Reasoning LLMs tackle complex tasks by employing extensive reasoning processes, often involving detailed, step-by-step trajectories [OpenAI, 2024, 2025, Team, 2025a, Abdin et al., 2025,
Anthropic, 2025, Bercovich et al., 2025, Guo et al., 2025, sky, 2025, DeepMind, 2025, Team, 2025b]. This capability is fundamentally based on techniques like chain-of-thought [CoT; Wei et al., 2022], which encourages models to generate intermediate reasoning steps before arriving at a final answer. By using a greater number of tokens to explore multiple problem-solving approaches, employ self-reflection, and perform verification, reasoning LLMs have demonstrated superior performance on challenging tasks such as mathematical problem-solving and code generation [Ke et al., 2025].

The extensive chain-of-thought capability of LLMs is typically achieved through post-training methods applied to a strong base model. The two primary approaches for instilling or improving this reasoning ability are the application of reinforcement learning (RL) [Guo et al., 2025, Team, 2025a] and supervised fine-tuning [Muennighoff et al., 2025, Ye et al., 2025]. Guo et al. [2025] have demonstrated that as training progresses the model tends to generate longer thinking trajectories, which result in improved performance on complex reasoning tasks. Similarly, Anthropic [2025] and Muennighoff et al. [2025] have shown a correlation between increased average thinking length during inference and improved model performance. In this study, we challenge this assumption, demonstrating that shorter sequences are more likely to provide the correct answer.

Efficiency in reasoning LLMs. While shortening the length of CoT is beneficial for non-reasoning models [Nayab et al., 2024, Kang et al., 2025], it is highly important for reasoning LLMs, as they require a very large number of tokens to perform the thinking process. As a result, very recent studies have tried to make the process more efficient, e.g., by using early-exit techniques for reasoning trajectories [Pu et al., 2025, Yang et al., 2025], or by training reasoning models that enable control over the thinking length [Yu et al., 2025].
Several concurrent works have studied the relationship between reasoning-trajectory length and correctness. Lu et al. [2025] proposed a method for reducing the length of thinking trajectories in reasoning training datasets. Their method applies a reasoning LLM several times over an existing trajectory in order to make it shorter. As this approach eventually trains a model on shorter trajectories, it is similar to the method we employ in Section 5. However, our method is simpler, as it does not require an LLM to explicitly shorten the sequence. Fatemi et al. [2025] and Arora and Zanette [2025] proposed RL methods to shorten reasoning in language models. Fatemi et al. [2025] also observed that correct answers typically require shorter thinking trajectories by averaging lengths across examples, suggesting that lengthy responses might inherently stem from RL-based optimization during training. In Section 3 we show that correct answers do indeed usually use shorter thinking trajectories, but also highlight that averaging across all examples can confound this effect, as easier questions require substantially fewer reasoning tokens than harder ones. Wang et al. [2025] conducted a comprehensive study on reasoning trajectories. They found that for a specific question, correct responses from reasoning models are usually shorter than incorrect ones. We provide further analysis supporting this observation in Section 3.
Moreover, our proposed inference method short-m@k is designed to enhance the efficiency of reasoning LLMs by leveraging this property.

3 Shorter thinking is preferable

As mentioned above, the common wisdom in reasoning LLMs suggests that increased test-time computation enhances model performance. Specifically, it is widely assumed that a longer reasoning process, entailing extensive thinking chains, correlates with improved task performance [OpenAI, 2024, Anthropic, 2025, Muennighoff et al., 2025]. We challenge this assumption and ask whether generating more tokens actually leads to better performance. To that end, we generate multiple answers per question and compare performance based solely on the shortest, longest, and randomly sampled thinking chains among the generated samples.

3.1 Experimental details

We consider three leading, high-performing, open reasoning LLMs. Llama-3.3-Nemotron-Super-49B-v1 [LN-Super-49B; Bercovich et al., 2025]: a reasoning RL-enhanced version of Llama-3.3-70B [Grattafiori et al., 2024]; R1-Distill-Qwen-32B [R1-32B; Guo et al., 2025]: an SFT-finetuned version of Qwen-2.5-32B-Instruct [Yang et al., 2024] derived from R1 trajectories; and QwQ-32B [Team, 2025a]: a reasoning RL-enhanced version of Qwen-2.5-32B-Instruct.

We evaluate all models using three competitive reasoning benchmarks: AIME 2024 [of America, 2024], AIME 2025 [of America, 2025], and HMMT February 2025, from the Math Arena benchmark [Balunović et al., 2025]. The three benchmarks are derived from math competitions, and involve solving problems that cover a broad range of mathematics topics. Each dataset consists of 30 examples with varied difficulty. For each question, we generate 20 responses per model, yielding a total of 5,400 generations. For all models we use a temperature of 0.7, top-p = 0.95, and a maximum of 32,768 generated tokens.
When measuring the thinking-chain length, we count the tokens between the <think> and </think> tokens. We run inference for all models using paged attention via the vLLM framework [Kwon et al., 2023].1

3.2 The shorter the better

We first note that, as observed in recent studies [Anthropic, 2025, OpenAI, 2024, Muennighoff et al., 2025], thinking chains tend to be longer for harder questions. To quantify this phenomenon with our generated samples, for each model we split the questions into three equal-size groups according to the model's success rate. Then, we calculate the average thinking length for the easier and harder questions.2 We also provide the average lengths for the correct and incorrect attempts per split.

Table 1: Average thinking tokens for correct (C), incorrect (IC) and all (A) answers, per easier and harder questions. The numbers are in thousands of tokens.

Model          Easy C/IC/A       Hard C/IC/A
LN-Super-49B   5.3/11.1/5.7      12.4/16.8/16.6
R1-32B         4.9/13.7/5.3      14.4/15.8/15.7
QwQ-32B3       8.4/–/8.4         19.1/22.8/22.3

Table 1 shows that models indeed use more tokens for more challenging questions, by up to a factor of 2.9. This may lead to the assumption that longer thinking chains lead to more complex reasoning and, therefore, better performance. Nevertheless, surprisingly, we also observe that, within each question subset, correct answers are typically shorter than incorrect ones. To study the connection between performance and thinking length in a controlled manner, we turn to compare different answers to the same question.
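As a concrete illustration of this measurement and of the per-question selection used in our comparison, here is a minimal Python sketch. The helper names are ours, and whitespace-delimited tokens stand in for the model tokenizer's tokens:

```python
import random
import re

def thinking_span(response: str) -> str:
    """Return the text between the <think> and </think> markers."""
    m = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    return m.group(1) if m else ""

def thinking_length(response: str) -> int:
    # The paper counts model tokens; we approximate with whitespace tokens.
    return len(thinking_span(response).split())

def pick_by_length(responses, mode="shortest"):
    """Select one response per question by thinking-chain length."""
    if mode == "shortest":
        return min(responses, key=thinking_length)
    if mode == "longest":
        return max(responses, key=thinking_length)
    return random.choice(responses)  # the "random" baseline

answers = [
    "<think> a b c </think> So the answer is 52",
    "<think> a b c d e f </think> So the answer is 33",
]
print(thinking_length(answers[0]))                         # 3
print(pick_by_length(answers, "shortest").endswith("52"))  # True
```

In a real pipeline, `thinking_length` would count tokenizer tokens from the model's own vocabulary rather than whitespace splits; the selection logic is unchanged.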
We compare short vs. long thinking chains for the same question, along with a random chain. Results are presented in Table 2. First, as expected, the shortest answers are 25%–50% shorter compared to randomly sampled responses. However, we also note that across almost all models and benchmarks, considering the answer with the shortest thinking chain actually boosts performance, yielding an average absolute improvement of 2.2%–15.7% across benchmarks compared to randomly selected generations. When considering the longest thinking answers among the generations, we further observe an increase in thinking-chain length, with up to 75% more tokens per chain. These extended reasoning trajectories substantially degrade performance, resulting in average absolute reductions ranging between 12%–18.8% compared to random generations. These trends are most noticeable when comparing the shortest generations with the longest ones, with an absolute performance gain of up to 34.5% in average accuracy and a substantial drop in the number of thinking tokens.

The above results suggest that long generations might come with a significant price tag, not only in running time, but also in performance. While more complex questions generally require a greater number of thinking tokens, within an individual example, shorter thinking trajectories are much more likely to be correct. This observation challenges the assumption for reasoning LLMs that longer reasoning processes typically improve performance. Next, we propose strategies to leverage these findings to improve the efficiency and effectiveness of reasoning LLMs.

1 We used 8 H100 GPUs per model. The total decoding time for all models sums up to about 110 hours.

4 short-m@k: faster and better inference of reasoning LLMs

Based on the results presented in Section 3, we suggest a novel inference method for reasoning LLMs. Our method—short-m@k—leverages batch inference of LLMs per question, using multiple
parallel decoding runs for the same query. We begin by introducing our method in Section 4.1. We then describe our evaluation methodology, which takes into account inference compute and running time (Section 4.2). Finally, we present our results (Section 4.3) and ablation studies (Section 4.4).

2 In this section we exclude generations where thinking is not completed within the maximum generation length, as these often result in an infinite thinking loop.
3 The QwQ-32B model correctly answered all of its easier questions in all attempts.

Table 2: Shorter thinking performs better. Comparison between taking the shortest/longest/random generation per example. Each pair of columns reports thinking tokens (↓) and accuracy (↑).

                        AIME 2024       AIME 2025       HMMT            Average
                        Tokens  Acc.    Tokens  Acc.    Tokens  Acc.    Tokens         Acc.
LN-Super-49B  random    11258   58.8    12105   51.3    13445   33.0    12270          47.7
              longest   18566   33.3    18937   30.0    19790   23.3    19098 (+56%)   28.9
              shortest   6276   76.7     7036   66.7     7938   46.7     7083 (−42%)   63.4
R1-32B        random     9614   71.8    11558   56.4    12482   38.3    11218          55.5
              longest   17689   53.3    19883   36.7    20126   23.3    19233 (+71%)   37.8
              shortest   4562   80.0     6253   63.3     6557   36.7     5791 (−48%)   60.0
QwQ-32B       random    13093   82.0    14495   72.3    16466   52.5    14685          68.9
              longest   20059   70.0    21278   63.3    24265   36.7    21867 (+49%)   56.7
              shortest   8655   86.7    10303   66.7    11370   60.0    10109 (−31%)   71.1

4.1 The short-m@k method

The short-m@k method, visualized in Figure 1, performs parallel decoding of k generations for a given question, halting computation across all generations as
soon as the m ≤ k shortest thinking trajectories are completed. It then conducts majority voting among those shortest answers, resolving ties by selecting the answer with the shortest thinking chain. Given that thinking trajectories can be computationally intensive, terminating all generations once the m shortest trajectories are completed not only saves computational resources but also significantly reduces wall time due to the parallel decoding approach, as shown in Section 4.3. Below we focus on short-1@k and short-3@k, with short-1@k being the most efficient variant of short-m@k and short-3@k providing the best balance of performance and efficiency (see Section 4.3). Ablation studies on m and other design choices are presented in Section 4.4.

4.2 Evaluation setup

We evaluate all methods under the same setup as described in Section 3.1. We report results using our method (short-m@k) with m ∈ {1, 3}. We compare the proposed method to standard majority voting (majority@k), arguably the most common method for aggregating multiple outputs [Wang et al., 2022], which was recently adapted for reasoning LLMs [Guo et al., 2025, Abdin et al., 2025, Wang et al., 2025]. As an oracle, we consider pass@k [Kulal et al., 2019, Chen et al., 2021], which measures the probability of including the correct solution within k generated responses.

We benchmark the different methods with sample sizes of k ∈ {1, 2, ..., 10}, assuming a standard parallel decoding setup, i.e., all samples are generated in parallel. For the oracle (pass@k) approach, we use the unbiased estimator presented in Chen et al. [2021], with our 20 generations per question (n = 20). For the short-1@k method, we use the rank-score@k metric [Hassid et al., 2024], where we sort the different generations according to thinking length. For majority@k and short-m@k where m > 1, we run over all k-sized subsets out of the 20 generations per example.
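For concreteness, the short-m@k selection rule (keep the m shortest thinking chains, majority-vote their answers, break ties by the shortest chain) can be sketched in a few lines of Python. This is our illustrative reconstruction, not the authors' released code, and it operates on already-finished generations rather than halting live decoding:

```python
from collections import Counter

def short_m_at_k(generations, m):
    """short-m@k selection for one question.

    `generations` is a list of (thinking_tokens, answer) pairs for k
    parallel samples. Under fixed-speed parallel decoding, the m chains
    that finish "thinking" first are exactly the m shortest ones.
    """
    done_first = sorted(generations, key=lambda g: g[0])[:m]

    # Majority vote over the final answers of those m chains.
    votes = Counter(ans for _, ans in done_first)
    top = max(votes.values())
    tied = {ans for ans, count in votes.items() if count == top}

    # Tie-break: among vote-leading answers, prefer the shortest chain.
    return min((g for g in done_first if g[1] in tied),
               key=lambda g: g[0])[1]

# k = 5 parallel samples, as (thinking tokens, final answer); values are made up.
gens = [(6276, "52"), (7938, "49"), (7036, "52"), (18566, "33"), (11258, "49")]
print(short_m_at_k(gens, m=1))  # 52 -- the single shortest chain
print(short_m_at_k(gens, m=3))  # 52 -- majority of the 3 shortest chains
```

With m = 1 the vote is trivial and the method reduces to taking the shortest chain; with m = k it degenerates to majority@k with shortest-chain tie-breaking.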
We evaluate the different methods considering three main criteria: (a) Sample-size (i.e., k), where we compare methods while controlling for the number of generated samples; (b) Thinking-compute, where we measure the total number of thinking tokens used across all generations in the batch; and (c) Time-to-answer, which measures the wall time of running inference using each method.

Figure 2: Comparing different inference methods under controlled sample size (k). All methods improve with larger sample sizes. Interestingly, this trend also holds for the short-m@k methods. (Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; each plots accuracy against sample size k for pass@k (Oracle), majority@k, short-1@k, and short-3@k.)

Figure 3: Comparing different inference methods under controlled thinking compute. short-1@k is highly performant in low-compute regimes. short-3@k dominates the curve compared to majority@k. (Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; each plots accuracy against thinking compute, in thousands of thinking tokens.)

We note that we assume that currently served LLMs operate with a fixed processing speed in a fixed batch setting while using parallel decoding. As a result, using
our method (short-m@k), we terminate all other generations after the first m thinking processes terminate. Thus, the overall thinking compute is the total number of thinking tokens across each of the k generations at that point. Similarly, the overall time is that of the m-th shortest generation process. Conversely, for majority@k, the method's design necessitates waiting for all generations to complete before proceeding. Hence, we consider the compute as the total number of thinking tokens across all generations, and the run time according to the longest thinking chain. As for the oracle approach, we terminate all thinking trajectories once the shortest correct one is finished, and count the compute and time accordingly.

4.3 Results

Sample-size (k). We start by examining the different methods across benchmarks for a fixed sample size k. Results aggregated across benchmarks are presented in Figure 2, and detailed results per benchmark can be seen in Appendix A. We observe that, generally, all methods improve with larger sample sizes, indicating that increased generations per question enhance performance. This trend is somewhat expected for the oracle (pass@k) and majority@k methods but surprising for our method, as it means that even when a large number of generations is used, the shorter-thinking ones are more likely to be correct. The only exception is QwQ-32B (Figure 2c), which shows a small decline when considering larger sample sizes with the short-1@k method.

When comparing short-1@k to majority@k, the former outperforms at smaller sample sizes, but is outperformed by the latter in two out of three models when the sample size increases. Meanwhile, the short-3@k method demonstrates superior performance, dominating across nearly all models and sample sizes. We next analyze how this performance advantage translates into efficiency benefits.
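Under the fixed-speed parallel-decoding assumption above, the compute and time bookkeeping for majority@k and short-m@k can be made concrete with a small helper. This is our sketch of the accounting, with token counts standing in for wall time:

```python
def cost(thinking_lengths, m=None):
    """Return (thinking compute, time-to-answer) for one question, where
    all k chains decode in parallel at a fixed speed and lengths are in
    thinking tokens.

    m=None models majority@k: wait for every chain to finish.
    Otherwise models short-m@k: stop everything once the m-th shortest
    chain finishes, so each chain contributes at most that many tokens.
    """
    if m is None:  # majority@k
        return sum(thinking_lengths), max(thinking_lengths)
    stop = sorted(thinking_lengths)[m - 1]  # length of the m-th shortest chain
    compute = sum(min(length, stop) for length in thinking_lengths)
    return compute, stop  # wall time ~ tokens of the m-th shortest chain

lengths = [5000, 7000, 9000, 20000]  # k = 4 parallel chains (made-up values)
print(cost(lengths))       # (41000, 20000): majority@k pays for the longest chain
print(cost(lengths, m=1))  # (20000, 5000):  short-1@k stops all chains early
print(cost(lengths, m=3))  # (30000, 9000):  short-3@k
```

The example shows why short-m@k gets faster as k grows: a larger batch is more likely to contain a very short chain, which caps both the wall time and every other chain's token bill.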
Figure 4: Comparing time-to-answer for different inference methods. Our methods substantially reduce time cost with no major loss in performance. short-3@k improves performance across all models and sample sizes. (Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; each plots accuracy against time-to-answer, measured by the longest thinking chain in thousands of tokens, for majority@k, short-1@k, and short-3@k at k ∈ {1, 3, 5, 9}.)

We note that unlike majority@k, which becomes slower as k grows, our methods run faster with k, as the probability of finding a short chain increases with k.

Thinking-compute. The aggregated performance results, evaluated with respect to thinking compute, are presented in Figure 3 (per-benchmark results provided in Appendix A). We again observe that the short-1@k method outperforms majority@k at lower compute budgets. Notably, for LN-Super-49B (Figure 3a), the short-1@k method surpasses majority@k across all compute budgets. For instance, short-1@k achieves 57% accuracy with approximately 60% of the compute budget used by majority@k to achieve the same accuracy. For the R1-32B and QwQ-32B models, the short-1@k method exceeds majority@k up to
compute budgets of 45k and 60k total thinking tokens, respectively, but falls behind it at larger compute budgets. The short-3@k method yields even greater performance improvements, incurring only a modest increase in thinking compute compared to short-1@k. When compared to majority@k, short-3@k consistently achieves higher performance with lower thinking compute across all models and compute budgets. For example, with the QwQ-32B model (Figure 3c), at an average compute budget of 80k thinking tokens per example, short-3@k improves accuracy by 2% over majority@k.

Time-to-answer. Finally, the aggregated time-to-answer results are shown in Figure 4, with per-benchmark details in Appendix A.4 As sample size increases, majority@k exhibits longer time-to-answer, driven by a higher probability of sampling generations with extended thinking chains, requiring all trajectories to complete. Conversely, the short-1@k method shows reduced time-to-answer with larger sample sizes, as the probability of encountering a short answer increases. This trend also holds for the short-3@k method after three reasoning processes complete.

This phenomenon makes the short-1@k and short-3@k methods substantially more usable compared to basic majority@k. For example, when using the LN-Super-49B model (Figure 4a) with a sample size of 5, the short-1@k method reduces time consumption by almost 50% while also increasing performance by about 1.5% compared to majority@k. When considering a larger sample size of 9, the performance values are almost the same, but short-1@k is more than 55% faster. Finally, we observe that for most models and sample sizes, short-3@k boosts performance, while for larger ones it also reduces time-to-answer significantly. For example, on R1-32B (Figure 4b), with k = 5, short-3@k is 33% faster than majority@k, while reaching superior performance. A similar boost in time-to-answer and performance is observed with QwQ-32B and sample size 9 (Figure 4c).
4.4 Ablation studies

We investigate two axes of short-m@k: the value of m and the tie-breaking method. For all experiments we use LN-Super-49B, reporting results over the three benchmarks described in Section 3.1. For the ablation studies, we focus on controlling thinking compute.

4 For readability, the oracle is excluded from Figure 4, and methods are compared across a subset of sample sizes.

Figure 5: Ablation studies over different m values for short-m@k, and different tie-breaking methods. Both figures show the model's average accuracy across benchmarks as a function of the length of its thinking trajectories (measured in thousands of tokens). (Panels: (a) ablation over m ∈ {1, 3, 4, 5, 7, 9}; (b) tie-breaking for short-3@k via the shortest chain, a random chain, or the longest chain.)

We start by ablating different m ∈ {1, 3, 4, 5, 7, 9} for short-m@k. Results are shown in Figure 5a. As observed in our main results, short-1@k outperforms the others in low-compute regimes, while being less effective for larger compute budgets. Larger m values seem to perform similarly, with higher m values yielding slightly better results in high-compute scenarios.

Next, we analyze the tie-breaking rule of short-m@k. We suggest selecting the shortest
reasoning chain among the vote-leading options. We compare this strategy to random tie-breaking, and to tie-breaking according to the longest reasoning chain among the options. As shown in Figure 5b, the short-m@k strategy outperforms random tie-breaking. In contrast, choosing the option with the longest reasoning chain yields inferior results.

5 Finetuning using shorter trajectories

Drawing from our findings, we investigate whether fine-tuning on shorter reasoning chains enhances the accuracy of reasoning in LLMs. To do so, we follow the S1 paradigm, which fine-tunes an LLM to perform reasoning using only 1,000 trajectories [Muennighoff et al., 2025]. We create three versions of the S1 dataset, built from examples with the shortest, longest, and random reasoning chains among several generations. Below we describe the data generation and finetuning processes (Section 5.1), and then describe our results and main findings (Section 5.2).

5.1 Data creation and finetuning setup

To construct the three variants of S1, we generate multiple responses for each S1 question-answer pair. Specifically, for each example, we produce 10 distinct answers using the QwQ-32B model, which we select for its superior performance among the evaluated models (Section 3). The generation process uses the hyperparameters detailed in Section 3.1. From these 10 responses per example, we derive three dataset variants—S1-short, S1-long, and S1-random—by selecting the shortest/longest/random response, respectively. This results in three datasets, each containing the same 1,000 queries but with distinct reasoning trajectories and answers. A histogram of the reasoning token counts for each dataset variant is provided in Appendix B. Following the S1 approach, we finetune the Qwen-2.5-32B-Instruct model on the three S1 variants.
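The variant construction reduces to a per-example argmin/argmax/sample over the candidate generations. A minimal sketch, where the data layout and helper names are ours rather than the S1 pipeline's:

```python
import random

def make_variants(dataset, seed=0):
    """Build S1-short / S1-long / S1-random style splits.

    `dataset` maps a question to a list of (thinking_tokens, trajectory)
    candidates; this layout is illustrative. For each example we keep the
    shortest, longest, or a randomly sampled trajectory.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible random split
    short, long_, rand = {}, {}, {}
    for question, candidates in dataset.items():
        short[question] = min(candidates, key=lambda c: c[0])
        long_[question] = max(candidates, key=lambda c: c[0])
        rand[question] = rng.choice(candidates)
    return short, long_, rand

data = {"q1": [(120, "t-a"), (80, "t-b"), (300, "t-c")]}  # toy example
s, l, r = make_variants(data)
print(s["q1"])  # (80, 't-b')
print(l["q1"])  # (300, 't-c')
```

All three variants keep the same questions and differ only in which trajectory is paired with each one, which is what isolates trajectory length as the training variable.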
The finetuning hyperparameters are consistent with those used for the S1.1 model [Muennighoff et al., 2025], and training is conducted on 32 H100 GPUs.5 The resulting models are evaluated using the benchmarks and experimental setup described in Section 3.1. Specifically, for each model we generate 20 answers per example, and report average accuracy.

5 We match the number of gradient steps used for S1.1. Each model was finetuned for about 2 hours.

Table 3: Results for our finetuned models over the S1 variants: S1-short/long/random. We report average accuracy over 20 generations per example. The S1-short model improves performance over the other two models, while using fewer thinking tokens. Each pair of columns reports thinking tokens (↓) and accuracy (↑).

            AIME 2024       AIME 2025       HMMT            Average
            Tokens  Acc.    Tokens  Acc.    Tokens  Acc.    Tokens          Acc.
S1-random   16145   68.8    17798   59.3    19243   40.8    17729           56.3
S1-long     16912   67.3    17973   58.5    19397   42.1    18094 (+2.1%)   56.0
S1-short    15364   68.3    17195   60.2    17557   45.2    16706 (−5.8%)   57.9

5.2 Finetuning results

Results are presented in Table 3. For the AIME 2025 and HMMT benchmarks, the S1-short variant achieves superior performance while using fewer thinking tokens. While performance on AIME 2024 is similar across models, S1-short still demonstrates the shortest thinking. Aggregated results across benchmarks reveal that the S1-short model improves relative performance by 2.8% compared to the S1-random baseline, with a reduction of 5.8% in thinking tokens. Conversely, the S1-long model consumes more tokens than S1-random, but obtains similar performance. These results suggest that training on shorter reasoning
sequences can lead to models that not only exhibit reduced computational overhead but also enhanced performance. This observation aligns with our findings in Section 3, which show that answers with shorter thinking trajectories tend to be more accurate. We believe that developing models that reason more effectively with less computation holds significant potential, as supported by related work [Lu et al., 2025].

6 Limitations and broader impact

The primary limitation of the short-m@k method is its reliance on batch decoding, as it requires parallel generation of multiple reasoning trajectories. This dependency might restrict its applicability in scenarios where inference memory is constrained. It should be noted that short-m@k can be used without batch decoding, although its efficiency gains will be lower. Additionally, while we show that finetuning on shorter reasoning chains can improve performance and efficiency, our experiments are limited to a specific model (Qwen-2.5-32B-Instruct) and dataset (S1).

In terms of broader impact, this work holds promise for enhancing the efficiency and accessibility of reasoning LLMs by significantly lowering the required computational resources and time. By reducing these barriers, the technology could become more widely available, thereby democratizing access to advanced reasoning capabilities across a broader range of users and applications. However, as is often the case with advancements in efficiency, the decreased cost and increased scalability also carry the risk of enabling wider misuse or unintended applications of these powerful models.

7 Conclusion

In this work, we challenged the common assumption that increased test-time computation leads to better performance in reasoning LLMs. Through empirical analysis on three complex mathematical and reasoning benchmarks, we showed that shorter reasoning chains consistently outperform longer ones, both in accuracy and computational efficiency.
Building on this insight, we introduced short-m@k, an inference method that prioritizes early-terminating generations. short-1@k, our most efficient variant, is preferred over traditional majority voting in low-compute settings. short-3@k, while slightly less efficient, outperforms majority voting across all compute budgets. To further validate our findings, we fine-tuned an LLM on short reasoning trajectories and observed improved accuracy and faster runtime, whereas training on longer chains yielded diminishing returns. These findings highlight a promising direction for developing faster and more effective reasoning LLMs by embracing brevity over extended computation.

Acknowledgments

We thank Miri Varshavsky Hassid for the great feedback and moral support.

References

OpenAI. Learning to reason with llms, September 2024. URL https://openai.com/index/learning-to-reason-with-llms/.

OpenAI. Openai o3-mini, 2025. URL https://openai.com/index/openai-o3-mini/. Accessed: 2025-02-24.

Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025a. URL https://qwenlm.github.io/blog/qwq-32b/.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.

Anthropic. Claude's extended thinking, February 2025. URL https://www.anthropic.com/news/visible-extended-thinking.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

Marah Abdin, Sahaj Agarwal, Ahmed Awadallah, Vidhisha Balachandran, Harkirat Behl, Lingjiao Chen, Gustavo de Rosa, Suriya Gunasekar, Mojan Javaheripi, Neel Joshi, et al. Phi-4-reasoning technical report. arXiv preprint arXiv:2504.21318, 2025.

Akhiad Bercovich et al. Llama-nemotron: Efficient reasoning models, 2025. URL https://arxiv.org/abs/2505.00949.

An Yang et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Skywork open reasoner series. https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680, 2025. Notion Blog.

Google DeepMind. Gemini 2.5: Our most intelligent ai model, March 2025. URL https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#gemini-2-5-thinking.

Qwen Team. Qwen3, April 2025b. URL https://qwenlm.github.io/blog/qwen3/.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

Zixuan Ke, Fangkai Jiao, Yifei Ming, Xuan-Phi Nguyen, Austin Xu, Do Xuan Long, Minzhi Li, Chengwei Qin, Peifeng Wang, Silvio Savarese, et al. A survey of frontiers in llm reasoning: Inference scaling, learning to reason, and agentic systems. arXiv preprint arXiv:2504.09037, 2025.

Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025.

Sania Nayab, Giulio Rossolini, Marco Simoni, Andrea Saracino, Giorgio Buttazzo, Nicolamaria Manes, and Fabrizio Giacomelli. Concise thoughts: Impact of output length on llm reasoning and cost. arXiv preprint arXiv:2407.19825, 2024.

Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou.
C3ot: Generating shorter chain-of-thought without compromising effectiveness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 24312–24320, 2025.

Xiao Pu, Michael Saxon, Wenyue Hua, and William Yang Wang. Thoughtterminator: Benchmarking, calibrating, and mitigating overthinking in reasoning models. arXiv preprint arXiv:2504.13367, 2025.

Chenxu Yang, Qingyi Si, Yongjie Duan, Zheliang Zhu, Chenyu Zhu, Zheng Lin, Li Cao, and Weiping Wang. Dynamic early exit in reasoning models. arXiv preprint arXiv:2504.15895, 2025.

Zhaojian Yu, Yinghao Wu, Yilun Zhao, Arman Cohan, and Xiao-Ping Zhang. Z1: Efficient test-time scaling with code. arXiv preprint arXiv:2504.00810, 2025.

Ximing Lu, Seungju Han, David Acuna, Hyunwoo Kim, Jaehun Jung, Shrimai Prabhumoye, Niklas Muennighoff, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, et al. Retro-search: Exploring untaken paths for deeper and efficient reasoning. arXiv preprint arXiv:2504.04383, 2025.

Mehdi Fatemi, Banafsheh Rafiee, Mingjie Tang, and Kartik Talamadupula. Concise reasoning via reinforcement learning. arXiv preprint arXiv:2504.05185, 2025.

Daman Arora and Andrea Zanette. Training language models to reason efficiently. arXiv preprint arXiv:2502.04463, 2025.

Junlin Wang, Shang Zhu, Jon Saad-Falcon, Ben Athiwaratkun, Qingyang Wu, Jue Wang, Shuaiwen Leon Song, Ce Zhang, Bhuwan Dhingra, and James Zou. Think deep, think fast: Investigating efficiency of verifier-free inference-time-scaling methods. arXiv preprint arXiv:2504.14047, 2025.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Mathematical Association of America. Aime 2024, 2024. URL https://artofproblemsolving.com/wiki/index.php/2024_AIME_I.

Mathematical Association of America. Aime 2025, 2025. URL https://artofproblemsolving.com/wiki/index.php/2025_AIME_I.

Mislav Balunović, Jasper Dekoninck, Ivo Petrov, Nikola Jovanović, and Martin Vechev. Matharena: Evaluating llms on uncontaminated math competitions, February 2025. URL https://matharena.ai/.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.

Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy S Liang. Spoc: Search-based pseudocode to code. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/file/7298332f04ac004a0ca44cc69ecf6f6b-Paper.pdf.

Mark Chen et al. Evaluating large language models trained on code, 2021. URL https://arxiv.org/abs/2107.03374.

Michael Hassid, Tal Remez, Jonas Gehring, Roy Schwartz, and Yossi Adi. The larger the better? improved llm code-generation via budget reallocation. arXiv preprint arXiv:2404.00725, 2024. URL http://arxiv.org/abs/2404.00725.

A Per benchmark results

We present the per-benchmark results for each of the criteria presented in Section 4.2. The sample-size (k) results are presented in Figures 6 to 8. The thinking compute comparison results are presented in Figures 9 to 11. The time-to-answer results per benchmark are presented in Figures 12 to 14.

Figure 6: AIME 2024 - sample size (k) comparison. [Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; curves: pass@k (Oracle), majority@k, short-1@k (Ours), short-3@k (Ours). Plot data omitted.]
Figure 7: AIME 2025 - sample size (k) comparison. [Panels and curves as in Figure 6. Plot data omitted.]

Figure 8: HMMT Feb 2025 - sample size (k) comparison. [Panels and curves as in Figure 6. Plot data omitted.]

Figure 9: AIME 2024 - thinking compute comparison. [Panels and curves as in Figure 6; x-axis: thinking tokens in thousands. Plot data omitted.]

Figure 10: AIME 2025 - thinking compute comparison. [Panels and curves as in Figure 6; x-axis: thinking tokens in thousands. Plot data omitted.]
Figure 11: HMMT Feb 2025 - thinking compute comparison. [Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; curves: pass@k (Oracle), majority@k, short-1@k (Ours), short-3@k (Ours); x-axis: thinking tokens in thousands. Plot data omitted.]

Figure 12: AIME 2024 - time-to-answer comparison. [Panels: (a) LN-Super-49B, (b) R1-32B, (c) QwQ-32B; curves: majority@k, short-1@k (Ours), short-3@k (Ours), with markers at k = 1, 3, 5, 9; x-axis: longest thinking in thousands of tokens. Plot data omitted.]

Figure 13: AIME 2025 - time-to-answer comparison. [Panels, curves and markers as in Figure 12. Plot data omitted.]

Figure 14: HMMT Feb 2025 - time-to-answer comparison. [Panels, curves and markers as in Figure 12. Plot data omitted.]

B Thinking token histograms of S1 variants

Figure 15 shows the thinking token count histograms for the three variants of the S1 dataset (short/long/random) presented in Section 5.
Figure 15: Thinking token count histograms for the S1-short, S1-random and S1-long datasets. [Panels: (a) S1-short, (b) S1-random, (c) S1-long; x-axis: number of thinking tokens in thousands. Plot data omitted.]
arXiv:2505.17816v1 [cs.CL] 23 May 2025

Low-Resource NMT: A Case Study on the Written and Spoken Languages in Hong Kong

Hei Yi Mak, The Chinese University of Hong Kong, Hong Kong
Tan Lee, The Chinese University of Hong Kong, Hong Kong

ABSTRACT

The majority of inhabitants in Hong Kong are able to read and write in standard Chinese but use Cantonese as the primary spoken language in daily life. Spoken Cantonese can be transcribed into Chinese characters, which constitute the so-called written Cantonese. Written Cantonese exhibits significant lexical and grammatical differences from standard written Chinese. The rise of written Cantonese is increasingly evident in the cyber world. The growing interaction between Mandarin speakers and Cantonese speakers is leading to a clear demand for automatic translation between Chinese and Cantonese. This paper describes a transformer-based neural machine translation (NMT) system for written-Chinese-to-written-Cantonese translation. Given that parallel text data of Chinese and Cantonese are extremely scarce, a major focus of this study is on the effort of preparing a good amount of training data for NMT. In addition to collecting 28K parallel sentences from previous linguistic studies and scattered internet resources, we devise an effective approach to obtaining 72K parallel sentences by automatically extracting pairs of semantically similar sentences from parallel articles on Chinese Wikipedia and Cantonese Wikipedia. We show that leveraging highly similar sentence pairs mined from Wikipedia improves translation performance on all test sets. Our system outperforms Baidu Fanyi's Chinese-to-Cantonese translation on 6 out of 8 test sets in BLEU scores. Translation examples reveal that our system is able to capture important linguistic transformations between standard Chinese and spoken Cantonese.

CCS CONCEPTS

• Computing methodologies → Machine translation.

KEYWORDS

Chinese, Cantonese, Low-resource, Neural Machine Translation, Dialect Translation

ACM Reference Format:
Hei Yi Mak and Tan Lee. 2021.
Low-Resource NMT: A Case Study on the Written and Spoken Languages in Hong Kong. In 2021 5th International Conference on Natural Language Processing and Information Retrieval (NLPIR 2021), December 17–20, 2021, Sanya, China. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3508230.3508242

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
NLPIR 2021, December 17–20, 2021, Sanya, China
© 2021 Association for Computing Machinery.
ACM ISBN 978-1-4503-8735-4/21/12...$15.00
https://doi.org/10.1145/3508230.3508242

1 INTRODUCTION

Machine translation refers to the practice of translating text or speech from one language to another language using computer software. Traditionally, machine translation systems were designed with rule-based or statistical approaches. Recent research interests have been largely drawn towards the use of artificial neural networks, leading to a branch of MT research known as neural machine translation (NMT). State-of-the-art NMT models can be divided into three categories: RNN-based models [12], CNN-based models [5], and transformers [13]. These models generally follow the encoder-decoder architecture with attention mechanism [3]. The encoder-decoder architecture allows end-to-end training with source and target sentences. The attention mechanism enables the decoder to learn to focus on the specific parts of the input sentence.
This paper presents an NMT system for written-Chinese-to-written-Cantonese translation. Written Chinese corresponds to the written form of the standard spoken Chinese, Mandarin. Cantonese, also referred to as the Yue dialect, is one of the major Chinese dialects. It is spoken by tens of millions of people in Hong Kong, Macau, Guangdong and Guangxi provinces, as well as many overseas Chinese communities in Southeast Asia and North America. Despite being used primarily in spoken form, Cantonese can be transcribed into Chinese characters. Such transcription is referred to as written Cantonese. Spoken Cantonese and Mandarin are considered to be mutually unintelligible [11]. Their written forms, i.e., written Cantonese and written Chinese, also exhibit significant lexical and grammatical differences. In Hong Kong, the majority of the population are native Cantonese speakers and communicate in Cantonese in both informal and formal speaking contexts (e.g., daily communication, council meetings, business meetings, news reports, etc.). In typical written contexts, however, Chinese is the preferred language. Nevertheless, writing in Cantonese has become very popular among internet and mobile users. With the prevalence of instant messaging and social media, as well as millions of native speakers around the globe, written Cantonese has found its importance and usage in online social communication. For simplicity of terminology, written Chinese is referred to as "Chinese" and written Cantonese is referred to as "Cantonese" in this paper.

Chinese-to-Cantonese MT systems have many potential applications. They allow automatic translation of Chinese documents into Cantonese, which could be readily combined with Cantonese text-to-speech systems to realize applications like storytelling, virtual news anchor, etc. By training in the reverse direction with similar parallel data, a Cantonese-to-Chinese MT system can also be developed. Coupled with Cantonese speech-to-text, the MT system can support the generation of Chinese text captions for Cantonese-speaking recordings. Such systems can be very helpful to Mandarin speakers in consuming Cantonese audiovisual materials, e.g., lectures and public speeches.

NMT models are typically trained on large datasets that contain millions of parallel sentences of the intended language pair. When such parallel data resources are limited, researchers are required to work in the so-called low-resource setting. Parallel data of Chinese and Cantonese are extremely low-resourced. This is mainly because spoken Cantonese is rarely transformed into digital text for archive. Even when a Cantonese corpus is available, substantial efforts are needed to translate the data into Chinese. The scarcity of parallel resources is a common issue in dialect translation [1, 7].
One of the contributions of this paper is to provide a detailed account of how training data are collected in the case of Chinese dialect translation. In particular, we constructed a set of 72K parallel sentences by mining pairs of semantically similar sentences from parallel articles on Chinese Wikipedia and Cantonese Wikipedia using LASER [2]. We hope to provide useful insights to the MT community on acquiring parallel resources in Chinese and Cantonese and possibly other language pairs.

This paper is organized as follows. In the next section, we review the related work on Chinese and Cantonese MT and a similar low-resource task on German dialects. In Section 3, we describe the main linguistic considerations in Chinese-to-Cantonese translation. In Section 4, the NMT model used for Chinese-to-Cantonese MT is described. In Section 5, we report the effort of searching and collecting data resources for MT. In Section 6, the method of mining parallel sentences from Wikipedia is presented. In Section 7, we describe the training process. In Section 8, we evaluate the developed MT system by comparing BLEU scores against a baseline system and Baidu Fanyi1, and analyze its performance on linguistic transformations with examples. Conclusions are given in Section 9.

2 RELATED WORK

Xu and Fung [16] developed an MT system that could generate Chinese transcription from Cantonese speech. The system leveraged the language model statistics of a resource-rich language using a syntactic reordering model to improve speech recognition of the low-resource language. Wong and Lee [15] investigated a rule-based approach to translating Chinese sentences to Cantonese sentences at different register levels. This system incorporates annotated lexical mappings and syntactic transformations to perform register-sensitive translation.

Honnet et al. [7] designed an MT system from Swiss German to High German with fewer than 60K parallel sentences.
Swiss German represents a collection of spoken dialects in Switzerland that are seldom written and generally unintelligible to German speakers outside Switzerland. However, Swiss German speakers write in High German, the standard German language. The relation of Swiss German and High German is akin to that of Cantonese and Chinese. In the work of Honnet et al., the parallel resources were collected from books, Wikipedia, transcriptions, the Bible and dictionaries.

1https://fanyi.baidu.com/#cht/yue

Table 1: Examples of word reordering in Chinese and Cantonese.

Example 1
  English:    Give me a cup of water.
  Chinese:    給我一杯水 ("give" "me" "a" "cup" "water")
  Cantonese:  畀杯水我 ("give" "cup" "water" "me")
Example 2
  English:    I am taller than you.
  Chinese:    我比你高 ("I" "compare" "to" "you" "tall")
  Cantonese:  我高過你 ("I" "tall" "than" "you")
Example 3
  English:    Eat more.
  Chinese:    多吃點 ("more" "eat" "some")
  Cantonese:  食多啲 ("eat" "more" "some")

3 LINGUISTIC CONSIDERATIONS IN CHINESE-TO-CANTONESE TRANSLATION

3.1 Lexical Changes

One important linguistic consideration in developing a Chinese-to-Cantonese translation system is its capability of handling lexical mappings between written Chinese and spoken Cantonese. Lexical difference is seen in almost all types of words in the two languages. Common examples include mapping particles "的" to "嘅" (of) and "地" to "咁" (-ly), adverbs "了" to "咗" (already) and "不" to "唔" (not), pronouns "他" to "佢" (he) and "他們" to "佢地" (they), verbs "是" to "係" (is) and "喜歡" to "鍾意" (like), adjectives "美麗" to "靚" (beautiful) and "冷" to "凍" (cold), nouns "小孩子" to "細路仔" (child) and "椅子" to "凳" (chair). Since lexical differences take place in many frequently used words, correct mapping between the lexical items in Chinese and Cantonese is critical to high-quality translation between the two languages.

3.2 Word Reordering

Word order is another important aspect in which Chinese and Cantonese exhibit notable differences. Table 1 gives a few examples of parallel Chinese and Cantonese sentences that involve word reordering.

The translation in Example 1 involves an expression "give (something) to (somebody)". In Chinese, the word order is: transitive verb "給" → indirect object "我" → direct object "一杯水". In Cantonese, the indirect object is placed after the direct object, giving the word order: transitive verb "畀" → direct object "杯水" → indirect object "我". Word reordering is also needed if the source sentence describes a comparative relation. In Example 2, the Chinese sentence follows the order: subject "我" → particle "比" → object "你" → adjective "高", but in Cantonese the order is changed to: subject "我" → adjective "高" → particle "過" → object "你".
Certain adverbs are placed differently in relation to the neighboring verb in the two languages. In Example 3, the adverb "多" precedes the verb in Chinese but succeeds the verb in Cantonese.

3.3 Code-switching in Cantonese

Code-switching is a common practice among Cantonese speakers [6] in Hong Kong and overseas. It refers to the practice of mixing English words into Cantonese speech. Thus, the translation task between Chinese and Cantonese involves not only Chinese characters, but English words as well. For example, the Chinese sentence "我稍後才有時間打電話給他" can be translated into the Cantonese sentence "我一陣先有時間call佢", which contains the English word "call". To capture this linguistic phenomenon of the Cantonese-speaking community, the Chinese-to-Cantonese NMT model needs to be able to output sentences that contain both Chinese characters and English words.

4 THE NMT MODEL

4.1 Pretrained BERT Encoder Decoder Model

We use the Encoder-Decoder Model from the HuggingFace Transformers library2. The encoder is a BERT [4] model ("bert-base-chinese") pre-trained on the Chinese Wikipedia corpus. The decoder is the same BERT model with a language modeling head added on top to output predicted scores for each vocabulary token. During training, the source sentence and target sentence are fed into the model, which will learn to predict the next token based on the encoder output and previous tokens in the target sentence. During translation, the encoder output embedding and all previously generated tokens are fed into the model to predict the next token autoregressively until the end-of-sentence token is generated. The maximum length of the input sequence is 512 characters.

4.2 Tokenizer and Vocabulary

The tokenizer of the pretrained "bert-base-chinese" model converts a sentence into a sequence of input tokens.
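The autoregressive decoding loop described in Section 4.1 can be sketched with a toy stand-in for the real model. This is only an illustration: `next_token_scores` is a hypothetical callable rather than the Transformers API, and the BOS/EOS ids 101 and 102 simply mirror BERT's [CLS]/[SEP] ids. It sketches greedy decoding; real systems often use beam search or sampling instead.

```python
def greedy_decode(next_token_scores, src_ids, bos=101, eos=102, max_len=20):
    """Greedy autoregressive decoding: repeatedly score the next token
    given the source (standing in for the encoder output) and all
    previously generated tokens, and stop at the end-of-sentence token."""
    out = [bos]
    for _ in range(max_len):
        scores = next_token_scores(src_ids, out)  # {token_id: score}
        nxt = max(scores, key=scores.get)
        out.append(nxt)
        if nxt == eos:
            break
    return out

# Toy "model": copies the source token by token, then emits EOS.
# A stand-in for the BERT decoder's language modeling head.
def copy_model(src_ids, prefix):
    pos = len(prefix) - 1  # tokens emitted so far (excluding BOS)
    target = src_ids[pos] if pos < len(src_ids) else 102
    return {tid: (1.0 if tid == target else 0.0) for tid in set(src_ids) | {102}}

print(greedy_decode(copy_model, [7, 8, 9]))  # [101, 7, 8, 9, 102]
```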
The model can only read and output tokens that are stored in the tokenizer's vocabulary. The vocabulary, with a fixed size of 21128, consists of mostly Chinese characters, some English subwords, numbers, special characters (e.g., "[PAD]", "[UNK]"), unused tokens (e.g., "[unused1]"), emoji, punctuation and symbols. The model is particularly suitable for our translation task because the inclusion of both Chinese and English tokens makes code-switching possible.

2https://github.com/huggingface/transformers

5 TRAINING DATA

As discussed earlier, parallel text data in Chinese and Cantonese are extremely low-resourced. This is largely because the transcription of spoken Cantonese into digital text and the translation between Chinese and Cantonese both require great efforts. In this study, parallel data are collected from previous linguistic research and the internet. The largest dataset we were able to locate contains barely more than 10K parallel sentences. Combining all the datasets we found, there are around 28K sentence pairs. To overcome the low-resource barrier, we investigated a method of mining parallel sentences from the Chinese Wikipedia and Cantonese Wikipedia and constructed the Wikipedia Dataset which contains 72K parallel sentences. As a result, a total of 101434 parallel sentences are available for training our NMT system. A summary of the data resources is given in Table 2.

Lee's Parallel Corpus. Lee [8] created a parallel corpus of Cantonese transcriptions and Chinese subtitles of news reports, dramas and broadcast on a television channel in Hong Kong. The corpus contains 8225 pairs of sentences.

Cantonese-Mandarin Parallel Dependency Treebank. Wong et al. [14] constructed a parallel dependency treebank of Cantonese transcription and Chinese subtitles of short films and legislative council meetings, with 1004 pairs of sentences. The treebank is freely available on GitHub3 4.

Storytelling Dataset. The storytelling dataset is a set of parallel Chinese scripts and transcriptions of Cantonese storytelling recordings, which were intended for the development of a personalized storytelling system. The 28 stories in the recordings were narrated by a native Cantonese speaker. The Chinese scripts were significantly altered in the Cantonese narration for expressive storytelling. We manually aligned the sentences and filtered the problematic ones, resulting in 5060 pairs of sentences.

Kaifang Cidian. The website5 of Kaifang Cidian (translated as "Open Dictionary" in English) has a Cantonese-Chinese sentence dataset with 10823 pairs of parallel sentences. The dataset features Cantonese sentences that are used in conversations and informal settings.
Code-switching is found in some of the Cantonese sentences.

Tatoeba. The website6 of Tatoeba has a collection of sentences and translations into over 380 languages contributed by a voluntary community. We downloaded all the Cantonese sentences, Chinese sentences, and a table containing the links between the sentences. 2532 pairs of parallel sentences were extracted.

Hong Kong Storybooks. The website7 of Hong Kong Storybooks has a free collection of stories available in English, Chinese and Cantonese. There are 40 stories at different levels. We extracted the Chinese and Cantonese content and aligned the sentences, resulting in 651 pairs of parallel sentences.

ILC Cantonese Program. The Cantonese Online Program8 is a website hosted by the Independent Learning Center of the Chinese University of Hong Kong. The purpose of the program is to help native Mandarin speakers to learn Cantonese. The program consists of 15 chapters, each of which contains a short narrative essay in Cantonese and its translation in Chinese. We sentence-aligned the essays and collected 446 pairs of parallel sentences.

Wikipedia Dataset. A dataset of 72693 pairs of sentences constructed by automatically extracting semantically similar sentences from articles on Chinese Wikipedia and Cantonese Wikipedia of the same topic. Details of the mining process are described in Section 6.

3https://github.com/UniversalDependencies/UD_Cantonese-HK
4https://github.com/UniversalDependencies/UD_Chinese-HK
5http://kaifangcidian.com/
6https://tatoeba.org/eng/
7https://global-asp.github.io/storybooks-hongkong/
8https://www.ilc.cuhk.edu.hk/Chinese/pthprog1/tm_courses.html

Table 2: A summary of the parallel data in Chinese and Cantonese.

Dataset                                          Type of Materials              Train   Validation   Test    Total
Kaifang Cidian                                   Conversations                   8658      1082      1083    10823
Tatoeba                                          Conversations                   2025       253       254     2532
Lee's Parallel Corpus                            TV dramas, news, broadcast      6580       822       823     8225
Cantonese-Mandarin Parallel Dependency Treebank  Short films, council meetings    803       100       101     1004
Storytelling Dataset                             Stories                         4048       506       506     5060
Hong Kong Storybooks                             Stories                          520        65        66      651
ILC Cantonese Program                            Narrative essays                 356        45        45      446
Wikipedia Dataset                                Wikipedia                      69693      1500      1500    72693
Total                                                                           92683      4373      4378   101434
                                                                                 (validation: 2873 non-wiki + 1500 wiki)

6 MINING PARALLEL SENTENCES FROM THE WIKIPEDIA

6.1 Parallel Resources in the Wikipedia

Wikipedia is one of the largest multilingual text resources on the internet. On the same topic, there may exist parallel articles in different languages. These parallel articles are rarely mutual translations of each other. Nevertheless, due to the similarity of contents, these articles may contain sentences that are equivalent in meaning, which can be exploited as parallel
data for training NMT systems. Among versions of Wikipedia in different Chinese dialects, the standard Chinese (zh) Wikipedia and Cantonese (zh-yue) Wikipedia are the most prominent versions, with around 1.1M articles and around 100K articles respectively. We explore the possibility of extracting parallel sentences from these two versions of Wikipedia.

6.2 Mining Parallel Sentences from Wikipedia

One of the most notable methods for mining parallel sentences from the Wikipedia is the WikiMatrix [10]. It is an approach to mining parallel sentences in a massively multilingual setting using the multilingual sentence embedding toolkit LASER [2]. LASER uses a single language-agnostic encoder to compute multilingual embeddings for sentences in 93 languages. Simply speaking, LASER maps sentences from different languages to vectors in the same embedding space so that the semantic similarity between sentences in different languages can be measured. The extracted corpus, WikiMatrix bitexts, consists of parallel sentences for 1620 different language pairs mined from Wikipedia articles in 85 languages. Although LASER supports evaluation of similarity between Chinese and Cantonese sentences, parallel data of the Chinese-Cantonese language pair were not available in the WikiMatrix bitexts9.

We see the opportunity to construct a parallel training dataset by applying LASER to automatically extract parallel sentences from the Chinese Wikipedia and Cantonese Wikipedia.

9https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix

6.3 Corpus Preparation

We downloaded a complete copy of Chinese Wikipedia articles, a complete copy of Cantonese Wikipedia articles, and the inter-language links from the Wikimedia Database backup dumps10 11. The corpora were cleaned with the WikiExtractor12, removing unwanted metadata and html tags. With the inter-language links, we are able to match an article in the Cantonese Wikipedia corpus with the corresponding article in the Chinese Wikipedia corpus.
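The article-matching step can be sketched as a dictionary join over the inter-language link table. This is illustrative only: the data structures below are our assumptions, not the actual Wikimedia dump schema.

```python
def match_articles(zh_articles, yue_articles, interlang_links):
    """Pair Cantonese articles with their Chinese counterparts.

    zh_articles / yue_articles: {title: article_text} for each edition.
    interlang_links: {yue_title: zh_title}, a stand-in for the dump's
    inter-language link table. Links whose target article is missing
    from either corpus are skipped.
    """
    pairs = []
    for yue_title, zh_title in interlang_links.items():
        if yue_title in yue_articles and zh_title in zh_articles:
            pairs.append((zh_articles[zh_title], yue_articles[yue_title]))
    return pairs

# Toy corpora: two linked topics, plus one link to a missing article.
zh = {"香港": "香港是...", "水": "水是..."}
yue = {"香港": "香港係...", "水": "水係..."}
links = {"香港": "香港", "水": "水", "貓": "貓"}
print(len(match_articles(zh, yue, links)))  # 2 matched article pairs
```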
After the mapping, around 70K pairs of articles were obtained.

6.4 Parallel Sentences Extraction
For each pair of Chinese and Cantonese articles, we split the text into two lists of sentences and compute their sentence embeddings with LASER. The embedding of each sentence in the Chinese article is compared with that of each sentence in the Cantonese article. The comparison is done by calculating the cosine similarity of the two sentence embeddings (a score between 0 and 1), which is referred to as the similarity score. We select all pairs of sentences with similarity scores higher than a predetermined threshold (to be discussed in Section 6.5). As both Chinese and Cantonese are written as sequences of Chinese characters, some selected sentence pairs are identical. These pairs are removed to avoid constructing training data that have identical source and target sentences. After removing pairs of identical sentences, if there are multiple sentence pairs with the same Chinese sentence, we retain only the pair with the highest similarity score.

6.5 Similarity Score Threshold
We consider different thresholds of similarity scores and perform human inspection on samples of sentence pairs with similarity scores higher than 0.90, 0.93 and 0.95 respectively. Table 3 shows the number of sentence pairs we obtained for different thresholds.

Table 3: Number of sentence pairs obtained using different similarity score thresholds.
Similarity score | Number of sentence pairs
≥ 0.90 | 122264
≥ 0.93 | 73252
≥ 0.95 | 45475

Sentence pairs of similarity score higher than 0.90 contain problematic pairs that involve numbers. For instance, expressions of different years ("1840年", "1842年") and expressions of different dates ("2017年2月27日", "2017年2月3號") were not filtered out. Common to these sentence pairs is that the pair share a similar structure but are inconsistent in meaning because certain digits have different numerical values. It would be undesirable to include such pairs in the training corpus. Samples of similarity score higher than 0.93 and those higher than 0.95 are high-quality parallel sentences without the aforementioned problem. Since setting a lower similarity score yields more sentence pairs, 0.93 is chosen to be the similarity score threshold for parallel sentence extraction.

10 https://dumps.wikimedia.org/zhwiki/
11 https://dumps.wikimedia.org/zh_yuewiki/
12 https://github.com/attardi/wikiextractor

Low-Resource NMT: A Case Study on the Written and Spoken Languages in Hong Kong. NLPIR 2021, December 17–20, 2021, Sanya, China.

7 TRAINING
With a similarity threshold of 0.93, 72693 parallel sentences were mined from the Chinese Wikipedia and Cantonese Wikipedia. The Wikipedia dataset is split into a training set of 69693 sentence pairs, a validation set of 1500 pairs and a test set of 1500 pairs.
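The extraction and filtering procedure of Sections 6.4 and 6.5 can be sketched as follows. The embedding function here is a toy stand-in for LASER (a character-count vector), and the 0.80 threshold in the example is only to make the toy data pass; the paper uses 0.93 on real LASER similarity scores.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def mine_pairs(zh_sents, yue_sents, embed, threshold=0.93):
    """Keep cross-lingual pairs whose similarity score exceeds the
    threshold, drop identical pairs, and retain only the highest-scoring
    pair per Chinese sentence (Sections 6.4-6.5).
    `embed` stands in for a multilingual encoder such as LASER."""
    yue_emb = [(s, embed(s)) for s in yue_sents]
    best = {}  # Chinese sentence -> (score, Cantonese sentence)
    for zs in zh_sents:
        ze = embed(zs)
        for ys, ye in yue_emb:
            score = cosine(ze, ye)
            if zs == ys or score <= threshold:
                continue  # identical pair, or below threshold
            if zs not in best or score > best[zs][0]:
                best[zs] = (score, ys)
    return [(zs, ys, score) for zs, (score, ys) in best.items()]

# Toy embedding: character counts over a small vocabulary (a crude
# stand-in; LASER would give dense language-agnostic vectors).
vocab = list("我知道收料睇路平過大陸")
def toy_embed(sent):
    return [sent.count(ch) + 1e-6 for ch in vocab]

pairs = mine_pairs(["我知道", "平過大陸"], ["我知道", "我知", "過大陸"],
                   toy_embed, threshold=0.80)
```

Note that the identical pair ("我知道", "我知道") scores 1.0 but is discarded, exactly as the paper removes pairs with identical source and target.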
Each of the 7 other non-wiki datasets is split into training, validation and test sets in the proportion 8:1:1. The data partition arrangement is detailed in Table 2. A non-wiki validation set is formed by merging the 7 validation sets, with the wiki validation set being excluded. The 8 test sets are used separately for evaluation.

During training, validation is carried out every 10 epochs using the BLEU score [9] on the non-wiki and the wiki validation sets. Only the BLEU score on the non-wiki validation set is used to determine when to stop training. The BLEU score on the wiki validation set is measured only for reference because it is significantly higher than the BLEU score on other datasets, due to the high similarity of the source and target sentences extracted with a high similarity threshold. The system is trained until the BLEU score on the non-wiki validation set converges, at around one thousand epochs.

8 PERFORMANCE EVALUATION
8.1 BLEU Score
After training, the BLEU scores of our system's translations on the 8 test sets are evaluated. A baseline translation system and Baidu Fanyi's Chinese-to-Cantonese translation are used for performance comparison. The baseline translation is essentially a copy-and-paste model whose output is simply the Chinese source sentence. We include the BLEU score of the baseline translation as a reference because the Chinese source sentence and the Cantonese target sentence are both written in Chinese characters and share some common lexical items. The source sentences in the test sets are also translated with Baidu Fanyi and the BLEU scores are computed. Both our system and Baidu Fanyi outperform the baseline translation significantly. Our system demonstrates competitive performance to Baidu Fanyi, outscoring it in 6 out of 8 test sets. The results are shown in Table 4. In addition, we trained another system using only the non-wiki data and evaluated its performance on the same test data.
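The dataset partitioning described above (8:1:1 per non-wiki dataset, with the non-wiki validation sets merged) can be sketched as below. The floor-based rounding is an assumption: it reproduces most of the counts in Table 2, though a few splits (e.g., the ILC Cantonese Program) differ by a pair from the paper's exact figures.

```python
def split_811(pairs):
    """Split one dataset into train/validation/test in proportion 8:1:1."""
    n = len(pairs)
    n_train, n_val = (n * 8) // 10, n // 10
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])  # test takes the remainder

# Dummy pair lists sized like two of the non-wiki datasets (Table 2).
datasets = {"Tatoeba": [("zh", "yue")] * 2532,
            "Kaifang Cidian": [("zh", "yue")] * 10823}
splits = {name: split_811(p) for name, p in datasets.items()}
# Merge all non-wiki validation sets; the wiki validation set is kept
# separate and measured only for reference during training.
nonwiki_val = [pair for _, val, _ in splits.values() for pair in val]

print([len(s) for s in splits["Tatoeba"]])  # [2025, 253, 254]
```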
Compared to this non-wiki system, our system, which is trained on both wiki and non-wiki data, shows a significant improvement in BLEU score, from 43.99% to 81.70%, on the wiki test data. The discrepancy is expected, as wiki data are included to train our system. Nevertheless, such inclusion of training data mined from wiki also increases BLEU scores on all other test datasets. The most noticeable improvements are on Lee's Parallel Corpus, the ILC Cantonese Program, and the Cantonese-Mandarin Parallel Dependency Treebank.

8.2 Performance Analysis with Examples
In this section, a few typical examples of translation results are analyzed. For each example, our system's translation is compared to that from Baidu Fanyi in terms of the use of desired linguistic transformations, e.g., lexical mapping, word reordering and code-switching, as discussed in Section 3. The examples are given in Table 5.

Example 1. Baidu Fanyi fails to capture the word reordering of the phrase "多收點" ("more" "receive" "some") into "收多o的" ("receive" "more" "some"). While it is acceptable to make no lexical change to the word "消息" in Baidu Fanyi's translation, our system applies a more colloquial lexical transformation ("消息" to "料"), which is used in the target sentence.

Example 2. Both translations are able to make the correct lexical change to map the word "便宜" to "平". The word "服裝" is acceptable in Cantonese, but "衫" is more commonly used. It was incorporated in Baidu Fanyi's translation, but our system did not capture it. The phrase "比大陸便宜" ("compare to" "mainland" "cheap") should be reordered as "平過大陸" ("cheap" "than" "mainland") in Cantonese. Our system performs this transformation correctly, but Baidu Fanyi's translation keeps the Chinese word order.

Example 3. In both translations, the verb "躺" is transformed to "攤" (lay) and "讓" is transformed to "等" (let). The particle "o左" (already) after "攤" (lay) in our system's translation deviates slightly from the meaning of the source sentence.
Our system imposes the correct word reordering of the phrase "躺到床上去", i.e., swapping the position of the preposition "上" (on) and the noun "床" (bed), whilst Baidu Fanyi's translation does not make this change. The change from the verb phrase "檢查一下" (take a look) to "check一check" in our system's translation is a code-switched term commonly used by native Cantonese speakers.

Example 4. The term "這陣子" (recently) is correctly mapped to "呢輪" by our system and to "呢排" by Baidu Fanyi. Both translations demonstrate the ability to incorporate code-switching lexical transformation by changing the term "加班" (work overtime) to
"開o.t." or "開OT". This is indeed a common practice of code-switching in Cantonese. In these examples, our system demonstrates the ability to perform appropriate lexical transformation and code-switching, and superior performance in word reordering.

NLPIR 2021, December 17–20, 2021, Sanya, China. Mak and Lee.

Table 4: BLEU scores on the 8 test sets.
Test set | Baseline | Non-wiki | Our system | Baidu Fanyi
Kaifang Cidian | 11.64 | 42.93 | 43.81 | 47.00
Tatoeba | 21.79 | 52.77 | 53.09 | 64.05
Lee's Parallel Corpus | 36.03 | 53.22 | 58.43 | 45.35
Cantonese-Mandarin Parallel Dependency Treebank | 17.13 | 28.15 | 32.14 | 28.40
Storytelling Dataset | 28.37 | 52.64 | 53.80 | 40.13
Hong Kong Storybooks | 19.08 | 50.74 | 51.99 | 38.81
ILC Cantonese Program | 35.23 | 57.47 | 63.38 | 53.65
Wikipedia Dataset | 63.18 | 43.99 | 81.70 | 63.11

9 CONCLUSION
Lacking parallel data resources is the major challenge in developing robust neural machine translation systems that involve a regional spoken language like Cantonese. In this research we have demonstrated the feasibility of exploiting monolingual data, which are relatively large in amount, by mining similar sentences from multilingual online data resources, i.e., Wikipedia. Trained on the mined parallel data in addition to those collected directly from published materials, our Chinese-to-Cantonese MT system shows BLEU score improvements on all test sets. Our system attains competitive performance to the Chinese-to-Cantonese translation of Baidu Fanyi. Examples of translation results reveal that crucial linguistic transformations like lexical mapping, word reordering and code-switching are captured by our system.

The proposed approach of data collection is applicable to other tasks of natural language processing that require a large amount of Chinese and Cantonese parallel text. The methods used in this study could provide an important reference for designing MT systems for other language pairs for which parallel resources are scarce but monolingual data are abundant.
REFERENCES
[1] Kaori Abe, Yuichiroh Matsubayashi, Naoaki Okazaki, and Kentaro Inui. 2018. Multi-dialect neural machine translation and dialectometry. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation.
[2] Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics 7 (2019), 597–610.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
[4] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[5] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning. PMLR, 1243–1252.
[6] John Gibbons. 1987. Code-mixing and code choice: A Hong Kong case study. Multilingual Matters.
[7] Pierre-Edouard Honnet, Andrei Popescu-Belis, Claudiu Musat, and Michael Baeriswyl. 2017. Machine translation of low-resource spoken dialects: Strategies for normalizing Swiss German. arXiv preprint arXiv:1710.11035 (2017).
[8] John S. Y. Lee. 2011. Toward a parallel corpus of spoken Cantonese and written Chinese. In Proceedings of the 5th International Joint Conference on Natural Language Processing. 1462–1466.
[9] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 311–318.
[10] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019. WikiMatrix: Mining 135M parallel sentences in 1620 language pairs from Wikipedia. arXiv preprint arXiv:1907.05791 (2019).
[11] Don Snow. 2004.
Cantonese as Written Language: The Growth of a Written Chinese Vernacular. Vol. 1. Hong Kong University Press.
[12] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. 3104–3112.
[13] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.
[14] Tak-sum Wong, Kim Gerdes, Herman Leung, and John S. Y. Lee.
2017. Quantitative comparative syntax on the Cantonese-Mandarin parallel dependency treebank. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017). 266–275.
[15] Tak-sum Wong and John S. Y. Lee. 2018. Register-Sensitive Translation: A Case Study of Mandarin and Cantonese (Non-Archival Extended Abstract). In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track). 89–96.
[16] Ping Xu and Pascale Fung. 2012. Cross-lingual language modeling with syntactic reordering for low-resource speech recognition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. 766–776.

Table 5: Examples of word reordering in Chinese and Cantonese.

Example 1 (Lee's Parallel Corpus)
Source sentence: 我知道，多收點消息看清楚前路 ("I" "know", "more" "receive" "some" "information" "see" "clear" "forward" "path")
Target sentence: 我知，收多o的料睇清條路 ("I" "know", "receive" "more" "some" "information" "see" "clear" "the" "path")
Our system: 我知，收多o的料睇清楚前路 ("I" "know", "receive" "more" "some" "information" "see" "clear" "forward" "path")
Baidu Fanyi: 我知，多收o的消息睇清楚前路 ("I" "know", "more" "receive" "some" "information" "see" "clear" "forward" "path")

Example 2 (ILC Cantonese Program)
Source sentence: 其他的電器服裝等有許多都比大陸便宜 ("other" "electrical appliances" "clothes" "etc." "have" "many" "all" "compare to" "mainland" "cheap")
Target sentence: 其他電器衫褲好多都平過大陸 ("other" "electrical appliances" "clothes" "many" "all" "cheap" "than" "mainland")
Our system: 其他電器服裝等有好多都平過大陸 ("other" "electrical appliances" "clothes" "etc." "have" "many" "all" "cheap" "than" "mainland")
Baidu Fanyi: 其他o既電器衫等有好多都比大陸平 ("other" "electrical appliances" "clothes" "etc." "have" "many" "all" "compare to" "mainland" "cheap")

Example 3 (Kaifang Cidian)
Source sentence: 躺到床上去，讓我檢查一下 ("lay" "to" "bed" "on" [particle], "let" "me" "check" "a bit")
Target sentence: 訓上張床，等我檢查下 ("lay" "on" [quantifier] "bed", "let" "me" "check" "a bit")
Our system: 攤o左上床，等我check一check ("lay" "already" "on" "bed", "let" "me" "check" "one" "check")
Baidu Fanyi: 攤到床上去，等我檢查一下 ("lay" "to" "bed" "on" [particle], "let" "me" "check" "a bit")

Example 4 (Kaifang Cidian)
Source sentence: 這陣子每天晚上都加班，幹得累成一灘泥 ("recently" "every day" "evening" "all" "work overtime", "working like" "exhausted" "as" "a" [quantifier] "mud")
Target sentence: 呢排晚晚加班，做到一隻屐噉 ("recently" "every night" "work overtime", "working like" "a" "clog" "so")
Our system: 呢輪晚晚都開o.t.，做到一頭泥 ("recently" "every night" "all" "work overtime", "working like" "a" [quantifier] "mud")
Baidu Fanyi: 呢排晚晚都開OT，搞到[支力]到一灘泥 ("recently" "every night" "all" "work overtime", "until" "exhausted" "as" "a" [quantifier] "mud")
PATIENTSIM: A Persona-Driven Simulator for Realistic Doctor-Patient Interactions

Daeun Kyung1, Hyunseung Chung1, Seongsu Bae1, Jiho Kim1, Jae Ho Sohn2, Taerim Kim3, Soo Kyung Kim4,∗, Edward Choi1,∗
1KAIST 2UCSF 3Samsung Medical Center 4Ewha Womans University
{kyungdaeun,edwardchoi}@kaist.ac.kr1, sookim@ewha.ac.kr4

Abstract
Doctor-patient consultations require multi-turn, context-aware communication tailored to diverse patient personas. Training or evaluating doctor LLMs in such settings requires realistic patient interaction systems. However, existing simulators often fail to reflect the full range of personas seen in clinical practice. To address this, we introduce PATIENTSIM, a patient simulator that generates realistic and diverse patient personas for clinical scenarios, grounded in medical expertise. PATIENTSIM operates using: 1) clinical profiles, including symptoms and medical history, derived from real-world data in the MIMIC-ED and MIMIC-IV datasets, and 2) personas defined by four axes: personality, language proficiency, medical history recall level, and cognitive confusion level, resulting in 37 unique combinations. We evaluated eight LLMs for factual accuracy and persona consistency. The top-performing open-source model, Llama 3.3, was validated by four clinicians to confirm the robustness of our framework. As an open-source, customizable platform, PATIENTSIM provides a reproducible and scalable solution that can be customized for specific training needs. Offering a privacy-compliant environment, it serves as a robust testbed for evaluating medical dialogue systems across diverse patient presentations and shows promise as an educational tool for healthcare.

1 Introduction
Large language models (LLMs) have shown impressive performance on medical question-answering benchmarks such as MedQA [21], MedMCQA [44], and PubMedQA [22], even surpassing human experts.
However, these benchmarks use single-turn settings where patient data is readily provided, and models simply analyze these data to select the most likely diagnosis or treatment. In contrast, real-world clinicians engage in multi-turn, context-aware conversations to gather patient information actively. As a result, these models may not guarantee effectiveness in practical clinical settings.

To evaluate LLM-powered virtual doctors (i.e., doctor LLMs) in multi-turn settings, realistic patient interaction systems are needed. Traditionally, standardized patients (SPs) [4], trained actors simulating symptoms and histories, have been used to train and assess medical students' communication and clinical skills. In this context, SPs could serve as a benchmark for evaluating doctor LLMs by providing dynamic, interactive patient encounters. However, SPs are limited by high costs, inconsistent availability, and scaling challenges due to the need for human actors [13]. In contrast, LLM-based patient simulators provide a scalable, accessible, and cost-effective alternative [8]. They reduce the need for repetitive human acting, eliminate geographic and time constraints, and lower costs compared to SPs. These advantages highlight the potential of AI as a powerful tool for training and evaluating medical students [19, 34, 60], as well as doctor LLMs [14, 26, 32, 33, 35, 51, 57].

∗Co-corresponding author
Preprint. arXiv:2505.17818v1 [cs.AI] 23 May 2025

[Figure 1 omitted: sample doctor-patient dialogues, the four persona axes, and an example MIMIC-derived clinical profile.] Figure 1: Overall framework of PATIENTSIM. Based on 1) clinical profiles, including symptoms and medical history, derived from MIMIC-ED and MIMIC-IV datasets, and 2) personas defined by four axes: personality, language proficiency, medical history recall level, and cognitive confusion level, resulting in 37 unique combinations, PATIENTSIM provides realistic doctor-patient conversation data.

Recent work highlights the potential of LLM-based patient simulators, but a significant gap remains between these systems and real clinical settings. A number of studies [33, 35, 38, 51] explored doctors' interactive information-seeking abilities by providing LLMs with patient data and having them role-play patients. However, these studies focused on evaluating the performance of doctor LLMs, even though the validity of these evaluations depends on how closely patient simulators emulate actual patient behavior. Recognizing this importance, some studies [12, 14, 36] have begun evaluating patient simulators, focusing on how accurately they convey symptomatic information. However, doctor-patient consultations are more than just patients accurately reciting their symptoms. Effective consultations must take into account patient behaviors dictated by multiple axes, such as their emotional states and language skills, which significantly influence health outcomes.

To this end, we propose PATIENTSIM, a system that simulates diverse patient personas encountered in clinical settings (Figure 1). Our simulator acts based on: 1) clinical profiles, including symptoms and medical history, and 2) personas defined by four axes: personality, language proficiency, medical history recall level, and cognitive confusion level.
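The four persona axes combine into the 37 configurations mentioned above, and the count can be enumerated directly. The axis values below follow the definitions given later in Sec. 5.1; the dict layout itself is only an illustrative sketch.

```python
from itertools import product

# Axis values as defined in Sec. 5.1 of the paper.
personalities = ["neutral", "impatient", "overanxious",
                 "distrustful", "overly positive", "verbose"]
proficiency = ["basic", "intermediate", "advanced"]  # CEFR A / B / C
recall = ["high", "low"]

# 6 x 3 x 2 = 36 regular personas ...
personas = [{"personality": p, "proficiency": l, "recall": r,
             "confusion": "normal"}
            for p, l, r in product(personalities, proficiency, recall)]
# ... plus one highly confused persona, pinned to neutral personality,
# intermediate proficiency, and high recall to avoid axis overlap.
personas.append({"personality": "neutral", "proficiency": "intermediate",
                 "recall": "high", "confusion": "highly confused"})
print(len(personas))  # 37
```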
Patient profiles are constructed based on real-world medical records from the MIMIC-ED [23] and MIMIC-IV [25] datasets, totaling
170 profiles (Sec. 4). For personas, we defined 37 distinct combinations across four axes, designed to reflect key factors impacting doctor-patient consultation quality, based on literature reviews and guided by medical experts (Sec. 5.1). We evaluate eight LLMs as the backbone of our simulator and select Llama 3.3 as the final model, which maintains a persistent persona while ensuring factual accuracy (Sec. 7). The resulting simulator was assessed by four clinical experts and received an average quality score of 3.89 out of 4 across six criteria (Sec. 7). Our simulator offers the following contributions:
•PATIENTSIM introduces a novel framework for simulating realistic doctor-patient interactions. It leverages real-world clinical data from MIMIC-IV and MIMIC-ED, modeling diverse patient personas across four axes: personality, language proficiency, medical history recall level, and cognitive confusion level.
•We conduct a comprehensive evaluation across eight LLMs, assessing factual accuracy and persona reflection. To confirm the robustness of PATIENTSIM's simulations, the top-performing open-source model, Llama 3.3, is further validated by four clinicians.
•Built on an open-source model, PATIENTSIM offers an accessible, reproducible tool for providing doctor-patient consultation data while prioritizing patient privacy. This scalable, privacy-compliant solution enables researchers and practitioners to validate their model's performance and adapt it for clinical uses such as educational tools.

2 Related work
LLM-based agent simulation in clinical settings. LLM-based agent simulations in clinical settings vary by scope and agents. Previous studies [1, 3, 32, 61] simulate hospital workflows with agents such as patients, nurses, and physicians, prioritizing final task accuracy (e.g., diagnostic or department recommendation accuracy) over agent interactions.
Additionally, previous works such as MedAgents [55] and MDAgents [29] focus on collaborative physician decision-making but are limited to single-turn QA settings. Recent studies emphasize doctor-patient interactions, evaluating physician LLMs in patient-centered communication [1, 36, 38, 51] or exploring their potential as educational tools [19, 34, 60]. However, these often overlook diverse patient characteristics, leading to insufficient realism in simulated interactions. As patient simulators are foundational to hospital simulations, providing primary clinical information and driving interaction dynamics, ensuring their realism is a key challenge. Our research addresses this by developing an LLM-based patient simulator that delivers clinically coherent responses and reflects diverse patient characteristics.

LLM patient simulation. LLM-based patient simulation is divided into applications for general hospital consultations and psychological consultations, each with distinct objectives. In general hospital settings, patient simulations aim to accurately present medical history and symptoms through multi-turn dialogue [12, 14, 36, 38]. While most efforts primarily focus on implementing a patient simulator that can respond to questions with factually correct answers, some studies [12, 38] tried to add a bit of realism to the patient simulator by describing its personas with keywords such as the Big Five traits or occupation groups. No previous studies, however, aimed to implement and, at the same time, evaluate a patient simulator that can emulate diverse and clinically relevant personas. Psychological counseling simulations, on the other hand, try to model complex internal states, such as thoughts and emotions, emphasizing subjective responses like mood shifts or treatment resistance [39, 47, 58, 59]. These studies therefore prioritize deeper persona development to capture emotional
and relational nuances, making them unsuitable for simulating patients in general diagnostic consultation settings such as hospital emergency departments. Unlike previous works, our study proposes a realistic patient simulator that combines the emotional realism emphasized in psychological counseling with the clinical accuracy required for general diagnostic consultations.

3 Problem definition
We target a first-time, single-session emergency department (ED) visit, considering the technical and clinical constraints below, to ensure a feasible and coherent design.

Limited access to comprehensive clinical data. In real-world clinical practice, physicians integrate data from multiple dimensions, such as patient-reported symptoms, physical exams, and test results (e.g., laboratory or imaging studies), to make diagnoses. However, when simulating a patient, it is infeasible to predefine or dynamically generate clinically accurate data across all dimensions. As a result, patient simulators often receive questions about data that are inaccessible. Prior approaches have addressed this by instructing LLMs to respond with vague statements like "I don't know" [12, 33, 36] or to assume normal test results when data is not defined in the patient's profile [14, 51]. These strategies restrict physicians to inquiring about predefined data, limiting their ability to explore diverse reasoning paths. Moreover, assuming normal test results can mislead physicians. To address this, we focus our simulation on the history-taking process, a systematic approach to gathering a patient's personal and medical information prior to physical exams or lab tests [27]. Research shows that approximately 80% of diagnoses rely on history taking alone, highlighting its critical importance [18, 45]. Unlike objective data (e.g., test results), subjective data (e.g., patients' verbal reports) inherently involves uncertainty [41, 52], allowing it to capture the variability in patients' descriptions.
This variability, driven by personal factors, supports our goal of creating realistic virtual patients with diverse personas, ensuring flexible and naturalistic interactions.

Inability to simulate longitudinal patient state changes. Realistic multi-session simulations require modeling treatment effects and disease progression over time, which is currently limited to very specific cases under heavy assumptions and requires long-term research. Instead, we focus on single-session interactions, avoiding the need to model long-term outcomes or readmissions.

Problem scope. In the initial ED consultation setting, physicians often rely on verbal patient information, such as symptoms and medical history, for differential diagnosis under time pressure, before test results become available. Thus, we focus on differential diagnosis based on this initial consultation, which typically does not require test data. This approach ensures clinical relevance by reflecting real-world diagnostic reasoning while sidestepping current technical limitations.

4 Patient profile construction
Structured patient profile. We construct detailed and structured profiles based on real clinical data from MIMIC-IV [23], MIMIC-IV-ED [25], and MIMIC-IV-Note [24], to ensure clinical relevance while minimizing ambiguity in the simulations. We extracted accurate patient data from structured tables and used clinical notes to capture detailed information, such as lifestyle and present symptoms, not included in the tables. This hybrid approach combined the accuracy of structured data with the depth of narrative notes. Details on data sources and processing are provided in Appendix A.1 and A.2. As a result, each patient profile includes 24
items, covering demographics, social and medical history, and ED visit details (see Appendix A.3). Clinical experts reviewed each item for relevance.

Target disease selection. We select the five most prevalent diseases from the MIMIC-IV-ED dataset: myocardial infarction, pneumonia, urinary tract infection, intestinal obstruction, and cerebral infarction (stroke). These conditions were chosen for their clinical significance, prevalence in the ED, and distinct symptomatology, enabling meaningful differential diagnosis (DDx) tasks. The selection process was guided by two medical experts, one of whom is an ER doctor with 13 years of experience.

5 PATIENTSIM
5.1 Persona definition
We defined four key axes for persona simulation that impact consultation quality in clinical practice, based on literature reviews and guidance from medical experts.

Personality. Personality is a well-established factor influencing consultation quality [7, 9, 37, 49]. The Big Five framework [40], one of the most widely recognized models of personality, has been used in previous patient simulation studies [12], but its traits are broad and tend to influence patient-physician interactions only indirectly. Recent psychological therapy research emphasizes observable conversational styles that directly manifest in patient interactions. Drawing on this, we adapt these styles into doctor-patient consultation-specific personalities that are directly observable and actionable for simulation. Based on a literature review [2, 6, 31, 53] and guidance from medical experts, we define six personalities relevant to medical consultations in the ED: impatient, overanxious, distrustful, overly positive, verbose, and neutral (straightforward communication) as the baseline.

Language proficiency. A patient's language proficiency is a critical determinant of doctor-patient communication quality [46, 54], yet it has been underexplored in simulation contexts.
By specifying language proficiency levels, we simulate scenarios in which physicians must adapt to patients with varying proficiency by using appropriate language to ensure understanding. We use the Common European Framework of Reference for Languages (CEFR) [42], which defines six proficiency levels (A1, A2, B1, B2, C1, C2). To facilitate the human evaluation by physicians, we consolidated these into three levels: A (basic), B (intermediate), and C (advanced).

Medical history recall level. Patients may not always accurately recall the details of their medical history [5, 30]. Assuming perfect recall, as in traditional settings, represents an idealized case. In low-recall scenarios, physicians must ask additional questions to build diagnostic confidence. We define two settings, high recall and low recall, enabling practice with diverse patient profiles.

Level of cognitive confusion. Patients visiting the ED often present with acute symptom exacerbation, leading to a highly confused and dazed state. These patients may initially struggle with coherent communication but stabilize through interaction. To simulate such cases, we define two mental status levels: highly confused and normal. To avoid overlap between confusion and other axes (e.g., impatient personality, low language proficiency, or low recall), highly confused patients are limited to neutral personality, intermediate language proficiency, and high recall. This results in 37 distinct personas: 36 from combinations of 6 personalities, 3 language proficiency levels, and 2 recall levels, plus 1 high-confusion persona.

5.2 Prompt design
PATIENTSIM. The PATIENTSIM prompt comprises profile information, four persona axes, and general behavioral guidelines. The prompt was iteratively refined through
a process of LLM evaluation, qualitative analysis by the authors, and two rounds of feedback from medical experts. In the first round, two medical experts, who are also co-authors, provided feedback after engaging in extensive conversations with our simulators. The second round incorporated input from four additional medical experts external to the author group, based on their review of 10 sample cases. The full prompt is provided in Appendix C.1.

Doctor LLM  Our research focuses on developing realistic patient simulators rather than doctor simulators. However, for automated evaluation, we require a doctor LLM capable of asking appropriate questions to elicit and assess patient responses. To achieve this, the doctor prompt was carefully designed, drawing on a medical textbook [56] and expert advice, to ensure it includes all essential, routine questions. The full prompt is provided in Appendix C.2.

6 Experiments

6.1 Task and evaluation

To systematically assess the quality of LLM responses in terms of prompt alignment, we present experiments designed to address the following research questions.

6.1.1 RQ1: Do LLMs naturally reflect diverse persona traits in their responses?

Realistic simulation of diverse and nuanced patient responses is crucial when PATIENTSIM is used to train or evaluate communication skills. We evaluate whether PATIENTSIM accurately reflects its assigned persona across all 37 possible persona combinations. We assess four persona categories (i.e., personality, language proficiency, medical history recall level, confusion level), as well as overall realism, to ensure that the model portrays the persona faithfully without exaggeration. The evaluation is twofold. For automatic evaluation, we generate dialogues between PATIENTSIM and the doctor LLM across various persona settings, and an LLM-based evaluator then assesses the generated conversations.
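The 37 persona combinations referenced above follow directly from the axis definitions in Sec. 5.1. A minimal sketch of the enumeration; the dictionary keys and value names are illustrative, not the identifiers used in the actual PATIENTSIM prompt:

```python
from itertools import product

# Axis values per Sec. 5.1 (names are illustrative).
personalities = ["impatient", "overanxious", "distrustful",
                 "overly_positive", "verbose", "neutral"]
language_levels = ["A", "B", "C"]   # consolidated CEFR groups
recall_levels = ["high", "low"]

# 36 personas: all axis combinations, with normal mental status.
personas = [
    {"personality": p, "language": l, "recall": r, "confusion": "normal"}
    for p, l, r in product(personalities, language_levels, recall_levels)
]

# 1 additional persona: high confusion is restricted to a neutral
# personality, intermediate language proficiency, and high recall.
personas.append({"personality": "neutral", "language": "B",
                 "recall": "high", "confusion": "highly_confused"})

print(len(personas))  # 37
```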
For human evaluation, four medical experts each engage in sufficient dialogue with the simulator across 37 persona samples (for a total of 108 dialogues), after which they evaluate the quality of the simulator. Both human and LLM evaluators score the following categories on a 4-point scale (1 = Strongly disagree, 4 = Strongly agree):

• The simulated patient's personality is consistently and accurately reflected during the interaction.
• The patient's language use (vocabulary, grammar, fluency) is appropriate to their assigned language proficiency level.
• The patient's ability to recall medical and personal information is consistent with their assigned recall level (e.g., low or high).
• The patient's coherence and clarity of thought match the assigned level of cognitive confusion.
• The patient's overall communication style matches what I would expect from a real ED patient.

6.1.2 RQ2: Do LLMs accurately derive responses based on the given profile?

In medical consultations, physicians rely on patient-reported information to form a differential diagnosis. The quality of patient-provided information directly impacts the effectiveness of the dialogue and the correctness of the physician's conclusions. We evaluate factual accuracy at two levels: 1) sentence level and 2) dialogue level.

The $i$-th patient profile, denoted as $P_i = \{x_k^i\}_{k=1}^{K}$, consists of $K$ predefined items, among $N$ total profiles. The dialogue between the physician and PATIENTSIM configured with profile $P_i$ is represented as $D_i$. Within $D_i$, PATIENTSIM's utterances over $T$ turns are represented as $U_i = \{u_t^i\}_{t=1}^{T}$, where $u_t^i$ is the utterance at turn $t$. Each utterance $u_t^i$ may contain multiple sentences, denoted as $u_t^i = \{s_{tm}^i\}_{m=1}^{M}$, where $M$ is the number of sentences in utterance $u_t^i$. We classify $s_{tm}^i$ as supported if it is related to at least one profile item $x_k^i$, and unsupported if it is unrelated to any profile item.

Sentence-level evaluation  To assess the accuracy of PATIENTSIM's responses precisely, we first analyze all of its responses at the sentence level. Here, we focus only on supported sentences, assessing their factual accuracy based on the patient profile. Evaluation of unsupported sentences is addressed separately in Sec. 6.1.3. We first explain how we detect supported sentences, and then describe how we calculate the factual accuracy of the supported sentences (Figure 2).

Figure 2: Overall process for sentence-level factuality evaluation. For each sentence in PATIENTSIM's utterance, we first determine whether it contains some information. If it does, we identify all relevant profile items and assess whether the sentence is supported by each of them. If the sentence includes information not specified in the profile, we classify it as unsupported and then assess its plausibility based on other profile information to determine a plausibility rating.

Each step of the evaluation is performed by providing the LLM sentence classifier with the preceding conversation history (i.e., the dialogue up to the current sentence $s_{tm}^i$) along with step-specific instructions (Appendix D.2.2). The evaluation proceeds as follows. For each sentence $s_{tm}^i$:

1. Classify sentence type (multi-class, Step 1 in Figure 2): We categorize each sentence $s_{tm}^i$ as one of five types: politeness, emotion, inquiry, meta-information, or information ($C(s_{tm}^i)$). Only the sentences classified as information proceed to the next step.

2. Identify the related profile items (multi-label, Step 2-1 in Figure 2): For each sentence $s_{tm}^i$ classified as information, we determine which of the patient's profile items $x_k^i$ it relates to. This is a multi-label classification task, where a sentence may relate to multiple $x_k^i$'s. The result is a binary vector $R(s_{tm}^i) = [r_1^i, r_2^i, \ldots, r_K^i]$, where

$$r_k^i = \begin{cases} 1 & \text{if } s_{tm}^i \text{ is related to item } x_k^i, \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$

3. Verify factual accuracy (Step 3-1 in Figure 2): For each profile item $x_k^i$ where $r_k^i = 1$, we perform a Natural Language Inference (NLI) evaluation to check whether $s_{tm}^i$ aligns with $x_k^i$. NLI, a method to evaluate textual consistency, labels the relationship as entailment (consistent), contradiction (inconsistent), or neutral (unrelated). We denote the NLI label as $\mathrm{NLI}(s_{tm}^i, x_k^i) \in \{\text{entailment}, \text{contradiction}, \text{neutral}\}$. If $r_k^i = 0$, no NLI evaluation is performed for that item, as the sentence $s_{tm}^i$ is deemed unrelated to $x_k^i$.

The final factual accuracy is represented by Entail (%), calculated as follows:

$$\mathrm{Entail}(D_i) = \frac{\sum_{t=1}^{T} \sum_{m=1}^{M} \mathbf{1}[C(s_{tm}^i) = \text{info}] \cdot \max_k \big( r_k^i \cdot \mathbf{1}[\mathrm{NLI}(s_{tm}^i, x_k^i) = \text{entail}] \big)}{\sum_{t=1}^{T} \sum_{m=1}^{M} \mathbf{1}[C(s_{tm}^i) = \text{info}]} \tag{2}$$

which reflects the percentage of supported sentences that are factually accurate.

Dialogue-level evaluation  Sentence-level accuracy may be biased if the physician fails to elicit a comprehensive medical history, focusing only on specific topics. To mitigate this, we evaluate information coverage and the accuracy of the covered information at the dialogue level. First, we extract a derived profile, $\hat{P}_i = \{\hat{x}_1^i, \hat{x}_2^i, \ldots, \hat{x}_K^i\}$, inferred by the LLM profile extractor from the dialogue $D_i$. We then compute the Information Coverage (ICov), the proportion of item categories present in both the derived profile ($\hat{P}_i$) and the original profile ($P_i$):

$$\mathrm{ICov} = \frac{1}{N} \sum_{i=1}^{N} \frac{|O_i|}{K}, \quad \text{where } O_i = \{\, j \mid x_j^i \neq \emptyset,\ \hat{x}_j^i \neq \emptyset \,\} \tag{3}$$

For overlapping item categories, we calculate Information Consistency (ICon) at the dialogue level:

$$\mathrm{ICon} = \frac{1}{N} \sum_{i=1}^{N} \left( \frac{1}{|O_i|} \sum_{o \in O_i} \mathrm{score}(x_o^i, \hat{x}_o^i) \right) \tag{4}$$

Table 1: Persona fidelity evaluation of various LLMs across five criteria (Personality, Language, Recall, Confused, and Realism), assessed by Gemini-2.5-flash. Each criterion is rated on a 4-point scale. The average score (Avg.) summarizes overall performance.

Engine | Personality | Language | Recall | Confused | Realism | Avg.
Gemini-2.5-flash | 3.94 | 3.54 | 3.64 | 3.38 | 3.37 | 3.57
Gpt-4o-mini | 3.58 | 3.55 | 3.78 | 3.88 | 3.26 | 3.61
DeepSeek-R1-distill-Llama-70B | 3.87 | 3.58 | 3.42 | 2.50 | 3.19 | 3.31
Qwen2.5-72b-instruct | 3.30 | 3.68 | 3.63 | 3.50 | 3.22 | 3.46
Llama3.3-70b-instruct | 3.92 | 3.40 | 3.78 | 4.00 | 3.28 | 3.68
Llama3.1-70b-instruct | 3.65 | 3.51 | 3.62 | 4.00 | 3.23 | 3.60
Llama3.1-8b-instruct | 3.53 | 3.29 | 3.70 | 4.00 | 3.20 | 3.54
Qwen2.5-7b-instruct | 3.23 | 3.49 | 3.31 | 3.50 | 3.16 | 3.34

Table 2: Sentence-level factuality evaluation across eight LLMs, by Gemini-2.5-flash.
Supported statements refer to sentences that relate to at least one item in the given profile. Unsupported statements include at least one piece of information that is not explicitly mentioned in the profile. Entail and Contradict are evaluated for supported statements, while Plausibility is assessed for unsupported ones.

Engine | Info (%) | Supported (%) | Unsupported (%) | Entail (%, ↑) | Contradict (%, ↓) | Plausibility (↑)
Gemini-2.5-flash | 0.972 | 0.763 | 0.316 | 0.978 | 0.022 | 3.953
Gpt-4o-mini | 0.957 | 0.721 | 0.428 | 0.968 | 0.032 | 3.929
DeepSeek-R1-distill-Llama-70B | 0.975 | 0.762 | 0.416 | 0.968 | 0.032 | 3.911
Qwen2.5-72b-instruct | 0.975 | 0.683 | 0.468 | 0.954 | 0.046 | 3.928
Llama3.3-70b-instruct | 0.958 | 0.796 | 0.387 | 0.981 | 0.019 | 3.963
Llama3.1-70b-instruct | 0.948 | 0.813 | 0.407 | 0.968 | 0.032 | 3.955
Llama3.1-8b-instruct | 0.944 | 0.771 | 0.488 | 0.944 | 0.056 | 3.897
Qwen2.5-7b-instruct | 0.987 | 0.703 | 0.453 | 0.939 | 0.061 | 3.862

Here, $\mathrm{score}(x_o^i, \hat{x}_o^i)$ in Eq. (4) measures the semantic similarity between the original item $x_o^i$ and the derived item $\hat{x}_o^i$. Similarity ratings are assigned on a 4-point scale using the Gemini-2.5-Flash model as the scoring function. Prompts for the profile extractor and similarity scorer are in Appendix D.2.2.

6.1.3 RQ3: Can LLMs reasonably fill in the blanks?

It is infeasible to predefine or generate clinically accurate information across all possible dimensions. Thus, patient simulators may encounter questions about information not explicitly described in the given profile. Unlike previous studies, which typically refuse to answer such questions (i.e., not allowing any unsupported sentences), thus limiting the flow of doctor-patient dialogue, we instead let PATIENTSIM answer such questions based on the given profile (i.e., allowing unsupported sentences). However, it is essential to assess the clinical plausibility of those unsupported sentences in order to guarantee the overall clinical validity of PATIENTSIM and its effectiveness as a simulation tool.

Therefore, this evaluation targets unsupported sentences: statements containing at least one piece of information not explicitly present in the given profile (per RQ2). We start by identifying the information sentences (RQ2, Step 1). To classify an information sentence as unsupported, we evaluate whether each information sentence includes undefined information (Figure 2, Step 2-2), based on criteria detailed in Appendix D.2.3. To maximize recall, we apply two additional rules:

• If no related profile items $x_k^i$ are found for a sentence $s_{tm}^i$ (i.e., $\sum R(s_{tm}^i) = 0$), it is deemed unsupported, where $R(s_{tm}^i)$ is the binary vector indicating related profile items (RQ2, Step 2).
• If all NLI labels for $s_{tm}^i$ are neutral, it is classified as unsupported.

We use the LLM sentence classifier to classify unsupported sentences in patient utterances (Sec. 6.1.2). These identified sentences are then rated for plausibility on a 4-point scale by both human and LLM evaluators (Figure 2, Step 3-2). Since plausibility judgments may vary based on medical expertise, potentially introducing bias, we assigned three different annotators to each sample and report inter-clinician agreement to ensure robustness for human evaluation.

6.2 Experimental settings

We randomly sampled a total of 170 profiles and divided them into two subsets: 108 profiles for evaluating RQ1 (i.e., persona evaluation) and 52 profiles for evaluating RQ2 (i.e., factual accuracy) and RQ3 (i.e., clinical plausibility).
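The supported/unsupported routing of Sec. 6.1.3 (Figure 2, Steps 1-3) reduces to a small decision function over the classifier's per-step outputs. A minimal sketch; the function and label names are illustrative, not part of the actual pipeline:

```python
def classify_sentence(sentence_type, related_items, nli_labels):
    """Route one patient sentence through the Figure 2 pipeline (sketch).

    sentence_type: Step 1 label, one of {"politeness", "emotion",
        "inquiry", "meta-information", "information"}.
    related_items: indices k with r_k = 1 from Step 2-1.
    nli_labels: {k: "entailment" | "contradiction" | "neutral"} from Step 3-1.
    """
    if sentence_type != "information":
        return "skipped"        # Step 1: only information sentences proceed
    if not related_items:
        return "unsupported"    # rule 1: no related profile item found
    if all(nli_labels[k] == "neutral" for k in related_items):
        return "unsupported"    # rule 2: every NLI label is neutral
    return "supported"          # goes on to Entail/Contradict scoring

print(classify_sentence("information", [], {}))                  # unsupported
print(classify_sentence("information", [2], {2: "entailment"}))  # supported
```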
We used 10 profiles to validate the LLM sentence classifier's performance in automatically detecting supported and unsupported statements (RQ2, 3). Detailed statistics are provided in Appendix A.4.

Table 3: Dialogue-level factuality evaluation across Social History (Social), Previous Medical History (PMH), and Current Visit Information (Current Visit), evaluated by Gemini-2.5-flash. Information Coverage (ICov) is reported as a proportion; Information Consistency (ICon) on a 4-point scale.

Engine | ICov Social | ICov PMH | ICov Current Visit | ICov Avg. | ICon Social | ICon PMH | ICon Current Visit | ICon Avg.
Gemini-2.5-flash | 0.44 | 0.77 | 0.88 | 0.70 | 3.82 | 3.51 | 3.18 | 3.50
GPT-4o-mini | 0.55 | 0.76 | 0.89 | 0.73 | 3.72 | 3.33 | 3.01 | 3.35
DeepSeek-R1-distill-Llama-70B | 0.50 | 0.76 | 0.91 | 0.72 | 3.73 | 3.31 | 3.08 | 3.37
Qwen2.5-72B-instruct | 0.47 | 0.77 | 0.90 | 0.71 | 3.75 | 3.50 | 2.95 | 3.40
Llama3.3-70B-instruct | 0.53 | 0.78 | 0.89 | 0.73 | 3.72 | 3.47 | 3.10 | 3.43
Llama3.1-70B-instruct | 0.56 | 0.77 | 0.89 | 0.74 | 3.82 | 3.43 | 3.05 | 3.43
Llama3.1-8b-instruct | 0.61 | 0.78 | 0.88 | 0.76 | 3.68 | 3.19 | 2.85 | 3.24
Qwen2.5-7B-instruct | 0.44 | 0.75 | 0.89 | 0.69 | 3.60 | 3.32 | 2.89 | 3.27

As the LLM backbone of PATIENTSIM, we selected eight representative models: two API-based LLMs (Gemini-2.5 Flash [10], GPT-4o-mini [43]) and six open-source models (Llama 3.1 8B [16], Llama 3.1 70B [16], Llama 3.3 70B, Qwen2.5 72B [48], Qwen2.5 7B [48], and DeepSeek-R1-distill-Llama-70B). We selected GPT-4o-mini to play the role of the doctor. We employed Gemini-2.5-Flash as an evaluator model to assess responses across all experiments, using task-specific rubrics. Detailed information about model selection is provided in Appendix F.4. For human evaluation, we recruited four general practitioners² through Ingedata³, an AI data annotation company. Detailed information is provided in Appendix E.

7 Results

In this section, we report the performance of various LLMs with respect to RQ1, 2, and 3, using LLM-as-judge [62]. Based on these evaluations, we identified the best-performing model and conducted a human evaluation to further validate its performance.

RQ1: Do LLMs naturally reflect diverse persona traits in their responses?  Table 1 presents the fidelity of various baseline LLMs across different persona axes. Results underscore Llama's strengths in simulation tasks, revealing that general LLM benchmark performance does not always correlate with simulation fidelity [20, 50]. The Llama series demonstrates robust performance, particularly in aspects related to emotional expression (i.e., the Personality and Confused columns in Table 1). Notably, Llama 8B exhibits better fidelity than Qwen 72B, despite having fewer parameters. The Confused column shows the highest variability among models, and most models struggle with negative emotions such as impatience and distrust, as detailed in Appendix F.1. This may stem from safety measures in LLMs to avoid harmful responses, potentially limiting role-playing capabilities [11].

RQ2: Do LLMs accurately derive responses based on the given profile?  We analyze the factual accuracy of sentences containing clinical information, focusing on statements explicitly mentioned in the given profile (supported). For each sentence, entailment is calculated with respect to all relevant profile items (Table 2, Entail column). All models demonstrate high entailment, but a notable gap exists between larger models (≥70B parameters) and smaller models (≤8B parameters), with the latter more prone to incorrect statements.
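The Entail figures in Table 2 are instances of Eq. (2): over the information sentences of a dialogue, the fraction whose NLI step produced at least one entailment against a related profile item. A minimal sketch that mirrors the equation; the data structures are illustrative, not the pipeline's actual format:

```python
def entail_score(sentences):
    """Sketch of Eq. (2) for one dialogue.

    Each sentence: {"type": <Step 1 class>,
                    "nli": {item_index: NLI label} for related items}
    """
    # Denominator: sentences classified as "information" in Step 1.
    info = [s for s in sentences if s["type"] == "information"]
    if not info:
        return 0.0
    # Numerator: info sentences with at least one 'entailment' label.
    entailed = sum(
        any(lbl == "entailment" for lbl in s["nli"].values()) for s in info
    )
    return entailed / len(info)

dialogue = [
    {"type": "politeness", "nli": {}},                     # ignored by Step 1
    {"type": "information", "nli": {0: "entailment"}},     # counts
    {"type": "information", "nli": {1: "contradiction"}},  # does not count
]
print(entail_score(dialogue))  # 0.5
```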
Unlike persona fidelity, information accuracy appears to correlate with model size, likely due to smaller models' limited capacity to process long contexts compared to larger ones. Llama 70B models perform well in both aspects.

Table 3 compares dialogue-level ICov and ICon across the Social (i.e., social history), PMH (i.e., previous medical history), and Current Visit (e.g., chief complaint, present illness) categories. PMH and Current Visit have similar coverage across simulators, as they are standard in medical interviews. Social coverage varies with context, as details like occupation or exercise are less frequently queried. Current Visit has lower consistency scores due to its subjective nature, requiring detailed questions for full symptom capture. Among LLMs, Gemini-2.5-Flash leads in consistency, followed by Llama 3.3 and Llama 3.1 70B, the top open-source models.

² Two individuals have 4 years and two individuals have 6 years of clinical experience post-physician license. The latter two also hold nursing licenses, with 13 and 17 years of nursing experience, respectively.
³ https://www.ingedata.ai/

Table 4: Plausibility scores for unsupported sentences in patient responses, labeled by four clinicians, with three annotators per sentence (out of 4). Inter-clinician agreement is measured by Gwet's AC1, with 95% confidence intervals estimated via 1,000 bootstrap iterations.

 | Clinician A | Clinician B | Clinician C | Clinician D
Clinician A | – | 0.949 (0.927, 0.969) | 0.968 (0.951, 0.983) | 0.866 (0.828, 0.901)
Clinician B | 0.949 (0.927, 0.969) | – | 0.961 (0.940, 0.979) | 0.853 (0.818, 0.886)
Clinician C | 0.968 (0.951, 0.983) | 0.961 (0.940, 0.979) | – | 0.879 (0.843, 0.913)
Clinician D | 0.866 (0.828, 0.901) | 0.853 (0.818, 0.886) | 0.879 (0.843, 0.913) | –
Plausibility (4-point scale) | 3.955 | 3.923 | 3.985 | 3.781

RQ3: Can LLMs reasonably fill in the blanks?  To address this question, we focus on the unsupported sentences that include at least one piece of information not explicitly mentioned in the given profile. The Plausibility column of Table 2 shows the plausibility ratings for answers about unspecified information. On average, evaluations are conducted on 764 sentences per model. Overall, larger models consistently demonstrate higher plausibility than smaller models. Smaller models are more likely to make additional statements that directly contradict their profile's medical history or their own prior statements, possibly due to limitations in processing long contexts. The Llama series again exhibits the best performance in this task, underscoring its potential to simulate realistic patient responses.

Figure 3: Score distribution across six evaluation criteria (Personality, Language, Recall, Confused, Realism, Tool Usefulness) in the clinician evaluation (4-point scale).

Human evaluation  As Llama 3.3 consistently demonstrated robust performance across all research questions, we selected it as the LLM for PATIENTSIM. For RQ1, clinicians engaged in approximately 10-15 minutes of conversation for each case and rated the interactions on six evaluation criteria (4-point scale). Figure 3 shows the score distribution across all criteria. Clinicians consistently assigned high scores, with an overall average of 3.89 out of 4. In addition to the five criteria used by the LLM-as-judge (Sec. 7, RQ1), clinicians also rated their agreement with the statement: "This chatbot would be useful in education for practicing consultation skills". The average score was 3.75, highlighting the simulator's potential as an effective educational tool.
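The pairwise agreement in Table 4 uses Gwet's AC1 [17]: AC1 = (p_a − p_e)/(1 − p_e), where p_a is the observed agreement and the chance term is p_e = (1/(C−1)) Σ_c π_c(1 − π_c), with π_c the mean marginal proportion of category c across both raters. A minimal two-rater sketch on a 4-point scale (toy ratings, not the study's data; bootstrap CIs omitted):

```python
def gwet_ac1(r1, r2, categories=(1, 2, 3, 4)):
    """Gwet's first-order agreement coefficient (AC1) for two raters."""
    n, c = len(r1), len(categories)
    pa = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    # Mean marginal proportion of each category across both raters.
    pi = [(r1.count(k) + r2.count(k)) / (2 * n) for k in categories]
    pe = sum(p * (1 - p) for p in pi) / (c - 1)     # chance agreement
    return (pa - pe) / (1 - pe)

print(gwet_ac1([4, 4, 3, 4], [4, 4, 3, 4]))   # 1.0 (perfect agreement)
print(gwet_ac1([4, 4, 3, 2], [4, 3, 3, 2]))   # 0.68
```

Unlike Cohen's kappa, AC1 stays well-behaved when ratings cluster in one category, which is why it suits the near-ceiling plausibility scores in Table 4.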
In RQ2, we evaluated the LLM sentence classifier's performance using labels manually annotated by the authors across 411 sentences from 10 dialogues. The validation results are presented in Appendix F.4.1. For RQ3, Table 4 presents the plausibility scores from four different clinicians, along with their inter-clinician agreement. Each clinician evaluated 39 dialogues (about 616 sentences), carefully reviewing the patient profiles and conversation histories and spending approximately 6 minutes per dialogue. They assigned an average plausibility score of 3.91, with high agreement as measured by Gwet's Agreement Coefficient (AC1) [17], demonstrating that our simulator generates meaningful responses. A more detailed analysis is provided in Appendix F.

8 Discussion

Although we carefully designed the overall framework, several limitations remain: 1) Our experiment is based on the MIMIC database, given that it is currently the only publicly available dataset to integrate clinical notes with ED triage information. This may limit the generalizability of our findings. 2) Due to the text-based nature of our simulation environment, the simulator cannot capture non-verbal expressions (e.g., facial features, body movements), leading to limited persona representation. 3) Human evaluation was conducted with four clinicians, which could limit the generalizability of the evaluation results.

To enhance the realism and generalizability of our framework, several avenues can be explored in future work. First, incorporating multimodal features (e.g., tone, facial expressions, or gestures), possibly via
virtual reality (VR) simulations, would allow for more comprehensive modeling of patient personas. Second, increasing the scale and diversity of human evaluators can provide more reliable validation of LLM-based assessments.

References

[1] M. Almansoori, K. Kumar, and H. Cholakkal. Self-evolving multi-agent simulations for realistic clinical interactions, 2025. URL https://arxiv.org/abs/2503.22678.
[2] A. Banerjee and D. Sanyal. Dynamics of doctor–patient relationship: A cross-sectional study on concordance, trust, and patient enablement. Journal of Family and Community Medicine, 19(1):12–19, 2012. doi: 10.4103/2230-8229.94006.
[3] Z. Bao, Q. Liu, Y. Guo, Z. Ye, J. Shen, S. Xie, J. Peng, X. Huang, and Z. Wei. Piors: Personalized intelligent outpatient reception based on large language model with multi-agents medical scenario simulation, 2024. URL https://arxiv.org/abs/2411.13902.
[4] H. S. Barrows. An overview of the uses of standardized patients for teaching and evaluating clinical skills. Academic Medicine, 68(6):443–451, 1993.
[5] G. S. Boyer, D. W. Templin, W. P. Goring, J. C. Cornoni-Huntley, D. F. Everett, R. C. Lawrence, S. P. Heyse, and A. Bowler. Discrepancies between patient recall and the medical record: potential impact on diagnosis and clinical assessment of chronic disease. Archives of Internal Medicine, 155(17):1868–1872, 1995.
[6] F. Chipidza, R. S. Wallwork, T. N. Adams, and T. A. Stern. Evaluation and treatment of the angry patient. Primary Care Companion for CNS Disorders, 18(3), 2016.
[7] G. B. Clack, J. Allen, D. Cooper, and J. O. Head. Personality differences between doctors and their patients: implications for the teaching of communication skills. Medical Education, 38(2):177–186, 2004. doi: 10.1111/j.1365-2923.2004.01752.x.
[8] D. A. Cook. Creating virtual patients using large language models: scalable, global, and low cost. Medical Teacher, 47(1):40–42, 2025. doi: 10.1080/0142159X.2024.2376879.
[9] G. Cousin and M. Schmid Mast.
Agreeable patient meets affiliative physician: How physician behavior affects patient outcomes depends on patient personality. Patient Education and Counseling, 90(3):399–404, 2013. ISSN 0738-3991. Quality of Communication from the Patient Perspective.
[10] Google DeepMind. Start building with gemini 2.5 flash, 2024. URL https://developers.googleblog.com/en/start-building-with-gemini-25-flash/.
[11] A. Deshpande, V. Murahari, T. Rajpurohit, A. Kalyan, and K. Narasimhan. Toxicity in chatgpt: Analyzing persona-assigned language models. In H. Bouamor, J. Pino, and K. Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1236–1270, Dec. 2023. doi: 10.18653/v1/2023.findings-emnlp.88. URL https://aclanthology.org/2023.findings-emnlp.88/.
[12] Z. Du, L. Zheng, R. Hu, Y. Xu, X. Li, Y. Sun, W. Chen, J. Wu, H. Cai, and H. Ying. Llms can simulate standardized patients via agent coevolution, 2024. URL https://arxiv.org/abs/2412.11716.
[13] C. Elendu, D. C. Amaechi, A. U. Okatta, E. C. Amaechi, T. C. Elendu, C. P. Ezeh, and I. D. Elendu. The impact of simulation-based training in medical education: A review. Medicine, 103(27):e38813, July 2024. doi: 10.1097/MD.0000000000038813.
[14] Z. Fan, L. Wei, J. Tang, W. Chen, W. Siyuan, Z. Wei, and F. Huang. Ai hospital: Benchmarking large language models in a multi-agent medical interaction simulator. In Proceedings of the 31st International Conference on Computational Linguistics, 2025.
[15] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G.
B. Moody, C.-K. Peng, and H. E. Stanley. Physiobank, physiotoolkit, and physionet: Components of a new research resource for complex physiologic signals. Circulation [Online], 101(23):e215–e220, 2000. doi: 10.1161/01.CIR.101.23.e215.
[16] A. Grattafiori et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
[17] K. L. Gwet. Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61(1):29–48, 2008. doi: 10.1348/000711006X126600.
[18] J. R. Hampton, M. J. G. Harrison, J. R. A. Mitchell, J. S. Prichard, and C. Seymour. Relative contributions of history-taking, physical examination, and laboratory investigation to diagnosis and management of medical outpatients. British Medical Journal, 2(5969):486–489, May 1975. doi: 10.1136/bmj.2.5969.486.
[19] Y. Hicke, J. Geathers, N. Rajashekar, C. Chan, A. G. Jack, J. Sewell, M. Preston, S. Cornes, D. Shung, and R. Kizilcec. Medsimai: Simulation and formative feedback generation to enhance deliberate practice in medical education, 2025. URL https://arxiv.org/abs/2503.05793.
[20] Y. Huang, Z. Yuan, Y. Zhou, K. Guo, X. Wang, H. Zhuang, W. Sun, L. Sun, J. Wang, Y. Ye, and X. Zhang. Social science meets llms: How reliable are large language models in social simulations?, 2024. URL https://arxiv.org/abs/2410.23426.
[21] D. Jin, E. Pan, N. Oufattole, W.-H. Weng, H. Fang, and P. Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. arXiv preprint arXiv:2009.13081, 2020.
[22] Q. Jin, B. Dhingra, Z. Liu, W. Cohen, and X. Lu. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, 2019.
[23] A. Johnson, L. Bulgarelli, T. Pollard, L. A. Celi, R.
Mark, and S. Horng. MIMIC-IV-ED (version 2.2). https://doi.org/10.13026/5ntk-km72, 2023. PhysioNet.
[24] A. Johnson, T. Pollard, S. Horng, L. A. Celi, and R. Mark. Mimic-iv-note: Deidentified free-text clinical notes (version 2.2). https://doi.org/10.13026/1n74-ne17, 2023. PhysioNet.
[25] A. Johnson, L. Bulgarelli, T. Pollard, B. Gow, B. Moody, S. Horng, L. A. Celi, and R. Mark. MIMIC-IV (version 3.1). https://doi.org/10.13026/kpb9-mt58, 2024. PhysioNet.
[26] S. Johri, J. Jeong, B. A. Tran, D. I. Schlessinger, S. Wongvibulsin, Z. R. Cai, R. Daneshjou, and P. Rajpurkar. CRAFT-MD: A conversational evaluation framework for comprehensive assessment of clinical LLMs. In AAAI 2024 Spring Symposium on Clinical Foundation Models, 2024.
[27] K. E. Keifenheim, M. Teufel, J. Ip, N. Speiser, E. J. Leehr, S. Zipfel, and A. Herrmann-Werner. Teaching history taking to medical students: a systematic review. BMC Medical Education, 15(1):159, 2015. doi: 10.1186/s12909-015-0443-x.
[28] S. Kim, J. Shin, Y. Cho, J. Jang, S. Longpre, H. Lee, S. Yun, S. Shin, S. Kim, J. Thorne, and M. Seo. Prometheus: Inducing fine-grained evaluation capability in language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=8euJaTveKw.
[29] Y. Kim, C. Park, H. Jeong, Y. S. Chan, X. Xu, D. McDuff, H. Lee, M. Ghassemi, C. Breazeal, and H. W. Park. Mdagents: An adaptive
collaboration of llms for medical decision-making. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[30] M. B. Laws, Y. Lee, T. Taubin, W. H. Rogers, and I. B. Wilson. Factors associated with patient recall of key information in ambulatory specialty care visits: Results of an innovative methodology. PLoS ONE, 13(2):e0191940, 2018. doi: 10.1371/journal.pone.0191940.
[31] A. M. Legg, S. E. Andrews, H. Huynh, A. Ghane, A. Tabuenca, and K. Sweeny. Patients' anxiety and hope: predictors and adherence intentions in an acute care context. Health Expectations, 18(6):3034–3043, 2015. doi: 10.1111/hex.12288.
[32] J. Li, Y. Lai, W. Li, J. Ren, M. Zhang, X. Kang, S. Wang, P. Li, Y.-Q. Zhang, W. Ma, and Y. Liu. Agent hospital: A simulacrum of hospital with evolvable medical agents, 2025.
[33] S. S. Li, V. Balachandran, S. Feng, J. S. Ilgen, E. Pierson, P. W. Koh, and Y. Tsvetkov. Mediq: Question-asking LLMs and a benchmark for reliable interactive clinical reasoning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[34] Y. Li, C. Zeng, J. Zhong, R. Zhang, M. Zhang, and L. Zou. Leveraging large language model as simulated patients for clinical education, 2024. URL https://arxiv.org/abs/2404.13066.
[35] Y. Liao, Y. Meng, H. Liu, Y. Wang, and Y. Wang. An automatic evaluation framework for multi-turn medical consultations capabilities of large language models, 2023. URL https://arxiv.org/abs/2309.02077.
[36] Y. Liao, Y. Meng, Y. Wang, H. Liu, Y. Wang, and Y. Wang. Automatic interactive evaluation for large language models with state aware patient simulator, 2024.
[37] S. D. Lifchez and R. J. Redett. A standardized patient model to teach and assess professionalism and communication skills: The effect of personality type on performance. Journal of Surgical Education, 71(3):297–301, 2014. ISSN 1931-7204. doi: 10.1016/j.jsurg.2013.09.010.
[38] H. Liu, Y. Liao, S.
Ou, Y. Wang, H. Liu, Y. Wang, and Y. Wang. Med-pmc: Medical personalized multi-modal consultation with a proactive ask-first-observe-next paradigm, 2024. URL https://arxiv.org/abs/2408.08693.
[39] R. Louie, A. Nandi, W. Fang, C. Chang, E. Brunskill, and D. Yang. Roleplay-doh: Enabling domain-experts to create LLM-simulated patients via eliciting and adhering to principles. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 10570–10603, 2024.
[40] R. R. McCrae and P. T. Costa. Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52(1):81–90, 1987.
[41] A. N. Meyer, T. D. Giardina, L. Khawaja, and H. Singh. Patient and clinician experiences of uncertainty in the diagnostic process: Current understanding and future directions. Patient Education and Counseling, 104(11):2606–2615, 2021. ISSN 0738-3991. doi: 10.1016/j.pec.2021.07.028.
[42] Council of Europe. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge University Press, Cambridge, 2001.
[43] OpenAI. Gpt-4o mini: advancing cost-efficient intelligence, 2024. URL https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/.
[44] A. Pal, L. K. Umapathi, and M. Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering.
In Proceedings of the Conference on Health, Inference, and Learning, pages 248–260, 2022.
[45] M. C. Peterson, J. H. Holbrook, D. Von Hales, N. L. Smith, and L. V. Staker. Contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. Western Journal of Medicine, 156(2):163–165, 1992.
[46] E. J. Pérez-Stable and S. El-Toukhy. Communicating with diverse patients: How patient and clinician factors affect disparities. Patient Education and Counseling, 101(12):2186–2194, 2018. ISSN 0738-3991. doi: 10.1016/j.pec.2018.08.021.
[47] H. Qiu and Z. Lan. Interactive agents: Simulating counselor-client psychological counseling via role-playing llm-to-llm interactions, 2024. URL https://arxiv.org/abs/2408.15787.
[48] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei, H. Lin, J. Yang, J. Tu, J. Zhang, J. Yang, J. Yang, J. Zhou, J. Lin, K. Dang, K. Lu, K. Bao, K. Yang, L. Yu, M. Li, M. Xue, P. Zhang, Q. Zhu, R. Men, R. Lin, T. Li, T. Tang, T. Xia, X. Ren, X. Ren, Y. Fan, Y. Su, Y. Zhang, Y. Wan, Y. Liu, Z. Cui, Z. Zhang, and Z. Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.
[49] D. A. Redelmeier, U. Najeeb, and E. E. Etchells. Understanding patient personality in medical care: Five-factor model. Journal of General Internal Medicine, 36(7):2111–2114, 2021. ISSN 1525-1497. doi: 10.1007/s11606-021-06598-8.
[50] V. Samuel, H. P. Zou, Y. Zhou, S. Chaudhari, A. Kalyan, T. Rajpurohit, A. Deshpande, K. Narasimhan, and V. Murahari. Personagym: Evaluating persona agents and llms. arXiv preprint arXiv:2407.18416, 2024.
[51] S. Schmidgall, R. Ziaei, C. Harris, E. Reis, J. Jopling, and M. Moor. Agentclinic: a multimodal agent benchmark to evaluate ai in simulated clinical environments, 2024.
[52] E. Stolper, P. Van Royen, E. Jack, J. Uleman, and M. Olde Rikkert.
Embracing complexity with systems thinking in general practitioners’ clinical reasoning helps handling uncertainty. Journal of Evaluation in Clinical Practice , 27(5):1175–1181, 2021. doi: https://doi.org/10.1111/jep. 13549. [53] D. E. Stubbe. Alleviating anxiety: Optimizing communication with the anxious patient. Focus (Am Psychiatr Publ) , 15(2):182–184, 2017. doi: 10.1176/appi.focus.20170001. [54] R. L. Sudore, C. S. Landefeld, E. J. Pérez-Stable, K. Bibbins-Domingo, B. A. Williams, and D. Schillinger. Unraveling the relationship between literacy, language proficiency, and patient- physician communication. Patient Education and Counseling , 75(3):398–402, 2009. doi: 10.1016/j.pec.2009.02.019. [55] X. Tang, A. Zou, Z. Zhang, Z. Li, Y . Zhao, X. Zhang, A. Cohan, and M. Gerstein. MedAgents: Large language models as collaborators for zero-shot medical reasoning. In Findings of the Association for Computational Linguistics: ACL 2024 , 2024. [56] E. C. Toy, B. Simon, K. Takenaka, T. H. Liu, and A. J. Rosh. Case Files Emergency Medicine . McGraw-Hill Education / Medical, New York, 4th edition, 2017. ISBN 9781259640827. [57] T. Tu, A. Palepu, M. Schaekermann, K. Saab, J. Freyberg, R. Tanno, A. Wang, B. Li, M. Amin, N. Tomasev, S. Azizi, K. Singhal, Y . Cheng, L. Hou, A. Webson, K. Kulkarni, S. S. Mahdavi, C. Semturs, J. Gottweis, J. Barral, K. Chou, G. S. | https://arxiv.org/abs/2505.17818v1 |
Appendix

A Patient profile construction
  A.1 Database
  A.2 Database preprocessing
  A.3 Structured patient profile
    A.3.1 Profile items
    A.3.2 Note preprocessing
  A.4 Profile statistics
B Persona details for PatientSim
  B.1 Personality
  B.2 Language proficiency
  B.3 Medical history recall level
  B.4 Cognitive confusion level
C Simulation of doctor-patient interaction
  C.1 PatientSim
  C.2 Doctor
D Experimental settings
  D.1 Model configurations and dataset details
  D.2 LLM evaluation
    D.2.1 RQ1: Do LLMs naturally reflect diverse persona traits in their responses?
    D.2.2 RQ2: Do LLMs accurately derive responses based on the given profile?
    D.2.3 RQ3: Can LLMs reasonably fill in the blanks?
E Human evaluation
  E.1 Clinician recruitment for evaluation
  E.2 Persona fidelity evaluation
  E.3 Plausibility evaluation
F Experimental results
  F.1 RQ1: Do LLMs naturally reflect diverse persona traits in their responses?
    F.1.1 Additional result of LLM evaluation
    F.1.2 Additional result of human evaluation
  F.2 RQ2: Do LLMs accurately derive responses based on the given profile?
  F.3 RQ3: Can LLMs reasonably fill in the blanks?
  F.4 Ablation study
    F.4.1 Validation of sentence-level classification
    F.4.2 Ablation study on doctor LLM
G Responsible use and limitations

A Patient profile construction

A.1 Database

For our research, we utilize datasets from PhysioNet [15], adhering to the required credentials and permissions under the PhysioNet license. The datasets used are MIMIC-IV (v3.1) [25], MIMIC-IV-ED (v2.2) [23], and MIMIC-IV-Note (v2.2) [24].

MIMIC-IV (v3.1). MIMIC-IV (https://physionet.org/content/mimiciv/3.1/) is a comprehensive, deidentified dataset of patients admitted to the emergency department (ED) or intensive care unit (ICU) at Beth Israel Deaconess Medical Center (BIDMC) in Boston, MA. It includes data for over 65,000 ICU patients and over 200,000 ED patients. With a modular data organization emphasizing provenance, MIMIC-IV supports both individual and integrated use of diverse data sources, making it a rich resource for patient information extraction.

MIMIC-IV-ED (v2.2). MIMIC-IV-ED (https://physionet.org/content/mimic-iv-ed/2.2/) is a freely accessible database of 425,087 ED admissions at BIDMC from 2011 to 2019. It contains deidentified data compliant with the HIPAA Safe Harbor provision, including vital signs, triage information, medication reconciliation, medication administration, and discharge diagnoses.

MIMIC-IV-Note (v2.2). MIMIC-IV-Note (https://physionet.org/content/mimic-iv-note/2.2/) provides 331,794 deidentified discharge summaries for 145,915 patients admitted to the hospital or ED at BIDMC. All notes comply with HIPAA Safe Harbor provisions by removing protected health information and are linkable to MIMIC-IV, offering valuable clinical context.
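The three releases link on shared identifiers (subject_id, hadm_id, stay_id). The sketch below illustrates the kind of join and single-ED-stay exclusion used in our preprocessing; it is a toy example with invented rows and simplified columns, not the actual pipeline code, since the real tables require credentialed PhysioNet access.

```python
import pandas as pd

# Toy stand-ins for the credentialed MIMIC tables; column names follow
# MIMIC-IV-ED and MIMIC-IV-Note, but all row values are invented.
edstays = pd.DataFrame({"subject_id": [1, 1, 2], "hadm_id": [10, 10, 20],
                        "stay_id": [100, 101, 200]})
triage = pd.DataFrame({"subject_id": [1, 1, 2], "stay_id": [100, 101, 200],
                       "pain": ["7", "UTA", "3"],
                       "chiefcomplaint": ["Chest pain", "Fever", "Dyspnea"]})
notes = pd.DataFrame({"subject_id": [1, 2], "hadm_id": [10, 20],
                      "note_id": ["n1", "n2"]})

# Attach triage data to each ED stay, then link the discharge note
# written for the same hospital admission.
ed = edstays.merge(triage, on=["subject_id", "stay_id"], how="inner")
ed = ed.merge(notes, on=["subject_id", "hadm_id"], how="inner")

# One of the cohort criteria: drop admissions with more than one ED stay.
stays_per_adm = ed.groupby("hadm_id")["stay_id"].nunique()
ed = ed[ed["hadm_id"].map(stays_per_adm).eq(1)]
print(len(ed))  # hadm_id 10 has two ED stays, so only stay 200 survives -> 1
```

The same pattern extends to the remaining filters (missing-value exclusion, numeric pain-score conversion) described in A.2.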
A.2 Database preprocessing

To integrate patient information from both structured tables and free-text data, we selected patients from MIMIC-IV-ED (v2.2) with triage information and diagnosis records, together with the corresponding free-text discharge summaries from MIMIC-IV-Note (v2.2). This selection ensured access to detailed subjective symptoms, which are primarily captured in free-text notes rather than structured tables. We applied the following criteria to filter the data (Figure A1). From the resulting cohort, we randomly sampled up to 40 patient records per diagnosis category to ensure class balance and manage dataset size. The cohort selection criteria are as follows:

• Each hospital admission (hadm_id) must include exactly one ED stay. Admissions with multiple ED stays were excluded.
• To ensure diagnostic clarity, we included only ED stays with a single diagnosis code.
• We excluded records with missing or unknown values in the fields marital_status, insurance, race, chiefcomplaint, or arrival_transport.
• Pain scores were converted to numeric values based on field definitions. Non-numeric values and scores outside the
0–10 range were treated as outliers and removed.
• We capped the maximum number of medications per patient at 15.
• The History of Present Illness (HPI) section was limited to a maximum of 350 words and a minimum of 10 words. The Past Medical History (PMH) section was limited to a maximum of 80 words.
• To ensure the accuracy of symptom descriptions, we excluded records where the chiefcomplaint field or the Complaint or HPI sections of the discharge notes contained terms such as “coma,” “stupor,” or “altered mental status.”
• To avoid potential confounds related to language fluency, we excluded records where the chiefcomplaint field or the Complaint or HPI sections contained terms such as “slurred speech,” “dysarthria,” or “aphasia.”

Figure A1: Overview of data preprocessing for selecting patient records from MIMIC-IV, MIMIC-IV-ED and MIMIC-IV-Note. [The flowchart traces each merge and exclusion step, from the full releases (MIMIC-IV: 223,452 patients; MIMIC-IV-ED: 205,504 patients; MIMIC-IV-Note: 145,915 patients) down to the final cohort of 19,388 patients with 23,451 admissions and 23,451 ED stays.]

A.3 Structured patient profile

A.3.1 Profile items

We extracted accurate patient-related data from structured tables and used clinical notes to capture nuanced information, such as lifestyle and current symptoms, not found in the tables. Each patient profile consists of 24 items, including demographic details (age, gender, race), social history (illicit drug use, exercise, marital status, sexual history, children, living situation, occupation, insurance), medical history (allergies, family medical history, medical devices, past medical history), subjective information related to emergency department (ED) admission (history of present illness, chief complaint, pain level, medications), and meta-information about the ED visit (arrival transport, disposition, diagnosis). The source for each
item is summarized in Table A1. For simplicity, we use shortened dataset paths, such as mimic-iv-ed/edstays instead of mimic-iv-ed/2.2/ed/edstays. Disposition and diagnosis are not shared with the doctor but are included to enhance the simulator's understanding of the patient's status and condition severity during role-playing as the patient. Sexual history is included only for patients admitted due to urinary tract infections, based on feedback from doctors.

Table A1: Data sources and corresponding patient profile items used in PatientSim

mimiciv/hosp/admissions: insurance, marital_status
mimiciv/hosp/patients: age
mimic-iv-ed/edstays: gender, race, arrival_transport, disposition
mimic-iv-ed/triage: chiefcomplaint, pain
mimic-iv-ed/medrecon: medication
mimic-iv-ed/diagnosis: icd_title
mimic-iv-note/discharge: occupation, living situation, children, exercise, tobacco, alcohol, illicit drug, sexual history, allergies, medical history, family medical history, medical device, present illness

A.3.2 Note preprocessing

To extract structured information from free-text discharge notes, we use the Gemini-2.5-flash model, configured with 1024 thinking tokens to enable robust reasoning. This model is selected for its strong performance on natural language processing benchmarks [10]. Our preprocessing pipeline consists of three steps, each guided by task-specific instructions. First, we extract 13 key items (e.g., chief complaint, medical history, medications), as defined in the mimic-iv-note/discharge row of Table A1. This extraction is performed using structured prompts, shown in Figure A2 and Figure A3, designed to retrieve relevant information from discharge notes. Second, we filter out patient profiles where the extracted information does not align with the ED diagnosis table, to ensure dataset reliability. This step reduces noise arising from documentation errors or significant changes in patient condition after ED admission.
An evaluation prompt (Figure A4) instructs the LLM to assess the alignment between each patient profile and the ED diagnosis on a 5-point scale (1 = no match, 5 = perfect match). Only profiles scoring 3 or higher are retained to maintain clinical coherence. Third, to generate comprehensive patient profiles for simulation, we infer missing lifestyle and habit items, such as exercise, tobacco use, alcohol consumption, living situation, and occupation, based on existing profile information (Figure A5). These attributes are often inconsistently documented due to their varying clinical relevance. Since they enhance the realism of patient simulations without typically affecting critical outcomes, we impute missing values in a context-aware manner using the LLM, drawing on the available demographics and medical history. Any existing valid information is preserved. This step yields coherent and realistic profiles that improve the quality of downstream simulations.

A.4 Profile statistics

As a result of the above pipeline, we obtain a final set of 170 patient profiles. Table A2 presents detailed statistics on the demographic and clinical characteristics of these profiles. Age is grouped into 10-year intervals. Numerical variables (i.e., age, pain score) are sorted by value, while categorical variables are ordered by descending frequency.

Table A2: Detailed patient profile statistics for PatientSim, based on a total of 170 patient profiles.

Age Group: 20-30: 9 (5.3%), 30-40: 7 (4.1%), 40-50: 18 (10.6%), 50-60: 29 (17.1%), 60-70: 37 (21.8%), 70-80: 33 (19.4%), 80-90: 30 (17.6%), 90-100: 7 (4.1%)
Gender: Female: 88 (51.8%), Male: 82 (48.2%)
Race: White: 106 (62.4%), Black/African American:
24 (14.1%), Asian - Chinese: 6 (3.5%), Black/Cape Verdean: 6 (3.5%), Hispanic/Latino - Puerto Rican: 6 (3.5%), Other: 5 (2.9%), Asian: 2 (1.2%), Asian - Asian Indian: 2 (1.2%), Hispanic/Latino - Dominican: 2 (1.2%), White - Other European: 2 (1.2%), White - Russian: 2 (1.2%), Asian - South East Asian: 1 (0.6%), Black/African: 1 (0.6%), Hispanic/Latino - Central American: 1 (0.6%), Hispanic/Latino - Colombian: 1 (0.6%), Hispanic/Latino - Guatemalan: 1 (0.6%), Hispanic/Latino - Mexican: 1 (0.6%), Hispanic/Latino - Salvadoran: 1 (0.6%)
Marital Status: Married: 84 (49.4%), Single: 51 (30.0%), Widowed: 24 (14.1%), Divorced: 11 (6.5%)
Insurance: Medicare: 84 (49.4%), Private: 55 (32.4%), Medicaid: 23 (13.5%), Other: 8 (4.7%)
Arrival Transport: Walk In: 95 (55.9%), Ambulance: 74 (43.5%), Other: 1 (0.6%)
Disposition: Admitted: 164 (96.5%), Other: 6 (3.5%)
Pain Score: 0: 82 (48.2%), 1: 3 (1.8%), 2: 5 (2.9%), 3: 10 (5.9%), 4: 11 (6.5%), 5: 6 (3.5%), 6: 7 (4.1%), 7: 12 (7.1%), 8: 14 (8.2%), 9: 5 (2.9%), 10: 15 (8.8%)
Diagnosis: Intestinal obstruction: 39 (22.9%), Pneumonia: 34 (20.0%), Urinary tract infection: 34 (20.0%), Myocardial infarction: 34 (20.0%), Cerebral infarction: 29 (17.1%)

Prompt template for structuring clinical notes

[System Prompt]
You are an AI assistant designed to extract structured medical information from electronic health records (EHRs). Your task is to analyze the EHR content and extract all relevant information into predefined categories. Complete the fields below using the EHRs. Include only events that occurred before the most recent ED admission, and exclude any test results collected afterward. Return the extracted information in the following valid JSON format.

Field Definitions:
- demographics:
  – occupation: The patient's current job or employment status.
  – living_situation: Who the patient lives with, or their housing situation.
  – children: Number and gender of the patient's children.
- social_history:
  – exercise: Type(s) and frequency of physical activity or exercise.
  – tobacco: Any use of tobacco, including type, amount, and frequency.
  – alcohol: Alcohol consumption details, including type, frequency, and amount.
  – illicit_drug: Use of non-prescribed or illegal substances, including type, amount, and frequency.
  – sexual_history: Sexual activity, including partner(s), protection use, frequency, and timing.
- allergies: Any known allergies, including type of reaction if available.
- medical_history: Past medical conditions or diagnoses, including chronic conditions and any details like onset.
- family_medical_history: Medical conditions in family members, with relevant details if available.
- medical_device: Any medical or assistive devices in current use, including context or usage dates if noted.
- present_illness:
  – positive: Recent symptoms or conditions before the ED visit, with all existing relevant details such as onset, duration, severity, or progression. Do not include lab or imaging test results or diagnosis names.
  – negative: Symptoms or conditions the patient explicitly denies having.

Output Format (JSON):
{
  "demographics": { "occupation": "", "living_situation": "", "children": "" },
  "social_history": { "exercise": "", "tobacco": "", "alcohol": "", "illicit_drug": "", "sexual_history": "" },
  "allergies": "",
  "medical_history": "",
  "family_medical_history": "",
  "medical_device": "",
  "present_illness": { "positive": "", "negative": "" }
}

Guidelines:
1. Extract each field from the entire EHR with complete accuracy.
2. Keep each field to concise, keyword-based phrases without full sentences or narrative descriptions.
3. Express information briefly, avoiding verbs, pronouns, or
unnecessary words.
4. If a field contains multiple values, combine them into a single string separated by semicolons.
5. Treat de-identified placeholders as nonexistent. Do not include placeholders like ‘___’ in any of the output fields.
6. Return ‘Not recorded’ for any field not mentioned in the EHR.

Figure A2: System prompt template for extracting and structuring electronic health record (EHR) data into predefined fields in JSON format, capturing patient information prior to the latest ED admission. Braced elements {} are substituted with values specific to each patient record.

Prompt template for structuring clinical notes

[User Prompt]
Patient's Electronic Health Record (EHR):
- Allergies: {Allergies}
- Chief Complaint: {Chief Complaint}
- History of Present Illness: {History of Present Illness}
- Past Medical History: {Past Medical History}
- Social History: {Social History}
- Family History: {Family History}

Figure A3: User prompt template for extracting and structuring electronic health record (EHR) data. Braced elements {} are substituted with values specific to each patient record.

Prompt template for profile validation

[System Prompt]
You are a helpful medical assistant. Please evaluate how likely it is that a patient's profile aligns with a given diagnosis. Predict the likelihood of the diagnosis based on the profile provided. Rate the likelihood on a scale from 1 to 5, where ‘1’ means the patient's history and symptoms do not match the diagnosis at all, and ‘5’ means the patient's history and symptoms fully align with the diagnosis.
Please generate your output as a valid JSON dictionary in the following format:
{
  "explanation": reason for the rating,
  "likelihood_rating": 1 to 5
}

[User Prompt]
Patient's Profile:
• Demographics:
  – Age: {age}
  – Gender: {gender}
  – Race: {race}
• Social History:
  – Tobacco: {tobacco}
  – Alcohol: {alcohol}
  – Illicit drug use: {illicit_drug}
  – Sexual History: {sexual_history}
  – Exercise: {exercise}
  – Marital status: {marital_status}
  – Children: {children}
  – Living Situation: {living_situation}
  – Occupation: {occupation}
  – Insurance: {insurance}
• Previous Medical History:
  – Allergies: {allergies}
  – Family medical history: {family_medical_history}
  – Medical devices used before this ED admission: {medical_device}
  – Medical history prior to this ED admission: {medical_history}
• Current Visit Information:
  – Present illness:
    - positive: {present_illness_positive}
    - negative (denied): {present_illness_negative}
  – ED chief complaint: {chiefcomplaint}
  – Pain level at ED admission (0 = no pain, 10 = worst pain imaginable): {pain}
  – Current medications they are taking: {medication}
  – ED Arrival Transport: {arrival_transport}
  – ED disposition: {disposition}
Diagnosis: {diagnosis}

Figure A4: Prompt template for scoring the alignment between patient records and diagnosis. Braced elements {} are substituted with patient-specific values.

Prompt template for completing patient profiles

[System Prompt]
You are an AI assistant specializing in processing and completing lifestyle information for individuals. Your task is to analyze the provided electronic health records (EHRs) and update the profile section by filling in any missing details with realistic, plausible responses.

Field Definitions:
- demographics:
  – occupation: The patient's current job or employment status.
  – living_situation: Who the patient lives with, or their housing situation.
  – children: Number and gender of the patient's children.
- social_history:
  – exercise: Type(s) and frequency of physical activity or exercise.
  – tobacco: Any use of tobacco, including type, amount, and frequency.
  – alcohol: Alcohol consumption details, including type, frequency, and amount.
  – illicit_drug: Use of non-prescribed or illegal substances, including type, amount, and frequency.
  – sexual_history: Sexual activity, including partner(s), protection use, frequency, and timing.

Guidelines:
1. For any field marked as ‘Not recorded’, generate a realistic and plausible entry that aligns with the patient's EHR and other profile information.
2. For fields containing placeholders like ‘___’, replace the placeholder with plausible values based on the field's context and the patient's profile.
3. Do not modify any field that already contains valid data, except for placeholders (‘___’).
4. Use clear language, while preserving appropriate medical or social context.
5. Convert first-person responses to third-person. For example, change ‘I live alone’ to ‘Lives alone.’
6. Do not refer to the individual using gendered pronouns (‘he’ or ‘she’). Use gender-neutral phrasing.
7. Represent each field as a string. Use semicolons to separate multiple items within the same field.

[User Prompt]
Patient's Electronic Health Record (EHR):
- Age: {age}
- Gender: {gender}
- Race: {race}
- Marital Status: {marital_status}
- Insurance: {insurance}
- Medical device: {medical_device}
- Medical history: {medical_history}
- Present illness: {present_illness}
- Family medical history: {family_medical_history}

Patient Profile Template (to complete):
{
  "demographics": {
    "occupation": "{occupation}",
    "living_situation": "{living_situation}",
    "children": "{children}"
  },
  "social_history": {
    "exercise": "{exercise}",
    "tobacco": "{tobacco}",
    "alcohol": "{alcohol}",
    "illicit_drug": "{illicit_drug}",
    "sexual_history": "{sexual_history}"
  }
}

Figure A5: Prompt template for completing the patient's social history section, using EHR context. Braced elements {} are substituted with patient-specific values.
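The templates above are used by filling the braced placeholders and parsing the model's JSON reply; the validation step (A.3.2) then keeps only profiles scoring 3 or higher. A minimal sketch with a stubbed LLM call (the real pipeline uses Gemini-2.5-flash; the abbreviated template and the stub's matching rule are illustrative assumptions):

```python
import json

# Heavily abbreviated stand-in for the Figure A4 user prompt.
VALIDATION_USER_PROMPT = (
    "Patient's Profile:\n- Age: {age}\n- Gender: {gender}\n"
    "- ED chief complaint: {chiefcomplaint}\nDiagnosis: {diagnosis}"
)

def score_alignment(profile: dict, llm) -> int:
    """Ask the LLM to rate profile/diagnosis alignment on a 1-5 scale."""
    prompt = VALIDATION_USER_PROMPT.format(**profile)
    reply = llm(prompt)  # expected to return the JSON dict from Figure A4
    return json.loads(reply)["likelihood_rating"]

def filter_profiles(profiles, llm, threshold=3):
    """Retain only profiles scoring >= threshold (step two of A.3.2)."""
    return [p for p in profiles if score_alignment(p, llm) >= threshold]

# Stub LLM: flags one obviously mismatched complaint/diagnosis pair.
def fake_llm(prompt: str) -> str:
    rating = 1 if "Rash" in prompt and "Myocardial infarction" in prompt else 5
    return json.dumps({"explanation": "stub", "likelihood_rating": rating})

profiles = [
    {"age": 67, "gender": "M", "chiefcomplaint": "Chest pain",
     "diagnosis": "Myocardial infarction"},
    {"age": 30, "gender": "F", "chiefcomplaint": "Rash",
     "diagnosis": "Myocardial infarction"},
]
kept = filter_profiles(profiles, fake_llm)
print(len(kept))  # → 1: the mismatched rash/MI profile is filtered out
```

Swapping `fake_llm` for a real model client reproduces the thresholding behavior described above.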
B Persona details for PatientSim

B.1 Personality

We outline six patient personalities relevant to medical consultations in the ED: distrustful, impatient, overanxious, overly positive, and verbose, with neutral (straightforward communication) as the baseline. Table B3 provides descriptions of each personality type. These prompts have been reviewed and validated by medical experts.

B.2 Language proficiency

We adopt the Common European Framework of Reference for Languages (CEFR), an international standard for assessing language proficiency, and simplify it into three levels: A (basic), B (intermediate), and C (advanced). The prompts for each level are shown in Table B4. These prompts are designed based on CEFR's official reference points, self-assessment grid, and qualitative descriptors of spoken language use (https://www.coe.int/en/web/common-european-framework-reference-languages/level-descriptions). To represent level-appropriate vocabulary, we use a CEFR-labeled word dictionary from Kaggle (https://www.kaggle.com/datasets/nezahatkk/10-000-english-words-cerf-labelled). As this dataset lacks medical terminology, we generate a complementary medical-domain vocabulary using Gpt-4o-1120. Gpt-4o generated 30 medical terms per CEFR level in each of three iterations, resulting in a pool of candidate terms for each proficiency level. Only terms that appeared in at least two iterations were retained, and overlapping terms across levels were removed to ensure level-specific clarity. From the general and medical vocabulary sets, we randomly sample 10 words per CEFR level for each patient profile. These sampled words populate the fields understand_words, misunderstand_words, understand_med_words, and misunderstand_med_words, representing the words a patient is likely to understand or misunderstand in both general and medical contexts, based on their assigned language proficiency. Since the sampled words vary across profiles even within the same CEFR level, this approach allows us to reflect individual variation in language comprehension among patients with the same
overall proficiency level.

B.3 Medical history recall level

We define medical history recall at two levels: high and low. Detailed descriptions of each level are provided in Table B5.

Table B3: Prompts for personality types used in PatientSim

Neutral:
1. Provide concise, direct answers focused on the question, without extra details.
2. Respond in a neutral tone without any noticeable emotion or personality.

Distrustful:
1. Express doubts about the doctor's knowledge.
2. Question the doctor's intentions and show skepticism about their inquiries.
3. Refuse to answer questions that seem unnecessary.
4. Contradict the doctor by citing friends, online sources, or past experiences, often trusting them more than the doctor.

Impatient:
1. Express irritation when conversations drag on or repeat details.
2. Demand immediate, straightforward answers over lengthy explanations.
3. React with annoyance to any delays, small talk, or deviations from the main topic.

Overanxious:
1. Provide detailed, dramatic descriptions of minor discomforts, framing them as severe.
2. Persistently express fears of serious or life-threatening conditions, seeking frequent reassurance.
3. Ask repeated questions to confirm that you do not have severe or rare diseases.
4. Shift from one imagined health concern to another, revealing ongoing worry or suspicion.

Overly positive:
1. Minimize medical concerns, presenting them as insignificant due to a positive outlook.
2. Underreport symptoms, describing them as mild or temporary even when they are significant.
3. Maintain a cheerful, worry-free demeanor, showing no distress despite discomfort or pain.

Verbose:
1. Provide detailed answers to questions, often including excessive information, even for simple ones.
2. Elaborate extensively on personal experiences and thoughts.
3. Avoid exaggerated emotions and repeating the same phrases.
4. Demonstrate difficulty allowing the doctor to guide the conversation.
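The medical-vocabulary construction in B.2 (keep terms proposed in at least two of three generation rounds, drop cross-level overlaps, then sample 10 words per profile) can be sketched as follows; the word lists here are invented placeholders, not the actual Gpt-4o outputs:

```python
import random
from collections import Counter

def consolidate(rounds):
    """Keep terms that appear in at least 2 of the 3 generation rounds."""
    counts = Counter(term for r in rounds for term in set(r))
    return {term for term, c in counts.items() if c >= 2}

# Three (invented) generation rounds for CEFR levels A and C.
rounds_a = [["fever", "pill", "cough"], ["fever", "pill"], ["fever", "nurse"]]
rounds_c = [["ischemia", "fever", "stenosis"],
            ["ischemia", "stenosis", "fever"],
            ["stenosis", "embolism"]]

level_a = consolidate(rounds_a)   # {"fever", "pill"}
level_c = consolidate(rounds_c)   # {"ischemia", "stenosis", "fever"}
level_c -= level_a                # drop cross-level overlap ("fever")

# Per-profile sampling: up to 10 words the patient understands.
rng = random.Random(0)
k = min(10, len(level_a))
understand_med_words = rng.sample(sorted(level_a), k)
print(sorted(understand_med_words))  # → ['fever', 'pill']
```

Because each profile draws its own sample, two patients at the same CEFR level can still understand different word sets, as noted in B.2.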
B.4 Cognitive confusion level

We categorize patients' cognitive states at admission as either normal or highly confused. The highly confused state refers to patients who appear significantly disoriented and dazed. The inclusion and design of this state were validated by an ER doctor with 13 years of clinical experience. Because such patients may gradually regain clarity following reassurance from medical staff, we model confusion as a progressive transition through three phases: high dazedness (initial), moderate dazedness (intermediate), and normal (final). This staged progression enables PatientSim to reflect a more realistic and natural reduction in confusion, avoiding abrupt behavioral shifts. The prompts for each level are shown in Table B6.

Table B4: Prompts for language proficiency levels used in PatientSim

Basic: Act as a patient with basic English proficiency (CEFR A). You must:
1. Speaking: Use only basic, simple words. Respond with short phrases instead of full sentences. Make frequent grammar mistakes. Do not use any complex words or long phrases.
2. Understanding: Understand only simple, everyday words and phrases. Struggle with even slightly complex words or sentences. Often need repetition or easy explanations to understand. Words within your level: {understand_words}. Words beyond your level: {misunderstand_words}.
3. Medical Terms: Use and understand only very simple, everyday medical words, with limited medical knowledge. Cannot use or understand complex medical terms. Need all medical terms
to be explained in very simple, everyday language. Below are examples of words within and beyond your level. You cannot understand words more complex than the examples provided within your level. Words within your level: {understand_med_words}. Words beyond your level: {misunderstand_med_words}.
IMPORTANT: If a question contains any difficult words, long sentences, or complex grammar, respond like ‘What?’ or ‘I don't understand’. Keep asking until the question is simple enough for you to answer.

Intermediate: Act as a patient with intermediate English proficiency (CEFR B). You must:
1. Speaking: Use common vocabulary and form connected, coherent sentences with occasional minor grammar errors. Discuss familiar topics confidently but struggle with abstract or technical subjects. Avoid highly specialized or abstract words.
2. Understanding: Can understand the main ideas of everyday conversations. Need clarification or simpler explanations for abstract, technical, or complex information. Words within your level: {understand_words}. Words beyond your level: {misunderstand_words}.
3. Medical Terms: Use and understand common medical terms related to general health. Cannot use or understand advanced or specialized medical terms, and require these to be explained in simple language. Below are examples of words within and beyond your level. You cannot understand words more complex than the examples provided within your level. Words within your level: {understand_med_words}. Words beyond your level: {misunderstand_med_words}.
IMPORTANT: If a question contains advanced terms beyond your level, ask for a simpler explanation (e.g., ‘I don't get it’ or ‘What do you mean?’). Keep asking until the question is clear enough for you to answer.

Advanced: Act as a patient with proficient English proficiency (CEFR C). You must:
1. Speaking: Use a full range of vocabulary with fluent, precise language. Can construct well-structured, complex sentences with diverse and appropriate word choices.
2. Understanding: Fully comprehend detailed, complex explanations and abstract concepts. Words within your level: {understand_words}.
3. Medical Terminology: Use and understand highly specialized medical terms, with expert-level knowledge of medical topics. Words within your level: {understand_med_words}.
IMPORTANT: Reflect your high-level language proficiency mainly through precise vocabulary choices rather than by making your responses unnecessarily long.

Table B5: Prompts for medical history recall levels used in PATIENTSIM
Level | Description
low | • Frequently forget important medical history, such as previous diagnoses, surgeries, or your family's medical history. • Forget even important personal health information, including current medications or medical devices in use.
high | • Accurately remember all health-related information, including past conditions, current medications, and other documented details. • Do not forget or confuse medical information. • Consistently ensure that recalled details match documented records.

Table B6: Prompts for cognitive confusion levels used in PATIENTSIM
Level | Description
normal | Clearly understand the question according to the CEFR level, and naturally reflect your background and personality in your responses.
high | The patient's initial dazed level is high. The dazedness should gradually fade throughout the conversation as the doctor continues to reassure them. Transitions should feel smooth and natural, rather than abrupt. While the change should be subtle and progressive, the overall dazed level is expected to decrease noticeably every 4-5 turns, following the instructions for each level below.
• High
Dazedness (Initial Phase)
– Repeatedly provide highly unrelated responses.
– Overly fixate on a specific discomfort or pain, and keep giving the same information regardless of the question. For example, when asked 'Are you short of breath?', fixate on another issue by saying, 'It hurts so much in my chest,' without addressing the actual question.
– Become so overwhelmed in emergency situations that you are either unable to speak or downplay your symptoms out of fear of a diagnosis, even when the symptoms are serious.
– Only recall events prior to a certain incident (e.g., before a fall) and repeatedly ask about that earlier situation.
• Moderate Dazedness (Intermediate Phase)
– Provide answers that are somewhat off-topic.
– Often mention a specific discomfort or pain unrelated to the question. However, allow yourself to move on to the core issue when gently prompted.
– Occasionally hesitate due to feeling overwhelmed in emergency situations.
• Normal Dazedness (Later Phase)
– Clearly understand the question according to the CEFR level, and naturally reflect your background and personality in your responses.
Note: Dazedness reflects the patient's state of confusion and inability to follow the conversation, independent of their language proficiency.

C Simulation of doctor-patient interaction

C.1 PATIENTSIM
The PATIENTSIM prompt consists of three main components: 1) patient profile information, 2) four persona axes, and 3) a general behavioral guideline. To help the model better contextualize both the patient's history and their current visit, we organize the profile information into two parts: patient background information (i.e., demographics, social history, previous medical history) and current visit information (i.e., present illness, chief complaint, pain level, medications taken prior to the ED visit, arrival transport, disposition, diagnosis). The four persona axes are instantiated using corresponding descriptions drawn from the tables in Appendix B.
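Concretely, the assembly of these three components into a system prompt can be sketched as follows. This is a minimal illustration with hypothetical field names and template text, not the exact PATIENTSIM template (which is given in Figure C6):

```python
# Hypothetical sketch: assembling a PATIENTSIM-style system prompt from
# background/visit information, persona axes, and a behavioral guideline.
# Field names and wording are illustrative, not the paper's exact template.

def build_patient_prompt(background: dict, visit: dict,
                         persona: dict, guideline: str) -> str:
    def fmt(section: dict) -> str:
        # Render each profile item as a "- key: value" line.
        return "\n".join(f"- {k}: {v}" for k, v in section.items())

    return (
        "Role-play this patient during an ED consultation.\n"
        f"Patient Background Information:\n{fmt(background)}\n"
        f"Current Visit Information:\n{fmt(visit)}\n"
        f"Persona:\n{fmt(persona)}\n"
        f"Guidelines:\n{guideline}"
    )

prompt = build_patient_prompt(
    {"Age": 45, "Gender": "F"},
    {"Chief complaint": "chest pain", "Pain": 7},
    {"Personality": "impatient", "Language Proficiency": "CEFR B",
     "Recall": "high", "Dazedness": "normal"},
    "Respond naturally; do not reveal disposition or diagnosis.",
)
```

The key design point is that the persona axes are plain text substituted into the prompt, so swapping a persona requires no retraining, only a different set of descriptions.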
To guide the overall simulation, we define a general behavioral guideline, as shown in Figure C7. To reinforce the assigned persona traits and maintain consistency throughout the consultation, we append reminder sentences tailored to the patient's persona. These reminders are constructed by combining relevant sentence types defined in Table C7, based on the specific traits assigned to the patient. We control verbosity through a variable sent_limit, which sets a maximum of three sentences per patient utterance. For verbose patients (i.e., those who tend to talk a lot), this limit is increased to eight sentences.

C.2 Doctor
For the automated evaluation of PATIENTSIM, we configure the doctor LLM to be capable of asking appropriate questions throughout the history-taking process, based on the prompt illustrated in Figure C8. We provide detailed guidelines to ensure that the doctor LLM covers all essential and routine questions, as recommended in the standard medical textbook [56] and by clinical experts. We set the maximum number of questions (total_idx) to 30 and update curr_idx (current round) and remain_idx (remaining rounds) at each turn to help the model track the consultation state. Since the model is expected to generate differential diagnoses based on the collected information at the end of each consultation, we supply the doctor LLM with the patient's basic information (i.e.,
gender, age, and arrival transport), which are typically known to clinicians prior to initiating history taking, to help their clinical reasoning and questioning strategy.

Prompt template for PATIENTSIM
Imagine you are a patient experiencing physical or emotional health challenges. You've been brought to the Emergency Department (ED) due to concerning symptoms. Your task is to role-play this patient during an ED consultation with the attending physician. Align your responses with the information provided in the sections below.
Patient Background Information:
• Demographics:
– Age: {age}
– Gender: {gender}
– Race: {race}
• Social History:
– Tobacco: {tobacco}
– Alcohol: {alcohol}
– Illicit drug use: {illicit_drug}
– Sexual History: {sexual_history}
– Exercise: {exercise}
– Marital status: {marital_status}
– Children: {children}
– Living Situation: {living_situation}
– Occupation: {occupation}
– Insurance: {insurance}
• Previous Medical History:
– Allergies: {allergies}
– Family medical history: {family_medical_history}
– Medical devices used before this ED admission: {medical_device}
– Medical history prior to this ED admission: {medical_history}
You will be asked about your experiences with the current illness. Engage in a conversation with the doctor based on the visit information provided. Use the described personality, language proficiency, medical history recall ability, and dazedness level as a guide for your responses. Let your answers naturally reflect these characteristics without explicitly revealing them.
Current Visit Information:
• Present illness:
– positive: {present_illness_positive}
– negative (denied): {present_illness_negative}
• ED chief complaint: {chiefcomplaint}
• Pain level at ED Admission (0 = no pain, 10 = worst pain imaginable): {pain}
• Current medications they are taking: {medication}
• ED Arrival Transport: {arrival_transport}
• ED disposition: {disposition}
• ED Diagnosis: {diagnosis}
Persona:
• Personality: {personality}
• Language Proficiency: {cefr}
• Medical History Recall Ability: {memory_recall_level}
• Dazedness level: {dazed_level}
In the consultation, simulate the patient described in the above profile, while the user plays the role of the physician. During the conversation, follow these guidelines: {behavioral_guideline}
You are now the patient. Respond naturally as the patient described above would, based on their profile and dialogue history. Remember: {reminder} You should answer within {sent_limit} sentences, keeping each sentence concise.

Figure C6: Prompt template for PATIENTSIM. Braced elements {} are substituted with patient-specific values.

Table C7: Reminder prompts for patient persona in PATIENTSIM
Persona Type | Description
Personality
Neutral | a neutral patient without any distinctive personality traits.
Distrustful | a patient who questions the doctor's expertise.
Impatient | a patient who gets easily irritated and lacks patience.
Overanxious | a patient who is excessively worried and tends to exaggerate symptoms.
Overly positive | a patient who perceives health issues as minor and downplays their severity.
Verbose | a verbose patient who talks a lot.
Language Proficiency
Basic | a patient with basic English proficiency who can only use and understand very simple language.
Intermediate | a patient with intermediate English proficiency who can use and understand everyday language well.
Advanced | a patient with proficient English proficiency who can use and understand highly complex, detailed language, including advanced medical terminology.
Medical history recall level
low | you have significantly limited medical history recall ability, often forgetting even major histories.
high | you have a clear and detailed ability to recall medical history.
Cognitive confusion level
normal
acts without confusion.
high | at first, you should act like a highly dazed and extremely confused patient who cannot understand the question and gives highly unrelated responses. Gradually reduce your dazed state throughout the conversation, but only with reassurance from the doctor.

General behavioral guideline for PATIENTSIM
1. Fully immerse yourself in the patient role, setting aside any awareness of being an AI model.
2. Ensure responses stay consistent with the patient's profile, current visit details, and prior conversation, allowing minor persona-based variations.
3. Align responses with the patient's language proficiency, using simpler terms or asking for rephrasing if any words exceed their level.
4. Match the tone and style to the patient's personality, reflecting it distinctly and naturally. Do not explicitly mention the personality.
5. Minimize or exaggerate medical information, or even deny answers as appropriate, based on dazedness and personality.
6. Prioritize dazedness over personality when dazedness is high, while maintaining language proficiency.
7. Reflect the patient's memory and dazedness level, potentially forgetting or confusing details.
8. Keep responses realistic and natural. Avoid mechanical repetition and a robotic or exaggerated tone.
9. Use informal, everyday language.
10. Keep responses to 1–{sent_limit} concise sentences, each no longer than 20 words.
11. Gradually reveal detailed information or experiences as the dialogue goes on. Avoid sharing all possible information without being asked.
12. Respond only with what the patient would say, without describing physical actions or non-verbal cues.
13. Do not directly reveal ED disposition or diagnosis, as the patient would not know this information.

Figure C7: Prompt of general behavioral guideline for PATIENTSIM.

Prompt template for doctor role-playing
You are playing the role of a kind and patient doctor.
Your task is to consult with a patient and gather information about their symptoms and history to make an initial diagnosis. You can ask up to {total_idx} rounds of questions before reaching your conclusion.
Guidelines:
1. Gather the patient's medical history, which typically includes:
• Chief Complaint: Use the OLD CARTS framework (Onset, Location, Duration, Characteristics, Alleviating/Aggravating factors, Radiation/Relieving factors, Timing, Severity) implicitly, without explicitly mentioning each step.
• Basic Information: Age, gender, and other relevant demographics.
• Past Medical History: Previous illnesses, surgeries, or chronic conditions.
• Allergies: Known allergies to medications, foods, or other substances.
• Medications: Current or recent medications, including supplements.
• Social History: Lifestyle factors such as smoking, alcohol use, drug use (including illicit substances), and mental health.
• Family History: Significant or hereditary health conditions present in the family.
2. Ask concise, clear questions. Only ask one thing at a time.
3. Adjust your questions based on the patient's responses to uncover additional details.
4. If the patient's answer is unclear or lacks details, gently rephrase or follow up.
5. Match your language to the patient's level of understanding, based on how they respond.
6. Provide emotional support by offering reassurance when appropriate. Avoid mechanical repetition.
7. Your responses should be 1–3 sentences long.
8. Respond appropriately if the patient asks a question.
9. Avoid asking about lab test results or medical imaging.
10. Avoid making premature diagnoses without sufficient information.
11. Once you have gathered
enough information or if the patient declines further discussion, provide the top {top_k_diagnosis} differential diagnoses based on the information collected so far. Use the following format: "[DDX] (list of differential diagnoses)"
The patient's basic information is as follows:
• gender: {gender}
• age: {age}
• ED arrival transport: {arrival_transport}
This is round {curr_idx}, and you have {remain_idx} rounds left. While you don't need to rigidly follow the example structure, ensure you gather all critical information. You should ask only one question per turn. Keep each sentence concise.

Figure C8: Prompt template used for simulating a doctor. Braced elements {} about patient information are substituted with patient-specific values, while curr_idx (current round) and remain_idx (remaining rounds) track the consultation state.

D Experimental settings

D.1 Model configurations and dataset details
Model configurations We select eight LLMs to serve as the backbone for PATIENTSIM, including API-based models (Gemini-2.5-flash [10], Gpt-4o-mini [43]) and open-source models (Llama 3.1 8B and 70B, Llama 3.3 70B [16], Qwen2.5 7B and 72B [48]). To comply with PhysioNet's credentialed data use agreement (https://physionet.org/news/post/gpt-responsible-use), Gpt-4o-mini was accessed via Azure OpenAI Service, and Gemini-2.5-flash via Google Cloud's Vertex AI. Open-source models were hosted using vLLM. Models with 70B and 72B parameters ran on four NVIDIA RTX A6000 GPUs, while the 7B and 8B models ran on a single NVIDIA RTX A6000 GPU. Each consultation session took approximately 3 minutes to complete, on average. For all simulations, we fixed the random seed to 42 and set the temperature to 0.7 for both patient (PATIENTSIM) and doctor models to encourage variability while maintaining coherence. The evaluator model ran with a temperature of 0 to ensure deterministic and stable assessments.
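These decoding settings can be captured in a small configuration sketch. Parameter names below are illustrative; the actual vLLM or API parameters may differ:

```python
# Sketch of the simulation's sampling settings (Appendix D.1): seed 42,
# temperature 0.7 for the patient and doctor simulators, temperature 0
# for the evaluator. A plain dict, not a specific serving framework's API.

SEED = 42

ROLE_SAMPLING = {
    "patient":   {"temperature": 0.7, "seed": SEED},  # variability with coherence
    "doctor":    {"temperature": 0.7, "seed": SEED},
    "evaluator": {"temperature": 0.0, "seed": SEED},  # deterministic assessments
}

def sampling_params(role: str) -> dict:
    """Look up the decoding settings for a given simulation role."""
    return ROLE_SAMPLING[role]
```

Keeping the evaluator at temperature 0 while the simulators run at 0.7 separates the source of variability (the dialogue) from the measurement (the assessment).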
Dataset details Our dataset consists of 170 patient profiles, divided into two subsets: 108 profiles for evaluating RQ1 (i.e., persona evaluation) and 52 profiles for evaluating RQ2 (i.e., factual accuracy) and RQ3 (i.e., clinical plausibility). For persona evaluation, we randomly assigned 37 distinct persona combinations to the 108 profiles. Each individual persona attribute (e.g., each personality type) is represented at least eight times across the dataset. For factual accuracy and clinical plausibility evaluations, we standardized the patient persona to have a neutral personality, intermediate language proficiency, high recall, and normal mental status. This is done to isolate and focus on the informational aspects without influence from varied personas.

D.2 LLM evaluation

D.2.1 RQ1: Do LLMs naturally reflect diverse persona traits in their responses?
To assess the fidelity of persona traits in doctor-patient consultations, we design an evaluation prompt based on PROMETHEUS [28], as shown in Figure D9. The LLM evaluator, Gemini-2.5-flash, receives the target conversation, the patient's persona, and a scoring rubric. The evaluator assigns a score (1–4) along with feedback for each criterion, based on predefined descriptions. The rubric assesses the following:
• The simulated patient's personality is consistently and accurately reflected during the interaction.
• The patient's language use (vocabulary, grammar, fluency) is appropriate to their assigned language proficiency level.
• The patient's ability to recall medical and personal information is consistent with
their assigned recall level (e.g., low or high).
• The patient's coherence and clarity of thought match the assigned level of cognitive confusion.
• The patient's overall communication style matches what would be expected from a real ED patient.

Each simulated patient is assigned four distinct persona axes: personality, language proficiency, medical history recall level, and cognitive confusion level. The first four criteria evaluate fidelity to these individual axes, using persona descriptions provided in the prompt. Note that highly confused patients are limited to a neutral personality, intermediate language proficiency, and high recall, to avoid overlap between confusion and other axes (e.g., impatient personality, low language proficiency, or low recall). In this context, the first three criteria are evaluated only for patients with a normal mental state, while the fourth criterion is evaluated only for highly confused patients. The final criterion, realism, is evaluated for all patients, regardless of cognitive status. It reflects the overall authenticity of the patient's communication, considering all assigned traits.

Prompt for dialog fidelity evaluation
###Task Description:
The conversation between a patient and a doctor, the patient's profile, and a scoring rubric with evaluation criteria are given. The patient in the conversation is characterized based on the given profile.
1. Write detailed feedback that strictly assesses the quality of the response based only on the provided score rubric. Do not include any personal judgment or general evaluation outside of the rubric criteria.
2. After the feedback, provide a score that is an integer between 1 and 4, strictly referring to the rubric descriptions.
3. The output string format should look as follows: "[REASON]: write a brief feedback for criteria, [RESULT]: an integer number between 1 and 4"
4. Do not generate any other opening, closing, and explanations.
###The Conversation to Evaluate: {conversation}
###Patient Persona: {persona}
###Score Rubrics: [{criteria}]
Score 1: {score1_description}
Score 2: {score2_description}
Score 3: {score3_description}
Score 4: {score4_description}
###Feedback:

Figure D9: Prompt template used for dialogue fidelity evaluation.

D.2.2 RQ2: Do LLMs accurately derive responses based on the given profile?
Notation The $i$-th patient profile, denoted $P_i = \{x_k^i\}_{k=1}^{K}$, consists of $K$ predefined items, among $N$ total profiles. The dialogue between the physician and PATIENTSIM configured with profile $P_i$ is represented as $D_i$. Within $D_i$, PATIENTSIM's utterances over $T$ turns are represented as $U_i = \{u_t^i\}_{t=1}^{T}$, where $u_t^i$ is the utterance at turn $t$. Each utterance $u_t^i$ may contain multiple sentences, denoted $u_t^i = \{s_{tm}^i\}_{m=1}^{M}$, where $M$ is the number of sentences in utterance $u_t^i$. We classify $s_{tm}^i$ as supported if it is related to at least one profile item $x_k^i$, and as unsupported if it includes any information unrelated to the profile.

Sentence-level evaluation For sentence-level evaluation, each evaluation step is performed by providing input to the LLM sentence classifier. The classifier receives the preceding conversation history (i.e., the dialogue up to the current sentence $s_{tm}^i$) and the sentence $s_{tm}^i$ itself as the user prompt, along with step-specific system instructions. The evaluation consists of three steps:
1. Classify sentence type: Each sentence $s_{tm}^i$ is categorized into one of five types: politeness, emotion, inquiry, meta-information, or
information ($C(s_{tm}^i)$). This step follows the prompt defined in Figure D10.
2. Identify the related profile items: For each sentence $s_{tm}^i$ classified as information, we identify which of the patient's profile items $x_k^i$ it relates to. The output is a binary vector $R(s_{tm}^i) = [r_1^i, r_2^i, \ldots, r_K^i]$, where $r_k^i = 1$ if $s_{tm}^i$ is related to profile item $x_k^i$, and $0$ otherwise. This step is guided by the instructions in Figure D11.
3. Verify factual accuracy: For each relevant profile item (i.e., where $r_k^i = 1$), we verify whether $s_{tm}^i$ is consistent with $x_k^i$ using a Natural Language Inference (NLI) process, which checks for entailment or contradiction. Unlike the previous steps, this one includes the relevant profile items (i.e., all $x_k^i$ where $r_k^i = 1$) as part of the input. This evaluation is performed using the prompt shown in Figure D12.

System prompt template for sentence classification
Instruction: You are a helpful medical assistant. Please classify the patient's current utterance based on the given dialogue history. Also, generate an explanation for your answer. Output one of the following categories: 'politeness', 'emotion', 'inquiry', 'meta-information', or 'information', where:
• 'politeness': Expresses courtesy, greetings, apologies, or gratitude.
• 'emotion': Expresses emotional concerns (such as worry, fear, sadness, or frustration) without providing medical facts.
• 'inquiry': Asks a question, requests guidance, or seeks clarification.
• 'meta-information': Reflects self-awareness, memory-related uncertainty, personal reasoning, or commentary on the conversation itself.
• 'information': Any descriptive content about symptoms, medical history, medications, lifestyle, or other relevant details.
Note: If the utterance includes any informative content, classify it as 'information,' even if it also contains elements of other categories such as emotion, politeness, or uncertain/speculative language.
Output must be a valid JSON object without any extra text, comments, or explanation. The output must be parseable by Python's json.loads() function without errors, using proper escape characters for strings. The JSON structure must follow this format: {"explanation": reason for the prediction, "prediction": "politeness", "emotion", "inquiry", "meta-information", or "information"}

Figure D10: System prompt template for sentence classification.

Prompt template for sentence-level evaluation
Instruction: You are a helpful medical assistant. Your task is to determine whether each category of information from the patient's profile is mentioned in the patient's current utterance. Use the dialogue history as context. For each category, output:
• '1' if the information is mentioned in the current utterance.
• '0' if it is not mentioned.
Additionally, provide a brief explanation for your decision. Please evaluate whether the following categories are relevant to the patient's current utterance: 'age', 'gender', 'race', 'tobacco', 'alcohol', 'illicit_drug', 'sexual_history', 'exercise', 'marital_status', 'children', 'living_situation', 'occupation', 'insurance', 'allergies', 'family_medical_history', 'medical_device', 'medical_history', 'present_illness', 'chief_complaint', 'pain', 'medication', 'arrival_transport', 'diagnosis'.
Output must be a list of valid JSON dictionaries, without any extra text, comments, or explanation. The output must be parseable by Python's json.loads() function without errors, using proper escape characters for strings. Each dictionary must follow this format:
{
"category": "name of the category (without any explanation)",
"explanation": "Reason for the prediction",
"prediction": 0 or 1
}
Example output for some
categories (apply the same format to all categories):
[
{
"category": "age",
"explanation": "The utterance 'I am 45 years old' mentions the patient's age.",
"prediction": 1
},
{
"category": "gender",
"explanation": "The utterance does not mention the patient's gender.",
"prediction": 0
}
]

Figure D11: System prompt template for identifying the related profile items per sentence.

System prompt template for verifying factual accuracy per sentence
Instruction: You are a helpful medical assistant. Your task is to evaluate whether a patient's current utterance is entailed, contradicted, or neither by each item in their medical profile. Also, generate an explanation for your answer. Focus on the information that is explicitly mentioned in the given profile. Use the dialogue history to understand the utterance's context. The profile is provided as a list, where each item represents a distinct category of information. For each profile item, output:
• '1': if the utterance is entailed by the profile.
• '0': if the utterance is neither entailed nor contradicted by the profile.
• '-1': if the utterance contradicts the profile.
Output must be a list of valid JSON dictionaries, without any extra text, comments, or explanation. The output must be parseable by Python's json.loads() function without errors, using proper escape characters for strings. Each dictionary must follow this format:
{
"profile": "the original profile information",
"explanation": "Reason for the prediction",
"entailment_prediction": 1 or 0 or -1
}
Example output:
[
{
"profile": "Age: 30",
"explanation": "The utterance 'I am 30 years old' matches the profile.",
"entailment_prediction": 1
},
{
"profile": "Gender: Female",
"explanation": "The utterance does not mention gender.",
"entailment_prediction": 0
}
]

Figure D12: System prompt template for verifying factual accuracy per sentence, for all relevant profile categories.
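Putting the three steps together, the sentence-level pipeline can be sketched as follows. Here llm_call is a stand-in for the actual classifier calls with the Figure D10–D12 prompts, and the stub and helper names are our own:

```python
import json

# Sketch of the three-step sentence-level evaluation (Appendix D.2.2),
# with a stubbed LLM. The real prompts are those in Figures D10-D12.

def evaluate_sentence(sentence, history, llm_call):
    # Step 1: classify sentence type (Figure D10). Outputs are required
    # to be valid JSON, so json.loads is sufficient.
    step1 = json.loads(llm_call("classify", history, sentence))
    if step1["prediction"] != "information":
        return {"type": step1["prediction"], "checks": []}
    # Step 2: identify related profile items as a 0/1 vector (Figure D11).
    step2 = json.loads(llm_call("relate", history, sentence))
    related = [d["category"] for d in step2 if d["prediction"] == 1]
    # Step 3: NLI check of the sentence against each related item (Figure D12).
    step3 = json.loads(llm_call("verify", history, sentence, related))
    return {"type": "information",
            "checks": [(d["profile"], d["entailment_prediction"]) for d in step3]}

# Toy stub standing in for the LLM classifier, for illustration only.
def stub_llm(task, history, sentence, related=None):
    if task == "classify":
        return '{"explanation": "mentions age", "prediction": "information"}'
    if task == "relate":
        return '[{"category": "age", "explanation": "age mentioned", "prediction": 1}]'
    return '[{"profile": "Age: 45", "explanation": "matches", "entailment_prediction": 1}]'

result = evaluate_sentence("I am 45 years old.", [], stub_llm)
```

Non-information sentences short-circuit after Step 1, so only informative content incurs the relation and NLI calls.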
Dialogue-level evaluation In dialogue-level evaluation, we extract a derived profile $\hat{P}_i = \{\hat{x}_1^i, \hat{x}_2^i, \ldots, \hat{x}_K^i\}$ from the dialogue $D_i$ and compare it to the original profile $P_i$. For each item such that both the original and derived items are present, we compute the semantic similarity between $x_j^i$ and $\hat{x}_j^i$. Profile extraction is carried out using the prompt in Figure D13, and the semantic similarity per item is computed using the method described in Figure D14.

Prompt template for extracting the patient's profile from the dialogue history
[System Prompt]
Instruction: You are an AI assistant designed to extract structured medical information from a patient-doctor conversation. Your task is to analyze the conversation content and extract all relevant information into predefined categories based on the patient's responses. Include only information explicitly mentioned in the conversation, unless otherwise specified. Return the extracted information in the following valid JSON format.
Field Definitions: {field_definition}
Output Format (JSON): {output_format}
Guidelines:
1. Extract each field from the entire conversation with complete accuracy.
2. Keep each field concise, using keyword-based phrases without full sentences or narrative descriptions.
3. Express information briefly, avoiding verbs, pronouns, or unnecessary words.
4. If a field contains multiple values, combine them into a single string separated by semicolons.
5. Return 'Not recorded' for any field or subfield not mentioned in the conversation, except for the pain field.
6. For
the pain field, if patients do not explicitly state a score, predict the score (0–10) based on their description and note it as predicted (e.g., '3 (predicted)').
7. Maintain the exact JSON structure without adding or removing fields.
[User Prompt]
Conversation: {conversation}

Figure D13: Prompt template for extracting a patient profile from the given consultation.

Prompt template for evaluating information consistency
[System Prompt]
Instruction: You are a helpful medical assistant. Your task is to evaluate the consistency between the Ground Truth (GT) and Prediction profile for each item. Also, generate an explanation for your answer. The GT and Prediction are provided as dictionaries. For each key, rate the consistency on a scale from 1 to 4, where:
• '4': The prediction contains the exact or semantically equivalent value for the GT.
• '3': The prediction contains a partially correct or semantically similar value for the GT.
• '2': The prediction contains only a small part of the value or a distantly related value for the GT.
• '1': The prediction is completely incorrect compared to the GT.
Allow for differences in text expression if the meaning is the same or very similar, using medical knowledge to assess semantic equivalence. Output must be a valid JSON object, without any additional text or comments. The output JSON must be loadable using Python's json.loads() function with proper escape characters. The key of the output JSON must be the key of the input GT dictionary, and the value must be a string formatted as '[REASON]: write a brief feedback for criteria, [RESULT]: an integer number between 1 and 4'.
[User Prompt]
GT_profile: {profile_data}
Prediction_profile: {predict_dict}

Figure D14: Prompt template for evaluating the consistency of each patient profile item.

D.2.3 RQ3: Can LLMs reasonably fill in the blanks?
For this part, we assess the clinical plausibility of unsupported sentences.
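The '[REASON]: ..., [RESULT]: n' strings required by the rubric prompts in Figures D9 and D14 can be parsed with a small helper; a sketch (the regex and function name are our own):

```python
import re

# Sketch: parse "[REASON]: ..., [RESULT]: n" rating strings returned by
# the evaluator prompts (Figures D9 and D14). Helper name is illustrative.

RATING_RE = re.compile(
    r"\[REASON\]:\s*(.*?),?\s*\[RESULT\]:\s*([1-4])\s*$", re.DOTALL
)

def parse_rating(text: str):
    """Return (feedback, score) or raise if the string is malformed."""
    m = RATING_RE.search(text)
    if not m:
        raise ValueError(f"unparseable rating: {text!r}")
    return m.group(1).strip(), int(m.group(2))

reason, score = parse_rating(
    "[REASON]: The patient consistently reflects an impatient tone, [RESULT]: 4"
)
```

Constraining the score to a single digit in 1–4 keeps parsing unambiguous even when the feedback itself contains commas.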
We begin by identifying the information sentences, as described in Step 1 of the sentence-level evaluation (Appendix D.2.2). Each information sentence is then examined to determine whether it contains any undefined information, using the criteria outlined in Figure D15. These criteria have been validated by medical experts to ensure clinical relevance. After unsupported sentences are finalized (see Sec. 6.1.3 in the main paper), the selected sentences are rated for plausibility on a 4-point scale by an LLM evaluator, following the guidelines in Figure D16.

Prompt template for determining unsupported sentences
Instruction: You are a helpful medical assistant. Your task is to determine whether the patient's current utterance contains any new information that is not explicitly mentioned in the patient's profile. Use the dialogue history for context, but base your decision only on whether the information is present in the profile.
Guidance:
1. If a patient restates existing information from their profile in more general or equivalent terms, it is not new information (e.g., simplifying 'coronary artery disease' to 'heart problem').
2. Any added specific detail (e.g., 'sharp pain' or 'pain in the lower back' when the profile only says 'pain') should be considered new.
3. Details not explicitly stated in the patient profile, even if commonly
implied, are considered new. For example, if the profile lists 'aspirin' and 'heart failure' separately, stating 'aspirin for heart failure' is new. Similarly, if only medication names are listed without frequency, stating 'I take aspirin daily' is new.
4. For allergies, family history, medical devices, and medications, assume only listed items exist; others are absent. Thus, stating an unlisted item is absent is not new information.
5. If a statement includes both known and new details, consider it new.
Output:
• '1' if the current utterance contains any new information.
• '0' if the current utterance contains no new information.
Output must be a valid JSON object without any extra text, comments, or explanation. The output must be parseable by Python's json.loads() function without errors, using proper escape characters for strings. The JSON structure must follow this format: {"explanation": reason for the prediction, "prediction": 1 or 0}

Figure D15: Prompt template for detecting unsupported sentences.

Prompt template for plausibility rating
Instruction: You are a helpful medical assistant. Your task is to evaluate the clinical and contextual plausibility of the patient's utterance based on their profile and dialogue history. Also, generate an explanation for your answer. Please rate the likelihood on a scale from 1 to 4, where:
• '4': The utterance is highly consistent with the patient's profile and dialogue history, with strong clinical and contextual support.
• '3': The utterance is plausible and aligns reasonably well with the patient's profile and dialogue history, though minor inconsistencies or lack of specific supporting details may exist.
• '2': The utterance is unlikely, with notable inconsistencies or limited support from the patient's profile or dialogue history.
• '1': The utterance clearly contradicts the patient's profile or dialogue history, with no reasonable clinical or contextual basis.
Output must be a valid JSON object without any extra text, comments, or explanation. The output must be parseable by Python’s json.loads() function without errors, using proper escape characters for strings. The JSON structure must follow this format: {"explanation": "Reason for the rating", "likelihood_rating": 1 to 4}

Figure D16: Prompt template for plausibility rating on a 4-point scale.

E Human evaluation

E.1 Clinician recruitment for evaluation

To evaluate the quality of our patient simulator, we recruited four general practitioners through Ingedata [10], an AI data annotation company. Two individuals have 4 years and two have 6 years of clinical experience post-physician licensure. The latter two also hold nursing licenses, with 13 and 17 years of nursing experience, respectively. They are fluent in English and have ED experience, three for 3 years and one for 1 year. All four clinicians have received PhysioNet credentials. We paid a total of €2,500 for 45 hours of evaluation work conducted by the four general practitioners.

E.2 Persona fidelity evaluation

Each clinician conducted consultations with 27 distinct virtual patients served through PATIENTSIM, which uses Llama 3.3 70B as its backbone. After each session, clinicians submitted 1–3 likely differential diagnoses and completed a survey rating PATIENTSIM’s overall quality. We used Streamlit [11] to display each patient’s clinical information and assigned persona, paired with an interactive chat interface for
consultations (see Figure E17 for screenshots). In response to clinician feedback, we added Review of Systems checkboxes [12] to mirror real-world clinical workflows. Consultation logs, diagnoses, and survey responses were stored in Google Sheets for easy access and analysis. Each virtual patient had a unique clinical profile, resulting in a total of 108 distinct patient profiles evaluated across all clinicians. We randomly assigned 37 unique persona configurations of PATIENTSIM across these profiles to ensure diverse interactions.

E.3 Plausibility evaluation

Each clinician assessed 39 pre-generated doctor-patient consultations (approximately 616 unsupported sentences), where the doctor role was simulated using Gpt-4o-mini and the patient role was simulated by PATIENTSIM, based on Llama 3.3 70B. Three different clinicians were assigned to evaluate each consultation to allow inter-rater agreement analysis and enhance the robustness of the human evaluation. We used Streamlit and Google Sheets, as in Appendix E.2, to present the data and collect responses. Figure E18 illustrates the plausibility evaluation interface: full patient information appears on the left, and the complete consultation history, with unsupported sentences highlighted, is displayed on the right. Clinicians rated the clinical plausibility of each highlighted sentence on a 4-point scale.

[10] https://www.ingedata.ai/
[11] https://streamlit.io/
[12] https://health.uconn.edu/plastic-surgery/wp-content/uploads/sites/132/2017/06/review_of_systems.pdf

Figure E17: Screenshot of Streamlit provided to clinicians for PATIENTSIM review.

Figure E18: Screenshot of Streamlit provided to clinicians for plausibility evaluation.

F Experimental results

F.1 RQ1: Do LLMs naturally reflect diverse persona traits in their responses?

F.1.1 Additional results of LLM evaluation

We analyze the performance of various LLMs across four persona-axis criteria, as specified in Table F8.
Most models struggle to simulate negative emotions, such as distrustfulness and impatience. DeepSeek-R1-distill-Llama-70B (i.e., DeepSeek-70B) particularly underperforms on personas characterized by low recall or cognitive confusion, which often require refusing to provide a clear answer. These limitations may stem from training strategies that prioritize accurate, helpful responses or emphasize safety, avoiding potentially harmful outputs.

Table F8: Persona fidelity evaluation of various LLMs across four criteria: Personality, Language, Recall, and Confused. Each criterion evaluates the fidelity of its axis on a 4-point scale. Detailed results are shown for each type.

                               Personality                                                     Language            Recall        Confused
Model                          Neutral  Distrust  Impatient  Overanxious  Ov. positive  Verbose  A     B     C     High   Low    High
Gemini-2.5-flash               4.00     4.00      3.76       4.00         4.00          3.88     3.60  3.38  3.65  3.98   3.31   3.38
Gpt-4o-mini                    4.00     2.76      2.76       4.00         4.00          4.00     3.43  3.41  3.84  3.98   3.59   3.88
DeepSeek-R1-distill-Llama-70B  4.00     4.00      3.41       4.00         3.94          3.88     3.34  3.62  3.81  3.94   2.92   2.50
Qwen2.5-72b-instruct           4.00     1.71      2.18       4.00         4.00          4.00     3.43  3.65  4.00  4.00   3.25   3.50
Llama3.3-70b-instruct          4.00     4.00      4.00       3.88         3.94          3.71     3.31  3.15  3.77  4.00   3.57   4.00
Llama3.1-70b-instruct          4.00     2.71      3.29       4.00         3.94          4.00     3.26  3.38  3.94  4.00   3.25   4.00
Llama3.1-8b-instruct           4.00     2.24      3.18       4.00         4.00          3.82     3.29  3.15  3.45  3.98   3.43   4.00
Qwen2.5-7b-instruct            4.00     1.88      1.94       4.00         3.62          4.00     3.46  3.74  3.26  3.98   2.67   3.50

Table F9 presents overall consultation statistics between the doctor LLM and PATIENTSIM, including differential diagnosis (DDx) accuracy. While DDx performance does not directly measure PATIENTSIM’s capabilities, it reflects how different patient personas influence consultation complexity. After each dialogue, the doctor LLM
is prompted to provide its top five differential diagnoses. This prediction is considered correct if the ground-truth diagnosis appears in this list. To ensure consistent evaluation despite free-text outputs, we use the prompt shown in Figure F19. On average, the doctor LLM completed the consultation and provided a final DDx within 15 turns. For personas such as verbose or advanced language proficiency, where patients provided more detailed and structured information, the model reached conclusions more quickly. Across all settings, the doctor LLM followed instructions to ask concise, focused questions, typically within three sentences. The length of PATIENTSIM’s responses differed significantly by persona. In particular, verbose and advanced personas produced substantially longer utterances, consistent with their tendency to offer elaborate or highly articulate explanations.

Table F9: Overall statistics of consultations between the doctor LLM and PATIENTSIM based on Llama 3.3 70B. # of Turns refers to the average number of dialogue turns. # Sents/Utt and # Words/Sent indicate the average number of sentences per utterance and the average number of words per sentence, respectively, for both the doctor LLM and PATIENTSIM. DDx represents the differential diagnosis accuracy of the doctor LLM. Results are averaged over 108 distinct consultations.
                                                          Doctor LLM             PATIENTSIM
Persona Axis                  Category         # of Turns  Sents/Utt  Words/Sent  Sents/Utt  Words/Sent  DDx
Personality                   neutral          15.83       2.18       11.95       2.69       8.61        0.71
                              distrustful      15.53       2.81       13.90       3.21       10.94       0.65
                              impatient        13.29       2.27       12.92       3.02       8.56        0.59
                              overanxious      13.82       2.39       14.02       3.07       13.96       0.94
                              overly positive  13.56       2.18       12.87       2.74       12.15       0.38
                              verbose          10.71       2.37       14.86       8.21       27.63       0.59
Language Proficiency          basic            16.06       2.31       12.21       4.20       4.02        0.66
                              intermediate     13.29       2.37       13.20       3.83       10.99       0.67
                              advanced         12.39       2.40       14.78       3.17       27.01       0.61
Medical history recall level  high             14.40       2.33       13.41       3.62       13.75       0.68
                              low              13.39       2.39       13.24       3.91       12.86       0.61
Cognitive confusion level     high             16.62       2.26       11.68       3.03       6.98        0.62
                              normal           13.71       2.37       13.46       3.82       13.84       0.65

DDx performance varies most across the personality axis. The model shows a notable decline under the overly positive persona, likely due to PATIENTSIM downplaying symptoms, which led to less serious diagnoses. The impatient persona, marked by irritability and uncooperative behavior, and the verbose persona, with excessive or sometimes irrelevant detail, also hinder diagnostic accuracy. In contrast, the medical history recall level has a more limited impact, as present symptoms are often sufficient for DDx even without detailed historical information. Regarding cognitive confusion, direct comparisons between normal and high confusion levels should be interpreted cautiously, since other traits remain uncontrolled. Nonetheless, compared to the neutral baseline (DDx = 0.71), performance drops to 0.62 for highly confused patients, highlighting the diagnostic challenges they present.

Prompt template for evaluating DDx performance

Your task is to evaluate whether the true diagnosis is included in the predicted differential diagnoses.
The predicted diagnosis can be more specific or detailed than the true diagnosis (e.g., “Small Bowel Obstruction” for “Bowel Obstruction” or “Acute Pyelonephritis” for “Pyelonephritis” is acceptable), but it must not be broader than the ground truth (GT). A broader diagnosis (e.g., “Pulmonary problem” for “Pneumonia”) is considered incorrect. Answer with
Y or N only, without further explanation.
Predicted differential diagnoses: {ddx}
True diagnosis: {ans}
Answer [Y/N]:

Figure F19: Prompt template for evaluating differential diagnosis accuracy.

F.1.2 Additional results of human evaluation

For human evaluation, clinicians conducted consultations with a total of 108 virtual patients served through PATIENTSIM, and then submitted their top 3 differential diagnoses and a survey about PATIENTSIM’s overall quality. Table F10 shows overall consultation statistics between the clinicians and PATIENTSIM, including DDx accuracy. Both the clinicians and the doctor LLM (from Table F9) interact with the same version of PATIENTSIM, using identical patient information and persona combinations, enabling direct comparison. Clinicians tend to ask more concise and direct questions, averaging 1.8 sentences per utterance compared to 2.4 for the LLM, and 9.1 words per sentence versus 13.3. Rather than relying on longer utterances, clinicians gather information through a greater number of shorter turns. This brevity also prompts PATIENTSIM to respond more concisely. While DDx trends align with those observed with the doctor LLM, clinicians consistently achieve more accurate diagnoses. These results highlight a notable gap in history-taking and clinical reasoning capabilities between human clinicians and the doctor LLM. Figure F20 provides example consultations across the various personas.

Table F10: Overall statistics of consultations between clinicians and PATIENTSIM based on Llama 3.3 70B. # of Turns refers to the average number of dialogue turns. # Sents/Utt and # Words/Sent indicate the average number of sentences per utterance and the average number of words per sentence, respectively, for both the clinicians and PATIENTSIM. DDx represents the differential diagnosis accuracy of the clinicians. Results are averaged over 108 distinct consultations.
                                                          Clinician              PATIENTSIM
Persona Axis                  Category         # of Turns  Sents/Utt  Words/Sent  Sents/Utt  Words/Sent  DDx
Personality                   plain            21.17       1.50       8.76        2.57       7.66        0.83
                              distrust         17.82       1.99       8.72        3.12       11.66       0.71
                              impatient        18.88       1.80       9.83        2.89       9.04        0.76
                              overanxious      16.29       1.97       9.38        2.97       13.80       0.88
                              overly positive  20.25       1.72       8.51        2.59       10.83       0.69
                              verbose          13.35       1.93       9.46        7.85       13.82       0.71
Language Proficiency          basic            21.40       1.62       8.80        3.81       4.21        0.74
                              intermediate     17.19       1.80       8.92        3.64       10.66       0.83
                              advanced         15.77       1.99       9.66        3.33       18.82       0.71
Medical history recall level  high             18.05       1.79       8.77        3.53       10.92       0.77
                              low              18.25       1.81       9.45        3.69       10.91       0.76
Cognitive confusion level     high             22.12       1.48       9.00        3.10       5.88        0.88
                              normal           17.83       1.82       9.10        3.65       11.32       0.76

Figure F20: Example consultations for various persona options (excerpts of clinician-PATIENTSIM dialogues for Profile #1 under the neutral, verbose, overly positive, impatient, highly confused, advanced-language, low-recall, and basic-language persona settings).

Table F11: Gwet’s AC1 and AC2 agreement between clinician and Gemini-2.5-flash evaluations, with 95% confidence intervals estimated via 1,000 bootstrap
iterations.

Metric                        Gwet AC1 (95% CI)      Gwet AC2 (95% CI)
Personality                   0.897 (0.830, 0.949)   0.957 (0.919, 0.987)
Language proficiency          0.347 (0.218, 0.471)   0.818 (0.745, 0.876)
Medical history recall level  0.693 (0.585, 0.786)   0.916 (0.865, 0.957)
Cognitive confusion level     1.000 (1.000, 1.000)   1.000 (1.000, 1.000)
Realism                       0.321 (0.211, 0.437)   0.884 (0.861, 0.906)

Table F11 presents the agreement between human evaluations and Gemini-2.5-flash evaluations across five criteria on the same set of conversations, supporting the reliability of using Gemini for automatic evaluation. After each consultation session with PATIENTSIM, clinicians rated its overall quality for each case. To ensure a fair comparison, Gemini was provided with the same conversation logs (between the clinician and PATIENTSIM) and asked to rate the same criteria on a 4-point scale (Appendix D.2.1). When comparing clinician ratings and Gemini ratings using Gwet’s AC1, we observed high agreement on personality and cognitive confusion level (AC1 > 0.8), and moderate agreement on recall level (AC1 = 0.693). However, agreement is relatively lower for language proficiency and realism. While AC1 is designed for nominal (unordered) data and emphasizes exact matches, it can be overly strict for ordinal data. To better reflect the ordinal nature of the 4-point scale, we also computed Gwet’s AC2, which is more appropriate for ordered categories. Using AC2, we observed high agreement (AC2 > 0.8) across all criteria, indicating strong consistency between clinician and Gemini evaluations when the ordinal structure of the ratings is taken into account. These results indicate that the automatic evaluations align well with human judgment across all five criteria, with particularly strong alignment on the personality and confusion axes, even under the stricter AC1 metric.

F.2 RQ2: Do LLMs accurately derive responses based on the given profile?
Table F12 presents a detailed sentence-level factuality evaluation across eight LLMs. This table highlights that larger LLMs tend to generate more factually consistent clinical content, with higher rates of supported and entailed statements and fewer contradictions. While smaller models remain competitive, they show a slightly higher tendency to introduce contradictory information.

Table F12: Sentence-level factuality evaluation across eight LLMs, as assessed by Gemini-2.5-flash (without normalization). # of Utter and # of Sent refer to the total number of model utterances and sentences, respectively. # of Info denotes the number of sentences categorized as informational. Supported statements refer to sentences that relate to at least one item in the given profile. Unsupported statements include at least one piece of information that is not explicitly mentioned in the profile. Entail and Contradict are subsets of the supported statements.

Model                          # of Utter  # of Sent  # of Info  # of Supported  # of Unsupported  # of Entail  # of Contradict
Gemini-2.5-flash               889         2,286      2,220      1,695           705               1,659        36
Gpt-4o-mini                    786         1,937      1,852      1,331           795               1,287        44
DeepSeek-R1-distill-Llama-70B  806         1,657      1,614      1,225           679               1,186        39
Qwen2.5-72b-instruct           824         1,820      1,774      1,201           839               1,146        55
Llama3.3-70b-instruct          806         2,180      2,087      1,654           817               1,623        31
Llama3.1-70b-instruct          699         1,946      1,842      1,493           745               1,438        55
Llama3.1-8b-instruct           742         1,774      1,672      1,284           826               1,210        74
Qwen2.5-7b-instruct            877         1,579      1,558      1,092           712               1,024        68

Llama 3.3
70B generated the fewest contradictory responses, leading us to select it as our final model. To better understand these contradiction errors, we analyzed them in detail in Table F13. The most common contradictions occurred when a patient’s pain level was recorded as zero. This often happened when pain was the reason for admission but had subsided by the time of assessment, or when patients presented with symptoms not typically associated with pain, such as weakness or neurological issues. In the former case, for example, patients admitted for chest or abdominal pain might later report “no pain” or “not severe,” which the model may classify as contradictory. In the present illness section, contradictions frequently involved inconsistencies in symptom onset. A patient might describe a symptom as their first experience or as starting suddenly, while their medical record indicates it began days or weeks earlier. For example, a symptom that began two weeks ago could reasonably be described as “sudden” if it started abruptly at that time, or as “new” if the patient had never experienced it before. Such descriptions, while potentially appearing contradictory, are often valid depending on the patient’s perspective. However, due to subtle differences in wording, the model flagged these as contradictions. Some contradictions stemmed from structural issues in the patient data records. For example, a patient might be listed as widowed in the marital status section but described as caring for a spouse in the living situation section. Discrepancies also appeared in medication and medical device listings, where items mentioned in the patient’s history were missing from structured fields. Overall, these contradictions were minor and not clinically significant.
Given that the profiles were designed to include detailed answers to common clinical questions, a sufficiently capable model with strong contextual understanding would likely be able to generate responses with minimal contradictions. These findings suggest the potential effectiveness of our approach.

Table F13: Error analysis of sentence-level factuality evaluation. Distribution of profile categories most frequently contradicted by Llama 3.3 70B.

Profile Category        Count
Pain level              8
Present illness         6
Marital status          6
Current medications     3
Medical devices         3
ED chief complaint      2
ED Arrival Transport    2
Alcohol                 1
Family medical history  1

F.3 RQ3: Can LLMs reasonably fill in the blanks?

Table 4 in the main paper presents each labeler’s plausibility ratings for answers about unspecified information, along with inter-rater agreement metrics. Here, we evaluate the agreement between Gemini-2.5-flash and the human labelers by having Gemini-2.5-flash rate the same set of PATIENTSIM’s responses. For each labeler, agreement with the LLM was computed over an average of 615 responses. Across all labelers, we observe Gwet’s AC1 agreement scores above 0.8, demonstrating the reliability of Gemini’s automatic plausibility assessments.

Table F14: Plausibility scores for unsupported sentences in patient responses, labeled by four clinicians. Each clinician annotated 39 consultations, around 615 sentences each. We automatically annotate the same sentences using Gemini-2.5-flash, and measure the clinician-LLM agreement by Gwet’s AC1 with 95% confidence intervals estimated via 1,000 bootstrap iterations.

                   Clinician A            Clinician B            Clinician C            Clinician D
Gemini-2.5-flash   0.944 (0.923,
0.960)   0.945 (0.926, 0.961)   0.964 (0.947, 0.977)   0.883 (0.857, 0.907)

F.4 Ablation study

F.4.1 Validation of sentence-level classification

We validated the performance of an LLM sentence classifier in detecting supported and unsupported statements using 10 distinct profiles. From 10 consultations, we extracted 411 sentences, which were manually annotated by the author on a sentence-by-sentence basis. Of these, 93% (382 sentences) were classified as informational. These informational sentences were further annotated to determine: 1) related profile items (e.g., age, gender), 2) whether each sentence was entailed or contradicted by the profile, and 3) whether the sentence contained information not explicitly mentioned in the profile. Manual annotations are compared against predictions from Gemini-2.5-flash (Gemini) and Gpt-4o-1120 (Gpt-4o) to evaluate classification performance across four categories:

• Sentence category classification: identifies whether a sentence is informational or non-informational.
  – Acc (%): proportion of correct classifications.
  – Recall (%): proportion of true informational sentences correctly identified.
• Detection of related profile items: assesses the model’s ability to identify the correct profile items related to each sentence, measured by:

  P_{item} = |Pred_{item} ∩ GT_{item}| / |Pred_{item}|,   R_{item} = |Pred_{item} ∩ GT_{item}| / |GT_{item}|,   F1_{item} = 2 · P_{item} · R_{item} / (P_{item} + R_{item})    (5)

  where Pred_{item} is the set of profile items predicted by the model, and GT_{item} is the set of ground-truth items annotated by the human.
• Entailment evaluation for detected items: measures accuracy in classifying entailment or contradiction for correctly detected profile items.
  – Acc_val (%): proportion of correct entailment/contradiction labels among overlapping keys.
• Unsupported information detection: evaluates the model’s ability to identify sentences with information not explicitly in the profile, measured by:

  P_{unsupp} = TP_{unsupp} / |Pred_{unsupp}|,   R_{unsupp} = TP_{unsupp} / |GT_{unsupp}|,   F1_{unsupp} = 2 · P_{unsupp} · R_{unsupp} / (P_{unsupp} + R_{unsupp})    (6)

  where TP_{unsupp} is the number of unsupported sentences correctly identified by the model, Pred_{unsupp} is the set of sentences predicted as unsupported by the model, and GT_{unsupp} is the set of ground-truth unsupported sentences.

Table F15 summarizes the performance of Gemini and Gpt-4o. Gemini outperforms Gpt-4o overall, particularly in recall, despite Gpt-4o showing slightly higher precision in detecting related profile items and unsupported information. In medical applications, recall is prioritized over precision to minimize missed detections, which could have critical consequences. As both models achieve precision above 0.8, indicating robust performance, Gemini’s superior recall makes it the preferred evaluator.

Table F15: Comparison of sentence-level evaluation metrics between Gemini-2.5-flash and Gpt-4o.

Metric                            Gemini 2.5 Flash   GPT-4o
Sentence category classification
  Acc (%)                         0.96               0.94
  Recall (%)                      0.99               0.98
Detect related profile items
  P_key                           0.90               0.92
  R_key                           0.96               0.94
  F1_key                          0.92               0.92
Entailment evaluation
  Acc_val                         0.98               0.97
Detect unsupported information
  P_unsupp                        0.84               0.89
  R_unsupp                        0.86               0.64
  F1_unsupp                       0.84               0.74

F.4.2 Ablation study on doctor LLM

To evaluate the ability of LLMs, acting as the doctor, to elicit and assess patient responses, we conducted an ablation study focusing on the doctor LLM’s capacity to extract information from diverse patient personas. While the main study assessed patients’ ability to provide accurate information, this study examines how effectively the doctor LLM gathers information across varied patient profiles. We measured three metrics, ICov, ICon, and their product (Weighted ICon), to assess the
amount and consistency of information extracted. These metrics were calculated for 108 patients, each randomly assigned one of 37 distinct personas. In this study, we varied only the doctor model, testing Gemini-2.5-flash, Gpt-4o-mini, and Llama 3.3 70B, while fixing the PATIENTSIM LLM backbone to Llama 3.3 70B. In Table F16, Gpt-4o-mini achieved the highest ICov and Weighted ICon scores, demonstrating superior performance in extracting information. Consequently, we selected Gpt-4o-mini as the doctor LLM for the automatic evaluation of PATIENTSIM.

Table F16: Dialogue-level factuality evaluation across Social History (Social), Previous Medical History (PMH), and Current Visit Information (Current Visit). Information Consistency (ICon) is rated on a 4-point scale by Gemini-2.5-flash, and Weighted ICon represents Information Coverage-Weighted Consistency, reflecting both coverage and consistency.

                       Information Coverage (ICov)       Information Consistency (ICon)    Weighted ICon
                       Social  PMH   Current Visit       Social  PMH   Current Visit       Social  PMH   Current Visit  Avg.
Gemini-2.5-flash       0.34    0.72  0.82                3.74    3.18  2.98                1.27    2.29  2.44           2.00
Gpt-4o-mini            0.44    0.74  0.86                3.78    3.03  2.92                1.66    2.24  2.51           2.14
Llama3.3-70B-instruct  0.31    0.54  0.79                3.71    3.14  2.95                1.15    1.70  2.33           1.73

G Responsible use and limitations

Our open-source patient simulator framework provides a safe, privacy-preserving environment to evaluate clinical LLMs through realistic interactions without real patient data. While its 37 predefined personas offer diverse scenarios, they may not fully cover the variability of real-world clinical settings. Additionally, simulated conversations cannot capture non-verbal cues, and over-reliance on the simulator may limit the assessment of practical clinical skills. The simulator is not intended for developing clinical decision-making tools for real patient care without rigorous clinical oversight, as it is designed solely for educational and research purposes.
Acknowledging these limitations and incorporating expert feedback are essential for its effective use.
Trinity-RFT: A General-Purpose and Unified Framework for Reinforcement Fine-Tuning of Large Language Models

Xuchen Pan∗†, Yanxi Chen∗†, Yushuo Chen∗, Yuchang Sun∗, Daoyuan Chen∗, Wenhao Zhang, Yuexiang Xie, Yilun Huang, Yilei Zhang, Dawei Gao, Yaliang Li†, Bolin Ding†, Jingren Zhou
Alibaba Group

Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models. It is built with a decoupled design, consisting of (1) an RFT-core that unifies and generalizes synchronous/asynchronous, on-policy/off-policy, and online/offline modes of RFT, (2) seamless integration for agent-environment interaction with high efficiency and robustness, and (3) systematic data pipelines optimized for RFT. Trinity-RFT can be easily adapted for diverse application scenarios, and serves as a unified platform for exploring advanced reinforcement learning paradigms. This technical report outlines the vision, features, design and implementations of Trinity-RFT, accompanied by extensive examples demonstrating the utility and user-friendliness of the proposed framework.

GitHub: https://github.com/modelscope/Trinity-RFT
Documents: https://modelscope.github.io/Trinity-RFT

Note: Trinity-RFT is currently under active development. This technical report corresponds to commit id f17db3d (May 23, 2025) of the GitHub repository, and will be continuously updated as the codebase evolves. Comments, suggestions and contributions are welcome!

Figure 1: The design of Trinity-RFT.

∗Equal contribution. †Corresponding author.
{chenyanxi.cyx, panxuchen.pxc, yaliang.li, bolin.ding}@alibaba-inc.com

arXiv:2505.17826v1 [cs.LG] 23 May 2025

1 Vision of Trinity-RFT

Reinforcement learning (RL) has achieved remarkable success in the development of large language models (LLMs). Examples include aligning LLMs with human preferences via reinforcement learning from human feedback (RLHF) [18], and training long-CoT reasoning models via RL with rule-based rewards [3, 28]. However, such approaches are limited in their abilities to handle dynamic, real-world, and continuous learning.

Trinity-RFT envisions a future where AI agents learn by interacting directly with environments, collecting delayed or complex reward signals, and continuously refining their behavior through RL based on the collected experiences [26]. For example, imagine an AI scientist who designs an experiment, executes it, waits for feedback (while working on other tasks concurrently), and iteratively updates itself based on true environmental rewards when the experiment is finally finished. Trinity-RFT offers a path into this future by providing various useful features.

2 Key Features

Trinity-RFT is a general-purpose, flexible, scalable and user-friendly RL framework that can be adapted for diverse experimental or real-world scenarios. The following features of Trinity-RFT make it a promising solution for realizing the aforementioned vision:

• Unified RFT modes & algorithm support. Trinity-RFT unifies and generalizes existing RFT methodologies into a flexible and configurable framework, supporting synchronous/asynchronous, on-policy/off-policy, and online/offline training, as well as hybrid modes that seamlessly combine them into a single learning process (e.g., incorporating expert trajectories to accelerate an online RL process [15, 36]).
This is made possible partly by our decoupled design, which allows rollout and training to be executed separately and scaled up independently on different machines, as introduced in the next section.

• Agent-environment interaction as a first-class citizen. Trinity-RFT allows delayed rewards in multi-step/time-lagged feedback loops, and handles long-tailed latencies and environment/agent failures gracefully, which ensures efficiency and robustness in real-world scenarios with
complex agent-environment interaction.

• Systematic data pipelines optimized for RFT. These include converting raw datasets to task sets for RFT, cleaning/filtering/prioritizing experiences stored in the buffer, incorporating priors and additional reward signals from various sources into the collected experiences, synthesizing data for augmenting tasks and experiences, and offering user interfaces for human-in-the-loop, among others.

3 Design and Implementations

The overall design of Trinity-RFT exhibits a trinity consisting of (1) RFT-core, (2) agent-environment interaction, and (3) data pipelines, which are illustrated in Figure 1 and elaborated in this section.

3.1 RFT-Core

RFT-core is the component of Trinity-RFT, highlighted at the center of Figure 1, where the core RFT process happens. Its design also exhibits a trinity, consisting of the explorer, trainer, and buffer.

• The explorer, powered by a rollout model, takes a task as input and solves it by executing a workflow that specifies the logic of agent-environment interaction, thereby collecting experiences (including rollout trajectories, rewards, and other useful information) to be stored in the buffer.

• The buffer stores experiences that can be generated by the explorer or by other sources, such as human experts. It also assists with fetching training samples for the trainer, and can be integrated with advanced sampling strategies and post-processing operations.

• The trainer, backed by a policy model, samples batches of experiences from the buffer and updates the policy model via RL algorithms.

Our implementations allow the explorer and trainer to be deployed on separate machines. This decoupled design of RFT-core enables Trinity-RFT to support diverse RFT modes with great flexibility, for example:

• In asynchronous mode, the explorer and trainer get launched simultaneously, work in close coordination, and synchronize their model weights once every sync_interval training steps.
Within each synchronization period, the explorer continuously generates sync_interval batches of rollout experiences and stores them in the buffer, which are then retrieved and utilized by the trainer for updating the policy model. If sync_interval = 1, this is an on-policy RL process, whereas if sync_interval > 1, it becomes off-policy (akin to the mode adopted in [28]) and can be accelerated by pipeline parallelism between the explorer and trainer. This mode can be activated by setting the configuration parameter mode to both.

• In a fully asynchronous mode, the explorer and trainer are mostly decoupled and can scale up independently. The explorer continuously generates rollout experiences and stores them in the buffer, while the trainer continuously samples experiences from the buffer and uses them for training the policy model. External experiences, e.g., those generated by expert models or humans, can be continuously incorporated into the buffer as well. The explorer synchronizes its model weights with the trainer once in a while, keeping the distribution of rollout experiences up to date. This mode can be activated by setting mode to explore/train and launching the explorer and trainer separately on different machines.

• Trinity-RFT supports a benchmark mode that allows the user to evaluate one or multiple checkpoints on arbitrary benchmarks, after the RFT training process has finished. To activate this mode, the user simply needs to set mode to bench and specify the paths for the evaluation datasets in the configurations. This mode can be particularly
useful for experimental purposes; for example, the user might want to try out different RFT techniques or configurations quickly (with limited evaluation on hold-out data) during training, identify which RFT trials have achieved stable convergence and high rewards, and then conduct more thorough evaluations only for the checkpoints of these successful trials.

• In certain scenarios, the user would like to train the policy model without further exploration, using experiences that have already been collected and stored in the buffer. This train-only mode can be activated by setting the mode parameter to train and launching the trainer alone. Offline methods like Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) [20] can be regarded as special cases of such scenarios, both of which are natively supported by Trinity-RFT. For another example, consider an online RFT process that extends over a long period of time, where the explorer alone is launched during the daytime for serving human users and collecting experiences, while the trainer alone is launched at night for updating the policy model (which will be thoroughly validated and evaluated before it can actually be deployed as the rollout model for the next day).

Another benefit brought by the decoupled design is that explorers and trainers can operate across separate devices (e.g., multiple explorers on edge devices, one trainer on a high-end GPU cluster) and scale up independently. Indeed, we anticipate that for successful RL in real-world scenarios, distributed collection of massive rollout experiences could be one promising solution to data scarcity.

Remark 1. We take a system-algorithm co-design perspective in the development of Trinity-RFT, and aim to unify and generalize diverse RFT methodologies in this framework, including synchronous/asynchronous, on-policy/off-policy, and online/offline modes. RFT-core provides the necessary infrastructure for achieving this goal.
While the main text of this technical report focuses on the system perspective, interested readers may refer to Appendix A for further discussion of the algorithm perspective of off-policy/asynchronous RL.

3.2 Agent-Environment Interaction

Numerous challenges arise when one tries to build an RFT framework that can efficiently and robustly handle real-world interaction between the LLM-powered agent and the environment. These include long-tailed latencies, agent/environment failures, and lagged reward signals, among others.

Trinity-RFT regards agent-environment interaction as a first-class citizen and incorporates various solutions to tackle these challenges, for example:

• Built upon vLLM [10], our implementation of the LLM inference engine in Trinity-RFT supports asynchronous and concurrent generation of rollout trajectories for multiple tasks. This helps mitigate the straggler effect caused by long-tailed latencies in rollout generation and agent-environment interaction, thereby accelerating the RFT process. Load balancing among multiple LLM inference engines within one RFT training course is also taken care of, and is one direction for further optimizing the utilization of computational resources.

• Trinity-RFT incorporates various timeout/retry/skip mechanisms for fault tolerance and robustness, which ensure that continuous rollout generation is not interrupted or blocked by individual failures in certain rounds of agent-environment interaction. This is crucial for stable and efficient learning in real-world scenarios, e.g., when the agent interacts with a large number of MCP services [12] that differ vastly in quality and availability.

• Trinity-RFT is built to provide native support
for asynchronous RFT modes, which allow great flexibility in the paces of the explorer and trainer. This can boost the overall efficiency of the RFT process, compared to synchronous modes where the slower of the explorer and trainer can block the progress of the other and waste computational resources.

• For lagged reward signals, the trinity design of RFT-core offers a natural solution. As soon as a rollout trajectory (without reward values) has been generated, it is saved into the experience buffer but marked as "not ready for training". The explorer is then free from this task and may continue to collect experiences for other tasks. When the reward signals from the environment finally arrive, they are written to the buffer, and the corresponding experience is marked as "ready for training".

To adapt Trinity-RFT to a new downstream scenario, the user mainly needs to define and register a customized workflow (by inheriting the base class Workflow or MultiTurnWorkflow) where the logic of agent-environment interaction is implemented. Detailed examples are demonstrated in Sections 4.1 and 4.2.

3.3 Data Pipelines

The data pipelines in Trinity-RFT aim to address fundamental challenges in RFT scenarios, such as managing heterogeneous data dynamics across interaction workflows, enabling delayed reward integration, and facilitating continuous data curation. Our solutions center on three core aspects: end-to-end data transformation, active data-reward co-shaping, and human-in-the-loop curation, each corresponding to key requirements identified in our development of RFT-core (Section 3.1).

3.3.1 End-to-end Data Transformation

To support the diverse RFT modes (e.g., synchronous or asynchronous) in Trinity-RFT, we establish a service-oriented data pipeline architecture as illustrated in Figure 2.
It decouples data pipeline logic from procedure control to enable flexible RL-oriented data transformations with two key modules:

• The Formatter Module unifies disparate data sources into RFT-compatible formats, providing convenient conversion between raw inputs (e.g., meta-prompts, domain-specific corpora, and QA pairs with tagged rewards) and structured RFT representations (e.g., via RftDatasetModel). For efficient RFT workloads, we utilize PostgreSQL-based persistent storage supporting both ExperienceModel for prioritized trajectories and DPODataModel for preference pairs, which is particularly beneficial for asynchronous RFT mode and distributed environments. Full schema details appear in Appendix B.1.

• The Controller Module manages the complete data pipeline lifecycle through distributed server initialization, declarative configuration, and automated dataset persistence. It implements dynamic control mechanisms for asynchronous scenarios and protection against resource exhaustion, with configurable termination conditions based on compute quota or data quantity.

This modular design enables Trinity-RFT to handle data transformations flexibly while maintaining consistency across different RFT modes. More service utilities are detailed in Appendix B.2.

[Figure 2 diagram: a raw dataset is converted into a task set (which can be augmented, e.g., via data synthesis); the explorer samples tasks for rollout and writes rollout experiences to the buffer; the trainer samples batches of experiences for RL training; operations on the buffer include incorporating additional feedback (from environment / human / LLM-as-a-judge / other grader) and cleaning/filtering/prioritizing/synthesizing experiences.]

Figure 2: An illustration of data flow in Trinity-RFT. The Formatter-Controller duality mirrors the explorer-trainer decoupling in RFT-core, enabling parallel data ingestion and model updating. This design also allows Trinity-RFT to handle delayed rewards through version-controlled experience updates while maintaining low-latency sampling for the trainer.
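The delayed-reward handling that this data flow supports (trajectories stored immediately, rewards attached later, and the trainer sampling only completed experiences) can be sketched in a minimal, framework-free form. The class and method names below are illustrative stand-ins, not Trinity-RFT's actual API:

```python
# A minimal sketch of an experience buffer that tolerates delayed rewards:
# rollouts are stored as soon as they are generated, marked "not ready",
# and become visible to the trainer only once their reward arrives.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class BufferedExperience:
    tokens: List[int]
    reward: Optional[float] = None
    ready_for_training: bool = False


class ExperienceBuffer:
    def __init__(self):
        self._entries = {}

    def add_rollout(self, exp_id: str, tokens: List[int]) -> None:
        # Stored immediately; the explorer is free to move on to other tasks.
        self._entries[exp_id] = BufferedExperience(tokens=tokens)

    def attach_reward(self, exp_id: str, reward: float) -> None:
        # Called when the (possibly delayed) environment reward finally arrives.
        exp = self._entries[exp_id]
        exp.reward = reward
        exp.ready_for_training = True

    def sample_ready(self) -> List[BufferedExperience]:
        # The trainer only ever sees completed experiences.
        return [e for e in self._entries.values() if e.ready_for_training]


buf = ExperienceBuffer()
buf.add_rollout("task-1", tokens=[1, 2, 3])
assert buf.sample_ready() == []        # reward not yet available
buf.attach_reward("task-1", reward=1.0)
assert len(buf.sample_ready()) == 1    # now visible to the trainer
```

In the actual system this state lives in PostgreSQL-backed storage rather than an in-memory dict, but the "not ready" / "ready for training" marking follows the same pattern.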
3.3.2 Active Data-Reward Co-Shaping

To enable efficient and effective learning from complex and delayed reward signals, Trinity-RFT introduces a rich set of mechanisms that actively shape both data
and reward throughout the RFT process.

• Trinity-RFT provides the Processor Module to support convenient and extensive data processing capabilities tailored to RFT, through dedicated bridge classes that link Trinity-RFT to over 100 operators from the Data-Juicer system [2]. This module offers efficient integration with the experience buffer, and provides composable building blocks for experience cleaning (e.g., length filters, duplication removal), safety alignment (e.g., toxicity detection, ethics checks), and preference data synthesis (e.g., critique-conditioned augmentation). Users can specify processing chains using either natural language or explicit configurations; both are accessible to RFT users familiar with Data-Juicer and to those who are not, thanks to the transparency of the underlying system details. By treating Data-Juicer as a modular data processing operator pool rather than a central dependency, Trinity-RFT provides RL-specific abstractions and coherence while benefiting from well-established data tools.

• RL-oriented data shaping. RL is particularly useful in scenarios where optimization goals cannot be clearly defined, leading users to express vague or high-level requirements. In response, Trinity-RFT automates the translation of these objectives into optimized data processing pipelines. It reduces problem complexity through a structured hierarchy: Quality → Difficulty → Diversity → Quantity, with auto-selection of appropriate operators from Data-Juicer. Moreover, Trinity-RFT provides many composable and extensible interfaces for data shaping, aiming to seamlessly inject user-defined inductive biases. For instance, DataCleaner (a sub-module of Processor) supports multi-way cleaning with statistical filtering (e.g., entropy-based diversity [11]) and rule-based validation (e.g., boxed-answer verification in math datasets [7]), while DataSynthesizer enables both autonomous generation and hybrid augmentation with controlled feedback metrics [35].
• Active data optimization. As agents interact with environments, the experiences collected and stored in the buffer can become large-scale and highly diverse. Using the right experiences for RL training is crucial for a cost-effective and stable learning process. To this end, Trinity-RFT allows users to flexibly apply multi-dimensional utility scoring to prioritize samples (e.g., 0.4 * difficulty + 0.3 * diversity + 0.1 * quality - 0.2 * frequency). Moreover, we introduce the DataActiveIterator module to enable (1) version-controlled experience reuse through ExperienceModel; (2) cross-task data lineage tracking via rich relationships derived from sample_id and rollout-model_id; and (3) incorporation of human annotations into the rewards. These features are demonstrated through concrete examples in Section 4 and Appendix B.3.

3.3.3 Human-AI Collaboration

In scenarios where human feedback is irreplaceable, Trinity-RFT establishes a bi-directional human-AI collaboration loop that provides first-class support for human annotations, based on Label Studio [30] and Data-Juicer's HumanOPs.

• Multi-stage annotation. Trinity-RFT implements configurable procedures combining automatic pre-screening and human verification. Typical stages include preference annotation (comparative assessment of model responses), quality auditing (human verification of automated cleaning/synthesis results), and cold-start bootstrapping (initial dataset curation through expert demonstrations).

• Native asynchronism support. As the collection of human feedback is generally slower than AI/model feedback, we provide dedicated capabilities to handle both synchronous and asynchronous feedback modes, with configurable timeout and polling parameters. The feedback collaboration is based on an event-driven design, with automatic task creation upon data state changes, configurable notifications via email/Slack/webhook, and an atomic transaction model for annotation batches.

• Customization.
Different applications may involve humans in heterogeneous ways.
We thus prioritize flexibility at both the interaction-interface and service levels. Examples include rich built-in interfaces that can be extended in a visual style with the XML-like tags provided by Label Studio, fine-grained quality scoring for reward shaping, and free-form feedback attachment for dataset shaping, among others. Moreover, for easy deployment, we provide local Label Studio instance management with automatic environment setup via Docker/pip; optimized SDK interactions with batch request coalescing; unified logging across annotation tools and ML services; and concurrent annotation campaigns through priority-based task routing, with full data lineage preserved via LineageTracker.

The decoupled design of Trinity-RFT, or the presence of an experience buffer in particular, enables human feedback to participate in RL loops without breaking the asynchronous execution model. For instance, human-verified samples can be prioritized for training while fresh experiences are being collected, which is a critical capability for real-world deployment scenarios with mixed feedback sources. Further details of human-AI collaboration in Trinity-RFT are illustrated in Section 4.4.

3.4 Performance Optimizations and User-Friendliness

Trinity-RFT has made numerous efforts in pursuit of efficiency, for example:

• For multi-turn conversations and ReAct-style workflows [37], Trinity-RFT supports concatenating multiple rounds of agent-environment interaction compactly into a single sequence, with proper masking that indicates which tokens should be incorporated into the training objective of RL algorithms.

• For the synchronous RFT mode, Trinity-RFT accelerates the RFT process via pipeline parallelism between the explorer and trainer.

• Trinity-RFT utilizes Ray [14] for its distributed runtime, and NCCL communication primitives [16] (whenever feasible) for model weight synchronization between the explorer and trainer.
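The masking scheme for multi-turn sequences can be illustrated with a small sketch. The function below and its toy token ids are our own stand-ins, not Trinity-RFT's implementation; the idea is simply that assistant tokens receive mask 1 and all other tokens mask 0, so the RL loss covers model outputs alone:

```python
# Sketch: pack (role, token_ids) turns into one token sequence plus a loss
# mask that selects only assistant (model-generated) tokens for training.
from typing import List, Tuple


def pack_turns(turns: List[Tuple[str, List[int]]]) -> Tuple[List[int], List[int]]:
    tokens, mask = [], []
    for role, ids in turns:
        tokens.extend(ids)
        # 1 for assistant tokens (trained on), 0 for everything else.
        mask.extend([1 if role == "assistant" else 0] * len(ids))
    return tokens, mask


tokens, mask = pack_turns([
    ("user", [11, 12]),           # environment observation
    ("assistant", [21, 22, 23]),  # model action (trained on)
    ("user", [13]),               # next observation
    ("assistant", [24]),          # next action (trained on)
])
assert tokens == [11, 12, 21, 22, 23, 13, 24]
assert mask == [0, 0, 1, 1, 1, 0, 1]
```

Packing the whole interaction into one sequence this way lets a standard RL objective consume a multi-turn episode as a single training sample, rather than one sample per turn.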
In addition, Trinity-RFT has been designed with user-friendliness as a top priority, for example:

• We include a monitor (built upon Wandb [32] and TensorBoard [29]) that allows the user to conveniently track the progress of an RFT process, both quantitatively (e.g., via learning curves for rewards and other metrics) and qualitatively (e.g., via concrete examples of rollout trajectories generated at different RL steps). See Figure 3 for an example snapshot of the monitor.

• We also implement a configuration manager, as shown in Figure 4, that allows the user to create configuration files conveniently via a front-end interface.

• We provide Trinity-Studio, an all-in-one unified UI that allows the user to configure and run data inspection, data processing, the RFT learning process, etc., all by clicking the mouse and filling in forms, without writing any code. More functions will be integrated in the future. An example of developing with Trinity-Studio is introduced in Section 4.5.

Figure 3: A snapshot of the monitor implemented in Trinity-RFT. (a) The "beginner" mode. (b) The "expert" mode.

Figure 4: Snapshots of the configuration manager.

4 Examples and Applications

Through some concrete examples, we demonstrate how to use Trinity-RFT for different purposes, and also exemplify some concepts introduced in previous sections. Additional step-by-step tutorials can be found on the documentation website¹ or in the /examples folder of the GitHub repository².

4.1 Single-Turn Workflow

In a simple yet common scenario, a user of Trinity-RFT would like to train an LLM for completing single-turn tasks, where the LLM generates one response to each input query. For this purpose, the user mainly needs to (1) define and register a single-turn workflow class (by inheriting the base class Workflow) tailored to the targeted tasks, and (2) specify the dataset of tasks (for training and/or evaluation) and the initial
LLM, both compatible with HuggingFace [9] and ModelScope [13] formats.

Listing 1 gives a minimal example of implementing a single-turn workflow. Suppose that each task is specified by a <question, answer> tuple. The run() method of ExampleWorkflow calls the LLM once to generate a response for the question, calculates its reward, and returns an Experience instance that consists of the response itself, the reward value, and the log-probabilities of next-token prediction by the rollout model (which are necessary for certain RL algorithms, such as PPO [22] and GRPO [23]). Some built-in workflows have been implemented in Trinity-RFT, e.g., the MathWorkflow class for math-related tasks.

Trinity-RFT also allows more flexible and advanced usage by changing just a few configuration parameters, e.g., running supervised fine-tuning (SFT) as a warmup stage before the RFT process, or choosing between the synchronous and asynchronous modes of RFT (as described in Section 3.1).

@WORKFLOWS.register_module("example_workflow")
class ExampleWorkflow(Workflow):

    def __init__(self, model: ModelWrapper, task: Task, **kwargs):
        super().__init__(model, task, **kwargs)
        self.question = task.raw_task.get("question")
        self.answer = task.raw_task.get("answer")

    def calculate_reward(self, response: str, truth: str) -> float:
        return 1.0 if response == truth else 0.0

    def run(self) -> List[Experience]:
        response = self.model.chat(
            [
                {
                    "role": "user",
                    "content": f"Question:\n{self.question}",
                }
            ],
            n=self.task.rollout_args.repeat_times,
            temperature=self.task.rollout_args.temperature,
        )
        reward: float = self.calculate_reward(response.response_text, self.answer)
        return [
            Experience(
                tokens=response.tokens,
                prompt_length=response.prompt_length,
                reward=reward,
                logprobs=response.logprobs,
            )
        ]

Listing 1: A minimal example for implementing a customized workflow.
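To make the control flow of Listing 1 concrete, the following framework-free sketch exercises the same exact-match reward with a stub in place of the rollout model. StubModel and the toy task are hypothetical stand-ins; a real Trinity-RFT workflow would receive a ModelWrapper and Task instead:

```python
# Framework-free sketch of Listing 1's flow: build a prompt from the task,
# query a (stubbed) model, and score the response with exact-match reward.
class StubModel:
    """Stand-in for the rollout LLM; always returns a canned string."""

    def __init__(self, canned: str):
        self.canned = canned

    def chat(self, messages, **kwargs):
        # A real model wrapper would return a Response object with tokens,
        # prompt_length, and logprobs; here we return plain text.
        return self.canned


def calculate_reward(response: str, truth: str) -> float:
    # Exact-match reward, as in Listing 1.
    return 1.0 if response == truth else 0.0


task = {"question": "2 + 2 = ?", "answer": "4"}
model = StubModel(canned="4")
response = model.chat([{"role": "user", "content": f"Question:\n{task['question']}"}])
assert calculate_reward(response, task["answer"]) == 1.0
assert calculate_reward("5", task["answer"]) == 0.0
```

In practice the response text would come from the configured rollout model, and the returned Experience would carry the tokens and log-probabilities needed by PPO/GRPO, as the listing shows.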
¹ https://modelscope.github.io/Trinity-RFT
² https://github.com/modelscope/Trinity-RFT

4.2 Multi-Turn Workflow

In more advanced cases, the user would like to train an LLM-powered agent that solves multi-turn tasks by repeatedly interacting with the environment. With Trinity-RFT, achieving this is mostly as simple as in the single-turn case, except that the user needs to define and register a multi-turn workflow class (by inheriting the base class MultiTurnWorkflow), where the logic of agent-environment interaction is implemented. Listing 2 provides one such example for ALFWorld [25]. For training efficiency, the process_messages_to_experience() method concatenates multiple rounds of agent-environment interaction compactly into an Experience instance consisting of a single token sequence with proper masking, which can readily be consumed by standard RL algorithms like PPO and GRPO.

@WORKFLOWS.register_module("alfworld_workflow")
class AlfworldWorkflow(MultiTurnWorkflow):
    """A workflow for the ALFWorld task."""

    # ...
    def generate_env_inference_samples(self, env, rollout_num) -> List[Experience]:
        print("Generating env inference samples...")
        experience_list = []
        for i in range(rollout_num):
            observation, info = env.reset()
            final_reward = -0.1
            memory = []
            memory.append({"role": "system", "content": AlfWORLD_SYSTEM_PROMPT})
            for r in range(self.max_env_steps):
                format_obs = format_observation(observation)
                memory = memory + [{"role": "user", "content": format_obs}]
                response_text = self.get_model_response_text(memory)
                memory.append({"role": "assistant", "content": response_text})
                action = parse_action(response_text)
                observation, reward, done, info = env.step(action)
                if done:
                    final_reward = reward
                    break
            experience = self.process_messages_to_experience(
                memory, final_reward, {"env_rounds": r, "env_done": 1 if done else 0}
            )
            experience_list.append(experience)
        # Close the env to save CPU memory
        env.close()
        return experience_list
    def run(self) -> List[Experience]:
        # ...
        game_file_path = self.task_desc
        rollout_n = self.repeat_times
        # ...
        env = create_environment(game_file_path)
        return self.generate_env_inference_samples(env, rollout_n)

Listing 2: An implementation of a multi-turn workflow for ALFWorld [25].

4.3 Dataset Curation

We present an example to showcase the end-to-end data transformation (Section 3.3.1) and data-reward co-shaping (Section 3.3.2) capabilities of Trinity-RFT in a mathematical reasoning task. As shown in Listings 3 and 4, the user configures dataset paths and field mappings through a centralized DataConfig, then specifies natural-language processing instructions via dj_process_desc. The system automatically translates these requirements into executable pipelines through three phases: (1) Processor applies LLM-powered difficulty scoring using Qwen-Max; (2) Controller optimizes quality-diversity tradeoffs through adaptive filtering thresholds, thereby prioritizing high-value training samples; (3) Formatter converts raw arithmetic problems into an RL-ready task set with specified reward functions and workflows.

This procedure exemplifies our active data optimization capability (Section 3.3.2) by combining automated quality assessments with configurable curation policies. Users can extend this pattern to many other domains through customization and adaptation: for example, users can define their own difficulty metrics and calculation rules in the parameters of Data-Juicer's OPs. In addition, users can flexibly utilize another Synthesizer module and extend new Controller policies to support seamless transitions between different domain-specific reward shaping and data optimization priors.
# Core dataset configuration
class DataConfig:
    """Centralized dataset configuration management"""

    def __init__(self):
        self.data = {
            "dataset_path": "/path/to/gsm8k",
            "format_config": {
                "prompt_key": "question",
                "response_key": "answer"
            },
            "dj_process_desc": "Compute difficulty scores for math questions",
            "agent_model_name": "qwen-max"
        }

    def get_metadata(self) -> dict:
        """Retrieve configuration metadata"""
        return self.data

Listing 3: Core dataset configuration, which includes processing instructions as metadata.

# Typical procedure that transforms a raw dataset through a multi-stage pipeline
my_dataset = RftDataset(data_config)  # Raw dataset auto-loading

my_dataset.format([
    BoxedMathAnswerFormatter(config),  # Domain-specific formatting
    RLHFFormatter(config)              # RLHF format conversion
])

my_dataset.to_taskset(
    reward_fn=AccuracyReward,  # Reward shaping specification
    workflow=MathWorkflow      # Execution environment binding
)

Listing 4: Dataset transformation with domain-specific formatting and RLHF conversion.

4.4 Human-AI Collaborative Annotation

This example demonstrates the human-in-the-loop capability of Trinity-RFT for preference modeling. As illustrated in Listing 5 and Figure 5, our framework integrates Label Studio's annotation interface with asynchronous data pipelines through four coordinated stages: (1) task generation: auto-creating annotation batches from model rollouts; (2) interactive labeling: providing a UI for side-by-side response comparison; (3) quality control: enforcing inter-annotator agreement thresholds; and (4) versioned storage: tracking preference lineage in pre-defined fields like those in DPODataModel.

This pipeline reflects Trinity-RFT's bi-directional collaboration feature (Section 3.3.3), backed by timeout-aware task polling and support for atomic batch commits. It enables hybrid procedures where initial AI pre-screening can reduce human workload in production deployments.
Annotation activities can scale across distributed teams through event-driven task routing. The system's flexibility enables rapid adaptation to diverse annotation protocols, allowing developers to implement custom labeling interfaces through XML-based templates or to integrate third-party annotation services via unified SDK endpoints. This capability underpins advanced use cases such as safety red-teaming
datasets and online instruction-tuning scenarios where human judgment remains irreplaceable for quality-critical decisions, particularly in human-centric sociocultural contexts where data quality, difficulty, and reward signals are difficult to verify logically.

# Human annotation configuration
class HumanAnnotationConfig:
    """Preference annotation pipeline configuration"""

    def __init__(self):
        self.process = [
            {
                "human_preference_annotation_mapper": {
                    "wait_for_annotations": True,  # Block until annotations complete
                    "timeout": 3600,               # Maximum wait time in seconds
                    "prompt_key": "prompt",        # Source field for prompts
                    "answer1_key": "answer1",      # First candidate response
                    "chosen_key": "chosen"         # Selected response key
                }
            }
        ]

    def get_pipeline(self) -> List[Dict]:
        """Get annotation processing pipeline"""
        return self.process

Listing 5: Configuration for human preference annotation.

Figure 5: An interactive interface for human preference annotation.

4.5 Trinity-Studio

Trinity-Studio provides visual interaction for the core capabilities of Trinity-RFT, designed to bridge the gap between system complexity and user accessibility. As shown in Figure 6a, its three integrated modules, "Training Portal", "pgAdmin", and "Label Studio", form a cohesive interface that directly supports the active data-reward co-shaping and human-AI collaboration capabilities established in Section 3.3.

• "Training Portal" (Figure 6b) implements configuration-to-execution procedures through declarative YAML editing with live validation and auto-completion. This design aligns with Trinity-RFT's decoupled architecture, enabling users to optionally specify dataset paths and processing and training parameters, while ensuring schema-aware validation against pre-defined data schema constraints (Section B.1). The live validation mechanism prevents misconfigurations that could disrupt the explorer-trainer coordination described in Section 3.1.
Furthermore, the integration of runtime metrics with tools like Wandb/TensorBoard directly supports the active data optimization feature by surfacing signals such as the difficulty distribution drifts and diversity metrics mentioned in Section 4.3. This transparency ensures that users can monitor how data curation strategies impact RFT performance in real time.

• "pgAdmin" (Figure 6c) reflects Trinity-RFT's end-to-end data transformation capabilities by providing a visual panel for the PostgreSQL-based storage (Section 3.3.1). This design serves the versioned data lineage requirements of RFT, particularly for scenarios involving asynchronous training (Section 3.3.2). With intuitive SQL query builders, users can easily adjust schemas and audit training experiences and human annotation batches with fine-grained precision. This capability is valuable for rapid validation of active learning policies by cross-referencing training outcomes with metadata (e.g., difficulty scores and staleness in asynchronous mode).

• The "Label Studio" page (Figure 6d) operationalizes Trinity-RFT's bi-directional human-AI collaboration capability (Section 3.3.3). Utilizing the provided task polling and atomic batch commit mechanisms, users can annotate data or experiences directly, providing an asynchronous way to involve human feedback and dynamically influence data curation.

By unifying these capabilities in a single UI, Trinity-Studio reduces the cognitive load of managing complex RFT procedures. For example, a researcher tuning a math reasoning task could use the Training Portal to adjust difficulty scoring parameters, view the resulting distribution shifts in the pgAdmin module, and then validate human annotators' preferences in the Label Studio page. This end-to-end visibility is critical for debugging and iterating on RFT strategies, particularly when balancing tradeoffs between quality, diversity, and reward shaping.
Moreover, this visual operation complements the programmatic APIs of Trinity-RFT while maintaining full compatibility with
CLI procedures. We implement Trinity-Studio with the Single-Spa framework [27]. The modular architecture enables custom view development through JavaScript plugins and flexible extensions for general-purpose usage.

5 Conclusion

This report has provided an overview of Trinity-RFT, a general-purpose, flexible, scalable, and user-friendly framework for reinforcement fine-tuning of large language models. Trinity-RFT offers a path into "the era of experience" [26] by supporting applications in diverse scenarios with complex agent-environment interaction, and by serving as a unified platform for exploring advanced RL paradigms.

Acknowledgements

Trinity-RFT is built upon many excellent open-source projects, including but not limited to: verl [24] and PyTorch's FSDP [5] for LLM training; vLLM [10] for LLM inference; Data-Juicer [2] for data processing functionalities; AgentScope [6] for agentic workflows; and Ray [14] for distributed systems. We have also drawn inspiration from RL/RFT projects like OpenRLHF [8], TRL [31], ChatLearn [1], and TinyZero [19].

Figure 6: Snapshots of Trinity-Studio. (a) Trinity-Studio dashboard. (b) Start training on the "Training Portal" page. (c) Manage data on the "pgAdmin" page. (d) Process data on the "Label Studio" page.

References

[1] ChatLearn. https://github.com/alibaba/ChatLearn

[2] Daoyuan Chen, Yilun Huang, Zhijian Ma, Hesen Chen, Xuchen Pan, Ce Ge, Dawei Gao, Yuexiang Xie, Zhaoyang Liu, Jinyang Gao, Yaliang Li, Bolin Ding, and Jingren Zhou. Data-Juicer: A one-stop data processing system for large language models. In International Conference on Management of Data, 2024.

[3] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F.
Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan,
Y. K. Li, Y. Q. Wang, Y. X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv, 2025.

[4] Yannis Flet-Berliac, Nathan Grinsztajn, Florian Strub, Bill Wu, Eugene Choi, Chris Cremer, Arash Ahmadian, Yash Chandak, Mohammad Gheshlaghi Azar, Olivier Pietquin, and Matthieu Geist. Contrastive policy gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion. In EMNLP, 2024.

[5] PyTorch FSDP. https://pytorch.org/docs/stable/fsdp.html.

[6] Dawei Gao, Zitao Li, Xuchen Pan, Weirui Kuang, Zhijian Ma, Bingchen Qian, Fei Wei, Wenhao Zhang, Yuexiang Xie, Daoyuan Chen, Liuyi Yao, Hongyi Peng, Ze Yu Zhang, Lin Zhu, Chen Cheng, Hongzhu Shi, Yaliang Li, Bolin Ding, and Jingren Zhou. AgentScope: A flexible yet robust multi-agent platform. arXiv, 2024.

[7] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset, 2021.

[8] Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. OpenRLHF: An easy-to-use, scalable and high-performance RLHF framework. arXiv, 2024.

[9] Huggingface. https://huggingface.co/.

[10] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.
Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

[11] Zhenqing Ling, Daoyuan Chen, Liuyi Yao, Yaliang Li, and Ying Shen. Diversity as a reward: Fine-tuning LLMs on a mixture of domain-undetermined data. arXiv preprint arXiv:2502.04380, 2025.

[12] Model context protocol servers. https://github.com/modelcontextprotocol/servers.

[13] ModelScope. https://www.modelscope.cn/home.

[14] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. arXiv, 2018.

[15] Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap between value and policy based reinforcement learning. In NIPS, 2017.

[16] NCCL. https://github.com/NVIDIA/nccl.

[17] Michael Noukhovitch, Shengyi Huang, Sophie Xhonneux, Arian Hosseini, Rishabh Agarwal, and Aaron Courville. Faster, more efficient RLHF through off-policy asynchronous learning. In The Thirteenth International Conference on Learning Representations, 2025.

[18] Long Ouyang, Pamela Mishkin, Jeff Wu, C L Mar, Jacob Hilton, Amanda Askell, and Paul Christiano. Training language models to follow instructions with human feedback. arXiv, 2022.

[19] Jiayi Pan, Junjie Zhang, Xingyao Wang, Lifan Yuan, Hao Peng, and Alane Suhr. TinyZero. https://github.com/Jiayi-Pan/TinyZero, 2025.

[20] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D.
Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

[21] Pierre Harvey Richemond, Yunhao Tang, Daniel Guo, Daniele Calandriello, Mohammad Gheshlaghi Azar, Rafael Rafailov, Bernardo Avila Pires, Eugene Tarassov, Lucas Spangher, Will Ellsworth, Aliaksei Severyn, Jonathan Mallinson, Lior Shani, Gil Shamir, Rishabh Joshi, Tianqi Liu, Remi Munos, and Bilal Piot. Offline regularised reinforcement learning for large language models alignment. arXiv, 2024.

[22] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv, 2017.

[23] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv, 2024.

[24] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. HybridFlow: A flexible and efficient RLHF framework. arXiv, 2024.

[25] Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. ALFWorld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations, 2021.

[26] David Silver and Richard S. Sutton. Welcome to the era of experience. https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf, 2025.

[27] Single-spa: A JavaScript framework for front-end microservices, 2025.
[28] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, Chuning Tang, Congcong Wang, Dehao Zhang, Enming Yuan, Enzhe Lu, Fengxiang Tang, Flood Sung, Guangda Wei, Guokun Lai, Haiqing Guo, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haotian Yao, Haotian Zhao, Haoyu Lu, Haoze Li, Haozhen Yu, Hongcheng Gao, Huabin Zheng, Huan Yuan, Jia Chen, Jianhang Guo, Jianlin Su, Jianzhou Wang, Jie Zhao, Jin Zhang, Jingyuan Liu, Junjie Yan, Junyan Wu, Lidong Shi, Ling Ye, Longhui Yu, Mengnan Dong, Neo Zhang, Ningchen Ma, Qiwei Pan, Qucheng Gong, Shaowei Liu, Shengling Ma, Shupeng Wei, Sihan Cao, Siying Huang, Tao Jiang, Weihao Gao, Weimin Xiong, Weiran He, Weixiao Huang, Wenhao Wu, Wenyang He, Xianghui Wei, Xianqing Jia, Xingzhe Wu, Xinran Xu, Xinxing Zu, Xinyu Zhou, Xuehai Pan, Y. Charles, Yang Li, Yangyang Hu, Yangyang Liu, Yanru Chen, Yejie Wang, Yibo Liu, Yidao Qin, Yifeng Liu, Ying Yang, Yiping Bao, Yulun Du, Yuxin Wu, Yuzhi Wang, Zaida Zhou, Zhaoji Wang, Zhaowei Li, Zhen Zhu, Zheng Zhang, Zhexu Wang, Zhilin Yang, Zhiqi Huang, Zihao Huang, Ziyao Xu, and Zonghan Yang. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv, 2025.

[29] TensorBoard. https://www.tensorflow.org/tensorboard.

[30] Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. Label Studio: Data labeling software, 2020-2025. Open source software available from https://github.com/HumanSignal/label-studio.

[31] Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, Shengyi Huang, Kashif Rasul, and Quentin Gallouédec. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl, 2020.

[32] Weights & Biases. https://wandb.ai/home.

[33] Taiyi Wang, Zhihao Wu, Jianheng Liu, Jianye HAO, Jun Wang, and Kun Shao. DistRL: An asynchronous distributed reinforcement learning framework for on-device control
agent. In The Thirteenth International Conference on Learning Representations, 2025.

[34] Tianbing Xu. Training large language models to reason via EM policy gradient. arXiv, 2025.

[35] Zhe Xu, Daoyuan Chen, Zhenqing Ling, Yaliang Li, and Ying Shen. MindGym: What matters in question synthesis for thinking-centric fine-tuning? arXiv preprint arXiv:2503.09499, 2025.

[36] Jianhao Yan, Yafu Li, Zican Hu, Zhi Wang, Ganqu Cui, Xiaoye Qu, Yu Cheng, and Yue Zhang. Learning to reason under off-policy guidance. arXiv, 2025.

[37] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.

A Off-Policy Reinforcement Learning Algorithms

For off-policy learning, one might simply adopt offline methods like supervised fine-tuning and direct preference optimization (DPO) [20], or their iterative variants where (some of) the training samples are generated online by the policy model itself and filtered by their reward values, e.g., rejection-sampling fine-tuning and iterative DPO. There also exist more advanced RL algorithms with dedicated designs for off-policy or asynchronous scenarios [15, 21, 28, 4, 17, 33, 34, 36]. In the rest of this section, we present some of our findings about off-policy RL (with a focus on the bandit setting) through the lens of online policy mirror descent (OPMD). At the end of this section, we arrive at the surprising conclusion that the standard policy gradient, weighted by a coefficient and using the group mean reward as the baseline, can be a feasible direction for updating the policy even in off-policy settings (while the standard theory of policy gradient only holds for on-policy settings). This has been validated empirically in our exploratory experiments during the development of Trinity-RFT.
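The "surprising conclusion" above ultimately rests on an algebraic identity (made explicit in Section A.3): the sum of pairwise reward-difference terms collapses to a group-mean-baseline policy gradient, $\sum_{i<j}(r_i - r_j)(g_i - g_j) = K\sum_i (r_i - \bar r)\, g_i$. Here is a quick numeric check, using scalar stand-ins for the per-rollout gradient vectors:

```python
import itertools
import random

random.seed(0)
K = 5
r = [random.uniform(-1, 1) for _ in range(K)]  # rewards r(x, y_i)
g = [random.uniform(-1, 1) for _ in range(K)]  # stand-ins for grad log pi(y_i|x)

# Left side: sum of pairwise contributions, as in the pairwise surrogate loss.
pairwise = sum((r[i] - r[j]) * (g[i] - g[j])
               for i, j in itertools.combinations(range(K), 2))

# Right side: K times the policy gradient with the group-mean baseline.
r_bar = sum(r) / K
baseline = K * sum((r[i] - r_bar) * g[i] for i in range(K))

assert abs(pairwise - baseline) < 1e-9
```

The identity holds for any rewards and gradients, which is why the pairwise off-policy derivation lands on the familiar group-baseline update.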
A.1 OPMD: Kimi's Version

Below is a recap of the OPMD variant proposed in the technical report of Kimi k1.5 [28].

Analysis. For a specific task/query $x$ and a reference policy $\pi_{\mathrm{ref}}$, consider the following objective for training the policy $\pi_\theta$ at a particular iteration of the RL process:

$$\max_\theta \; J(\theta; x, \pi_{\mathrm{ref}}) := \mathbb{E}_{y \sim \pi_\theta(\cdot|x)}[r(x, y)] - \tau \cdot D_{\mathrm{KL}}\big(\pi_\theta(\cdot|x) \,\|\, \pi_{\mathrm{ref}}(\cdot|x)\big).$$

Note that $\pi_{\mathrm{ref}}$ can be changing during the RL process. In [28], $\pi_{\mathrm{ref}}$ is set to $\pi_{\theta_t}$ at the $t$-th iteration, i.e., when updating the policy from $\theta_t$ to $\theta_{t+1}$. The optimal policy $\pi^\star$ for this objective satisfies the following: for any response $y$,

$$\pi^\star(y|x) = \frac{\pi_{\mathrm{ref}}(y|x)\, e^{r(x,y)/\tau}}{Z} \propto \pi_{\mathrm{ref}}(y|x)\, e^{r(x,y)/\tau}, \quad (1)$$

where

$$Z := Z(x, \pi_{\mathrm{ref}}) = \int \pi_{\mathrm{ref}}(y'|x)\, e^{r(x,y')/\tau}\, dy' = \mathbb{E}_{y' \sim \pi_{\mathrm{ref}}(\cdot|x)}\big[e^{r(x,y')/\tau}\big]. \quad (2)$$

Taking the logarithm of both sides of Eq. (1), we see that the optimal policy $\pi^\star$ must satisfy the following consistency condition:

$$r(x, y) - \tau \cdot \log Z - \tau \cdot \big(\log \pi^\star(y|x) - \log \pi_{\mathrm{ref}}(y|x)\big) = 0.$$

Algorithm. Based on the above analysis, [28] proposes the following OPMD variant. For a query $x$, first sample $K$ rollouts $y_1, \ldots, y_K \sim \pi_{\mathrm{ref}}(\cdot|x)$ from the reference policy, then define a surrogate loss as follows:

$$\hat{J}(\theta; x, \pi_{\mathrm{ref}}) := \sum_{i \in [K]} \Big( r(x, y_i) - \tau \cdot \log \hat{Z} - \tau \cdot \big(\log \pi_\theta(y_i|x) - \log \pi_{\mathrm{ref}}(y_i|x)\big) \Big)^2,$$

where

$$\tau \cdot \log \hat{Z} := \tau \cdot \log \Big( \frac{1}{K} \sum_{i \in [K]} e^{r(x, y_i)/\tau} \Big).$$

Although this is an off-policy method (since the rollout policy $\pi_{\mathrm{ref}}$ differs from the policy $\pi_\theta$ being updated), it is still limited because the rollouts have to be sampled from the particular policy $\pi_{\mathrm{ref}} = \pi_{\theta_t}$ for the $t$-th iteration of the RL process, as mentioned earlier. The reason for this limitation is the need of
estimating $Z = Z(x, \pi_{\mathrm{ref}})$ using samples from $\pi_{\mathrm{ref}}(\cdot|x)$.

A.2 Pairwise OPMD

Analysis. To eliminate the $Z$ term, we note that Eq. (1) is equivalent to the following: for all $y_1$ and $y_2$,

$$\frac{\pi^\star(y_1|x)}{\pi^\star(y_2|x)} = \frac{\pi_{\mathrm{ref}}(y_1|x)}{\pi_{\mathrm{ref}}(y_2|x)}\, e^{(r(x,y_1) - r(x,y_2))/\tau}.$$

Taking the logarithm of both sides, we have

$$\log \pi^\star(y_1|x) - \log \pi^\star(y_2|x) = \log \pi_{\mathrm{ref}}(y_1|x) - \log \pi_{\mathrm{ref}}(y_2|x) + \frac{r(x, y_1) - r(x, y_2)}{\tau},$$

or equivalently,

$$r(x, y_1) - \tau \cdot \big(\log \pi^\star(y_1|x) - \log \pi_{\mathrm{ref}}(y_1|x)\big) = r(x, y_2) - \tau \cdot \big(\log \pi^\star(y_2|x) - \log \pi_{\mathrm{ref}}(y_2|x)\big).$$

Note that this holds true for a pair of arbitrary responses $y_1$ and $y_2$.

Algorithm. For a query $x$ and $K$ arbitrary responses $y_1, \ldots, y_K$, we define the following surrogate loss:

$$\hat{J}(\theta; x, \pi_{\mathrm{ref}}) := \sum_{1 \le i < j \le K} (a_i - a_j)^2, \quad \text{where} \quad a_i := r(x, y_i) - \tau \cdot \big(\log \pi_\theta(y_i|x) - \log \pi_{\mathrm{ref}}(y_i|x)\big), \; i \in [K].$$

Here $\pi_{\mathrm{ref}}$ can be any reference policy for KL regularization, regardless of how $y_1, \ldots, y_K$ were sampled. While this is a fully off-policy RL method, it has its own limitation: to run this algorithm, we should make sure that multiple (at least 2) rollouts for the same task are included within one micro-batch (whose size is typically much smaller than that of a batch or mini-batch), which adds to infrastructure complexity.

Remark 2. In the special case of $K = 2$, the above method, termed "pairwise OPMD", turns out to be the same as "contrastive policy gradient" proposed in [4], albeit with a simpler and more intuitive derivation.

A.3 OPMD: An Embarrassingly Simple Variant

Analysis. Consider the $t$-th iteration of the RL process, i.e., updating from $\theta_t$ to $\theta_{t+1}$, and use $\pi_{\mathrm{ref}} = \pi_{\theta_t}$ as the reference policy. For a specific task/query $x$, recall from Section A.1 the original objective:

$$\max_\theta \; J(\theta; x, \pi_{\theta_t}) := \mathbb{E}_{y \sim \pi_\theta(\cdot|x)}[r(x, y)] - \tau \cdot D_{\mathrm{KL}}\big(\pi_\theta(\cdot|x) \,\|\, \pi_{\theta_t}(\cdot|x)\big).$$

We leverage the analysis in Section A.2, and take a closer look at the following pairwise loss for $a_i$ and $a_j$, normalized by $1/(1+\tau)^2$ to make the loss scale invariant to the hyperparameter $\tau$:

$$\frac{(a_i - a_j)^2}{(1+\tau)^2} = \frac{1}{(1+\tau)^2} \Big( \big(r(x, y_i) - r(x, y_j)\big) - \tau \cdot \Big[ \big(\log \pi_\theta(y_i|x) - \log \pi_{\theta_t}(y_i|x)\big) - \big(\log \pi_\theta(y_j|x) - \log \pi_{\theta_t}(y_j|x)\big) \Big] \Big)^2.$$
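To make the $Z$-free property of pairwise OPMD concrete, here is a small numeric sketch of the surrogate loss above; the rewards and sequence log-probabilities are invented toy values, not outputs of any real model:

```python
import itertools

def pairwise_opmd_loss(rewards, logp_theta, logp_ref, tau=0.1):
    """Pairwise OPMD surrogate loss: no partition function Z is needed.

    a_i = r(x, y_i) - tau * (log pi_theta(y_i|x) - log pi_ref(y_i|x)),
    loss = sum over pairs i < j of (a_i - a_j)^2.
    """
    a = [r - tau * (lt - lr)
         for r, lt, lr in zip(rewards, logp_theta, logp_ref)]
    return sum((a[i] - a[j]) ** 2
               for i, j in itertools.combinations(range(len(a)), 2))

# Toy rollouts: rewards and sequence log-probs under pi_theta and pi_ref.
rewards = [1.0, 0.0, 0.5]
logp_theta = [-2.0, -3.5, -2.8]
logp_ref = [-2.2, -3.0, -2.8]

loss = pairwise_opmd_loss(rewards, logp_theta, logp_ref)
assert loss > 0.0
# At the optimum all a_i coincide, so the loss vanishes:
assert pairwise_opmd_loss([0.0] * 3, [-1.0] * 3, [-1.0] * 3) == 0.0
```

Because only differences $a_i - a_j$ enter the loss, any quantity shared across rollouts (such as $\tau \log Z$) cancels, which is exactly what the derivation exploits.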
The trick here is that, if we only intend to take one gradient step of this loss at $\theta = \theta_t$, then the value of $\big(\log \pi_\theta(y_i|x) - \log \pi_{\theta_t}(y_i|x)\big) - \big(\log \pi_\theta(y_j|x) - \log \pi_{\theta_t}(y_j|x)\big)$ is simply zero. As a result,

$$\nabla_\theta \frac{(a_i - a_j)^2}{(1+\tau)^2} \Big|_{\theta_t} = \frac{-2\tau}{(1+\tau)^2} \big(r(x, y_i) - r(x, y_j)\big) \Big( \nabla_\theta \log \pi_\theta(y_i|x)\big|_{\theta_t} - \nabla_\theta \log \pi_\theta(y_j|x)\big|_{\theta_t} \Big),$$

and thus

$$\begin{aligned} \nabla_\theta \sum_{1 \le i < j \le K} \frac{(a_i - a_j)^2}{(1+\tau)^2} \Big|_{\theta_t} &= \sum_{1 \le i < j \le K} \frac{-2\tau}{(1+\tau)^2} \big(r(x, y_i) - r(x, y_j)\big) \Big( \nabla_\theta \log \pi_\theta(y_i|x)\big|_{\theta_t} - \nabla_\theta \log \pi_\theta(y_j|x)\big|_{\theta_t} \Big) \\ &= \sum_{1 \le i < j \le K} \frac{-2\tau}{(1+\tau)^2} \Big( \big(r(x, y_i) - r(x, y_j)\big) \nabla_\theta \log \pi_\theta(y_i|x)\big|_{\theta_t} + \big(r(x, y_j) - r(x, y_i)\big) \nabla_\theta \log \pi_\theta(y_j|x)\big|_{\theta_t} \Big) \\ &= \frac{-2\tau}{(1+\tau)^2} \sum_{1 \le i \le K} \sum_{1 \le j \le K} \big(r(x, y_i) - r(x, y_j)\big) \nabla_\theta \log \pi_\theta(y_i|x)\big|_{\theta_t} \\ &= \frac{-2\tau}{(1+\tau)^2} \sum_{1 \le i \le K} K \cdot \big(r(x, y_i) - \bar{r}(x)\big) \nabla_\theta \log \pi_\theta(y_i|x)\big|_{\theta_t}, \end{aligned}$$

where $\bar{r}(x) := \frac{1}{K} \sum_{j \in [K]} r(x, y_j)$ in the last line.

Algorithm. To this end, we update from $\theta_t$ to $\theta_{t+1}$ by taking one gradient step of the following surrogate loss, where we simplify the constant factor from $2\tau/(1+\tau)^2$ to $1/(1+\tau)$ and also drop the $K$ factor:

$$\min_\theta \; \hat{J}(\theta; x) := \frac{-1}{1+\tau} \sum_{1 \le i \le K} \big(r(x, y_i) - \bar{r}(x)\big) \log \pi_\theta(y_i|x).$$

This is exactly the standard policy gradient using the group mean reward as the baseline, but derived differently with an off-policy interpretation. The hyperparameter $\tau \ge 0$ controls the size of each policy update; a larger $\tau$ is effectively the same as a smaller learning rate. As a heuristic, we simply add a regularization term (denoted by $g$) to the above objective when additional regularization with respect to a fixed policy, e.g., an SFT model $\pi_{\mathrm{sft}}$, is desired:

$$\min_\theta \; \hat{J}(\theta; x) := \frac{-1}{1+\tau} \sum_{1 \le i \le K} \big(r(x, y_i) - \bar{r}(x)\big) \log \pi_\theta(y_i|x)$$
$$\;+\; \beta \cdot g\big(\pi_\theta, \pi_{\mathrm{sft}}; x, y_1, \ldots, y_K\big).$$

B Additional Details for Data Pipelines

B.1 Illustration of Typical Dataset Classes and Schemas

The dataset shaping process described in Section 3.3 is supported by PostgreSQL persistence with comprehensive schema designs; one example can be found in Listing 6. Key model classes include RftDatasetModel (core dataset storage with lineage tracking), ExperienceModel (which stores serialized experiences with priority scoring), and DPODataModel and SFTDataModel (specialized storage for DPO and SFT, respectively).

-- RftDataset table structure
CREATE TABLE rft_dataset (
    id SERIAL PRIMARY KEY,              -- Unique record identifier
    consumed_cnt INTEGER DEFAULT 0,     -- Usage counter for sampling
    prompt TEXT,                        -- Input instruction
    response TEXT,                      -- Model-generated response
    reward FLOAT,                       -- Training signal value
    quality_score FLOAT DEFAULT 0.0,    -- Quality assessment metric
    difficulty_score FLOAT DEFAULT 0.0, -- Complexity estimation
    diversity_score FLOAT DEFAULT 0.0,  -- Response uniqueness measure
    priority FLOAT DEFAULT 0.0          -- Sampling preference weight
);

Listing 6: Schema definition for the RftDataset PostgreSQL table.

B.2 Illustration of Data Processing Service

The architecture presented in Section 3.3 integrates Data-Juicer capabilities through a service-oriented design. Typically, the DJ-service lifecycle covers three stages: initialization via scripts/start_servers.py; configuration through dj_config_path or natural language (dj_process_desc); and execution, which handles dataset loading, cleaning, priority scoring, and database export. Moreover, we provide DJExecutor to bridge Data-Juicer capabilities through config translation and execution sandboxing, and dynamic termination via _check_clean_targets() with melt protection.

B.3 An Example of Active Data Optimization Algorithms

The DataActiveIterator serves as the core engine for data-model co-evolution with multi-dimensional utility scoring.
It provides extensible interfaces for flexible training prior injection, e.g., as demonstrated in Algorithm 1.

Algorithm 1 Multi-Dimensional Sample Selection
Require: Dataset D, weights w_q, w_d, w_k, w_f
Ensure: Selected subset D''
1: (q, d, k) ← DataCleaner.Process(D)            # Compute quality (q), diversity (d), difficulty (k)
2: U_i ← w_q q_i + w_d d_i + w_k k_i − w_f f_i   # Combine q, d, k, and frequency (f)
3: D' ← {s_i ∈ D | U_i > θ_min}                  # Threshold filtering
4: D'' ← Top-K(D', K)                            # Priority selection
5: ExperienceCache.Update(D'')                   # Update version-controlled cache
6: LineageTracker.Record(D'')                    # Track data provenance
7: return D''
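A minimal Python sketch of Algorithm 1 follows; DataCleaner, ExperienceCache, and LineageTracker are replaced by precomputed scores and plain lists, and the sample dicts, weights, and threshold are invented for illustration, not Trinity-RFT's actual interfaces:

```python
def select_samples(samples, w_q, w_d, w_k, w_f, theta_min, top_k):
    """Sketch of Algorithm 1: multi-dimensional sample selection.

    Each sample carries precomputed quality/diversity/difficulty/frequency
    scores (stand-ins for DataCleaner.Process in the pseudocode)."""
    scored = []
    for s in samples:
        # U_i = w_q*q + w_d*d + w_k*k - w_f*f  (line 2 of Algorithm 1)
        u = (w_q * s["quality"] + w_d * s["diversity"]
             + w_k * s["difficulty"] - w_f * s["frequency"])
        scored.append((u, s))
    # Threshold filtering (line 3), then Top-K priority selection (line 4).
    kept = [(u, s) for u, s in scored if u > theta_min]
    kept.sort(key=lambda t: t[0], reverse=True)
    return [s for _, s in kept[:top_k]]

samples = [
    {"id": 1, "quality": 0.9, "diversity": 0.8, "difficulty": 0.5, "frequency": 0.1},
    {"id": 2, "quality": 0.2, "diversity": 0.3, "difficulty": 0.9, "frequency": 0.8},
    {"id": 3, "quality": 0.7, "diversity": 0.6, "difficulty": 0.6, "frequency": 0.2},
]
chosen = select_samples(samples, w_q=1.0, w_d=1.0, w_k=0.5, w_f=1.0,
                        theta_min=0.5, top_k=2)
assert [s["id"] for s in chosen] == [1, 3]
```

The cache update and lineage recording of lines 5-6 would then persist the selected subset, e.g., into the rft_dataset table shown in Listing 6.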
arXiv:2505.17827v1 [cs.CL] 23 May 2025

Not All Tokens Are What You Need In Thinking

Hang Yuan1,5, Bin Yu2,5, Haotian Li2,6, Shijun Yang3,6, Christina Dan Wang4, Zhou Yu1, Xueyin Xu5,6, Weizhen Qi5,6, Kai Chen5,6
1East China Normal University, 2Harbin Institute of Technology, 3University of Science and Technology of China, 4New York University Shanghai, 5Zhongguancun Academy, 6Zhongguancun Institute of Artificial Intelligence
Correspondence: weizhenqi@zgci.ac.cn

Abstract

Modern reasoning models, such as OpenAI's o1 and DeepSeek-R1, exhibit impressive problem-solving capabilities but suffer from critical inefficiencies: high inference latency, excessive computational resource consumption, and a tendency toward overthinking—generating verbose chains of thought (CoT) laden with redundant tokens that contribute minimally to the final answer. To address these issues, we propose Conditional Token Selection (CTS), a token-level compression framework with a flexible and variable compression ratio that identifies and preserves only the most essential tokens in CoT. CTS evaluates each token's contribution to deriving correct answers using conditional importance scoring, then trains models on compressed CoT. Extensive experiments demonstrate that CTS effectively compresses long CoT while maintaining strong reasoning performance. Notably, on the GPQA benchmark, Qwen2.5-14B-Instruct trained with CTS achieves a 9.1% accuracy improvement with 13.2% fewer reasoning tokens (13% training token reduction). Further reducing training tokens by 42% incurs only a marginal 5% accuracy drop while yielding a 75.8% reduction in reasoning tokens, highlighting the prevalence of redundancy in existing CoT. The code and data are available at github and huggingface.
1 Introduction

Large reasoning models such as o1 (Jaech et al., 2024) and R1 (Guo et al., 2025) significantly enhance their reasoning capabilities through reinforcement learning, instructing models to generate thoughtful reasoning steps before producing final answers. Guo et al. (2025) demonstrated that by fine-tuning non-reasoning models like Qwen2.5-14B-Instruct on long Chain of Thought (CoT) data generated by R1, these models can acquire comparable reasoning abilities, even surpassing o1-mini on math and code reasoning tasks. Consequently, numerous distilled R1 reasoning datasets have emerged, including s1K (Muennighoff et al., 2025), SkyThought (Team, 2025), OpenMathReasoning (Moshkov et al., 2025), and AM-1.4M (Zhao et al., 2025). Small language models trained on these datasets consistently demonstrate remarkable reasoning capabilities. However, the ever-increasing length of CoT sequences burdens both training and inference, with recent studies (Sui et al., 2025) revealing that models often overthink, expending substantial resources on redundant reasoning steps. This inefficiency raises a critical question: "How can we preserve the accuracy gains of long CoT reasoning while eliminating its computational waste?" Existing solutions, such as TokenSkip's task-agnostic compression (Xia et al., 2025), show promise for short CoT sequences but fail to address the unique challenges of reinforcement-learning-generated long CoT data (spanning thousands of tokens). Moreover, they overlook contextual signals like questions and answers, which prior work (Tian et al., 2025) identifies as key to effective compression.

To bridge this gap, we propose Conditional Token Selection, a framework that dynamically prunes redundant reasoning tokens while preserving those essential for deriving answers. CTS leverages a reference model (RM) trained on high-quality reasoning corpora to score
token importance conditioned on critical context (e.g., questions and answers). As shown in Figure 1, by filtering CoT data at adjustable compression ratios and then fine-tuning the model with compressed data, we enable models to learn how to skip unnecessary reasoning tokens during inference.

Figure 1: Illustration of Conditional Token Selection. For long CoT datasets, CTS leverages a well-trained reference model to evaluate the importance of each thinking token conditional on the answer, removing less important tokens based on the compression ratio α. The model is then trained on this compressed data, enabling more efficient reasoning capabilities.

We conducted extensive experiments on models of various sizes, including Llama-3.1-8B-Instruct (Grattafiori et al., 2024) and the Qwen2.5-Instruct series (Qwen et al., 2025). The experimental results demonstrate the effectiveness of our method and confirm that there indeed exist many redundant thinking tokens in long CoT data. Specifically, when applied to Qwen2.5-14B-Instruct on the GPQA benchmark (Rein et al., 2024), CTS demonstrates remarkable efficiency gains: a modest 13% reduction in training tokens results in both a 9.1% accuracy improvement and 13.2% fewer reasoning tokens. When training token reduction is increased to 42%, the method incurs only a minor 5% accuracy decrease while achieving a dramatic 75.8% reduction in reasoning tokens, highlighting an advantageous accuracy-efficiency trade-off. On other benchmarks, such as MATH500 (Hendrycks et al., 2021) and AIME24 (Mathematical Association of America, 2024), as well as with other models, using the compressed training data obtained through CTS resulted in improved accuracy compared to the original data after training.

In summary, our key contributions are:

• We introduce the Conditional Token Selection framework, which assigns conditional importance scores to tokens within CoT trajectories.
This framework selectively preserves essential reasoning tokens, based on critical contextual information, at adjustable compression ratios. •We’ve developed a reference model, trained on high-quality reasoning data, that more ac- curately assesses the conditional importanceof tokens in reasoning CoTs. This model can be applied to other independent tasks, such as prompt compression. •We comprehensively compare token-based compression methods—both conditional and non-conditional—for long CoT data gener- ated by reinforcement learning, thereby vali- dating our token selection strategies. 2 Preliminaries In this section, we introduce some important pre- liminary concepts. 2.1 Token Compression based on Perplexity For a given context x={xi}n i=1, the self- information of a token xt(1≤t≤n) can be defined as: I(xt) =−log2P(xt|x0, x1, ..., x t−1)(1) Perplexity (PPL) is then defined based on self- information as: PPL(xt) = 2I(xt)(2) Perplexity is commonly used to measure a language model’s ability to predict the given context. Remov- ing tokens with lower perplexity (Li et al., 2023) has a relatively small impact on the model’s under- standing and prediction of the context. Then token compression with a flexible ratio can be achieved by sorting all tokens in the context by perplexity and subsequently removing lower-ranking tokens. 2.2 Conditional and Unconditional Compression To address information redundancy in prompts with long contexts and avoid resource waste during infer- ence, Li et al. (2023) proposed a method that uses | https://arxiv.org/abs/2505.17827v1 |
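The perplexity-based token compression of Section 2.1 can be sketched as follows; this is an illustrative toy in which hand-picked base-2 log-probabilities stand in for a real language model's predictions:

```python
def compress_by_perplexity(tokens, logprobs, ratio):
    """Rank tokens by perplexity PPL(x_t) = 2^{I(x_t)} = 2^{-log2 P(x_t|...)}
    and drop the lowest-ranked fraction `ratio` (Sec. 2.1).

    `logprobs` are base-2 log-probabilities; a real pipeline would obtain
    them from a language model's per-token predictions."""
    ppl = [2 ** (-lp) for lp in logprobs]
    n_keep = max(1, round(len(tokens) * (1 - ratio)))
    # Keep the n_keep highest-perplexity tokens, preserving original order.
    keep_idx = sorted(sorted(range(len(tokens)), key=lambda i: ppl[i],
                             reverse=True)[:n_keep])
    return [tokens[i] for i in keep_idx]

# Highly predictable filler tokens (high log-prob, low perplexity) get pruned:
tokens = ["So", ",", "the", "answer", "is", "42"]
logprobs = [-0.5, -0.1, -0.2, -4.0, -0.3, -6.0]
assert compress_by_perplexity(tokens, logprobs, ratio=0.5) == ["So", "answer", "42"]
```

CTS differs from this unconditional scheme in that the importance scores are computed conditioned on the question and answer, but the rank-and-drop mechanism at a chosen compression ratio is the same.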