text string | source string |
|---|---|
Table 15: Qualitative examples of meme stance and frame prediction, with model errors and interpretations generated by LLaVA.

| | Meme 1 | Meme 2 | Meme 3 |
|---|---|---|---|
| Features | responsib_micro, humortypes_pun | responsib_macro | humortypes_irony |
| Gold Stance | Convinced | Skeptical | Convinced |
| Predicted Stance | Neither | Convinced | Skeptical |
| Gold Frames | ALLOCATION, PROSPECT | ADEQUACY | HOAX, PROPRIETY, ADEQUACY |
| Predicted Frames | REAL, IMPACT | REAL, HOAX, ADEQUACY | REAL, IMPACT |

Model interpretation (Meme 1): This meme humorously highlights the common experience of getting unexpected looks from strangers during St. Patrick's Day celebrations. The image cleverly combines two related scenarios: wearing green for the holiday and taking public transportation. By showing muscular arms representing these two situations joining together, it suggests that both activities share a common outcome: drawing attention or being stared at. The meme taps into the relatable discomfort of standing out in a crowd, particularly during a holiday known for its vibrant green attire and large gatherings. Its simplicity and relatability make it an effective piece of internet humor that resonates with many people's St. Patrick's Day experiences.

Model interpretation (Meme 2): The meme features a man holding a dollar bill, with a caption that reads, "Is there a tax I can pay to stop COVID-19... Or does that only work for climate change?" This meme is a humorous take on the perceived difference in public opinion and government response to the COVID-19 pandemic and climate change. The implication is that people are more willing to pay taxes to address climate change, but not to combat the COVID-19 pandemic. The meme plays on the idea that some individuals prioritize environmental issues over public health crises, and it uses the man's expression to convey a sense of frustration or disbelief at this perceived attitude. The humor arises from the juxtaposition of the man's action (offering to pay a tax) with the implication that such an action would be more effective for climate change than for COVID-19.

Model interpretation (Meme 3): The meme features a series of images of a woman with a surprised or shocked expression, holding a bottle of "ZERO" with the text "Pimples? Zero" and "Blackheads? Zero" above each image. The humor comes from the exaggerated and unexpected response to the common skin issues of pimples and blackheads. The woman's expression and the repeated use of the word "zero" suggest that the product being advertised is highly effective in eliminating these skin problems. The meme plays on the idea that the product is so powerful that it can completely eradicate these issues, which is an exaggerated and humorous claim. The meme is likely intended to be a parody of skincare product advertising, using hyperbole and visual cues to create a comedic effect. | https://arxiv.org/abs/2505.16592v2 |
From Generic Empathy to Personalized Emotional Support: A Self-Evolution Framework for User Preference Alignment

Jing Ye^{1,2}, Lu Xiang^{1,2,*}, Yaping Zhang^{1,2}, Chengqing Zong^{1,2}
^1 State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, CAS, Beijing, China
^2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
yejing2022@ia.ac.cn; {lu.xiang, yaping.zhang, cqzong}@nlpr.ia.ac.cn
* Corresponding author.

Abstract
Effective emotional support hinges on understanding users' emotions and needs to provide meaningful comfort during multi-turn interactions. Large Language Models (LLMs) show great potential for expressing empathy; however, they often deliver generic, one-size-fits-all responses that fail to address users' specific needs. To tackle this issue, we propose a self-evolution framework designed to help LLMs improve their responses to better align with users' implicit preferences concerning user profiles (personalities), emotional states, and specific situations. Our framework consists of two distinct phases: (1) Emotional Support Experience Acquisition, where LLMs are fine-tuned on limited emotional support conversation data to provide basic support, and (2) Self-Improvement for Personalized Emotional Support, where LLMs leverage self-reflection and self-refinement to generate personalized responses. Through iterative direct preference optimization between the pre- and post-refined responses, our model generates responses that reflect a better understanding of the user's implicit preferences. Extensive experiments and evaluations demonstrate that our method significantly enhances the model's performance in emotional support, reducing unhelpful responses and minimizing discrepancies between user preferences and model outputs.

1 Introduction
Emotional support conversation (ESC) systems require a deep understanding of users' emotions and need to provide meaningful comfort and assistance during multi-turn interactions (Peng et al., 2022; Rains et al., 2020), which are vital in practical applications such as mental health care, emotional companionship, and customer service. Given that each user has unique emotional needs and experiences (Rogers, 2013), delivering personalized and | https://arxiv.org/abs/2505.16610v1 |
contextually appropriate emotional support is essential for ensuring practical assistance (Campos et al., 2018; Cheng et al., 2023).

[Figure 1 shows a dialogue history in which a user grieves a friend's sudden death ("I've been feeling really down since my best friend passed away in an accident." ... "Thank you for understanding. It's just hard to accept that he is no longer with me. The everyday moments we shared, like having meals together and going to school, are still vivid in my mind."), a user state reflection ("She is shocked and deeply saddened by her close classmate's unexpected death."), a feedback suggestion ("A specific response, acknowledging the sudden news and suggesting a way to remember their friend, would be more meaningful."), and three candidate responses: a direct-prompt response labeled verbose and formulaic ("I'm really sorry you're going through this... Here are some suggestions that might help: 1. Allow yourself to grieve... 2. Create new routines... 3. Focus on your own healing... Remember that the love and the connection you shared doesn't simply vanish."); a task-SFT response labeled repetitive and superficial ("I'm sorry to hear that. It sounds like you're really missing the deep connection you had with your best friend..."); and a reflection-guided response ("Losing a close friend suddenly in an accident is really hard to accept. 3 years ago, my childhood friend also passed away in a car accident on a rainy day. I often leave messages on his old blog posts. Maybe you could do something similar: share your daily life and thoughts with him, as if he just went abroad to study.").]
Figure 1: Example responses. Direct prompting of LLaMA results in verbose and formulaic outputs. Task-specific SFT is empathetic but often lacks depth and variety, giving it a perceived "AI-like" quality. In contrast, self-reflection on user preferences provides a pathway to more specific and engaging responses.

Despite the promising potential of LLMs for generating empathetic responses (Touvron et al., 2023; Yang et al., 2024; Achiam et al., 2023), they often struggle to provide diverse and contextually appropriate support (Wang et al., 2024a). As illustrated in Figure 1, directly prompting LLMs often results in superficial empathy, verbosity, and formulaic structures. A simple yet effective approach is supervised fine-tuning (SFT) of LLMs on ESC corpora (Zheng et al., 2024b, 2023; Qiu et al., 2023; Zhang et al., 2024). However, SFT relies on substantial, high-quality ESC data, which is often scarce and difficult to acquire. Moreover, over-reliance on SFT can lead to repetitive responses that express empathy overtly but lack depth and variety (Irvine et al., 2023). As demonstrated in Figures 1 and 6, SFT models can fall into predictable patterns, frequently using phrases like "It sounds like..." or "I'm sorry to hear that...".

Recent insights highlight that LLMs can self-improve their performance through self-reflection and self-refinement guided by human-designed principles (Lu et al., 2024b; Madaan et al., 2023; Ye et al., 2023; Yasunaga et al., 2024). Inspired by these findings, we pose the intriguing question: Can LLMs be taught to consider what kind of responses are genuinely needed by users, and can this reflective process lead to refined and more personalized responses?

This work seeks to bridge the gap between generic empathetic responses and truly user-centered personalized emotional support by incorporating self-reflection and self-refinement into automated systems. Effective ES systems require an iterative approach that continuously reflects on the ongoing dialogue to refresh user understanding and refine responses, ultimately delivering targeted empathy and tailored solutions. The empirical evidence presented in Figure 1 and Table 3 demonstrates that instructing LLMs to summarize user situations, infer emotions and causes, and choose appropriate support strategies leads to a significant improvement in response quality.

To this end, we introduce a self-evolution framework for user preference alignment. As depicted in Figure 2, our self-evolution framework comprises two steps: (1) Emotional Support Experience Acquisition: we first fine-tune LLMs on limited ESC data, enabling them to provide essential emotional support. (2) Self-Improvement for Personalized Emotional Support: subsequently, we leverage LLMs' inherent self-reflection and self-refinement capabilities to generate responses that consider the implicit | https://arxiv.org/abs/2505.16610v1 |
user preferences, including the user's profile, situation, and emotions. The pre- and post-refined responses are treated as preference data. Through direct preference optimization, the model learns to generate responses that reflect an understanding of the user's implicit preferences during interactions, thereby eliminating the need for explicit reflection and refinement steps. Experimental results and extensive human evaluations indicate that our generated responses are more diverse and better aligned with user input. These improved responses effectively reduce ineffective empathy and preference misalignment, facilitating more productive multi-turn interactions.

Our main contributions can be summarized as follows:
- We reveal the limitations of current emotional support chatbots, which are notably deficient in understanding users' implicit preferences, resulting in repetitive and superficial expressions of empathy.
- We present a simple yet effective self-evolution framework for personalized emotional support without explicit reflection and refinement steps.
- Experimental results and comprehensive human evaluations demonstrate that our method effectively minimizes unhelpful responses and discrepancies in personalized preferences.

2 Method
Inspired by recent insights highlighting that LLMs can self-improve through language feedback (Lu et al., 2024b; Madaan et al., 2023; Ye et al., 2023; Yasunaga et al., 2024), we present a self-evolution framework designed to enable LLMs to provide personalized emotional support. This framework operates in two phases: Emotional Support Experience Acquisition (Section 2.1) and Self-Improvement for Personalized Emotional Support (Section 2.2).

2.1 Emotional Support Experience Acquisition
2.1.1 Task Definition
Emotional support (ES) involves understanding the user's situation and choosing appropriate supportive strategies to alleviate the user's distress. Formally, we denote the ES model as $\mathcal{M}$ and represent the current dialogue context as $C_n = (q_1, r_1, \ldots, q_{i-1}, r_{i-1}, \ldots, q_n)$, where $q_i$ and $r_i$ correspond to the $i$-th utterance from the user and the model, respectively. Given the task and strategy description prompt $P_{task}$, the goal of the ES model is to generate an emotionally supportive response $r_n$:

$$r_n = \mathcal{M}(P_{task} \,\|\, C_n) \tag{1}$$

[Figure 2 diagram omitted; its components include emotional support conversations and instruction-following data for task SFT, emotional support seed queries (init t = 0), self-reflection on the user's profile, situation, and emotion, the original direct response versus the self-refined personalized response, and DPO via preference data across model versions.]
Figure 2: Overview of our self-evolution framework, which enhances personalized emotional support capabilities through a two-stage learning phase. (1) Emotional Support Experience Acquisition: we fine-tune LLMs on minimal human-annotated ESC data, equipping them with basic emotional support capability. (2) Self-Improvement for Personalized Emotional Support: we utilize the LLMs' self-reflection abilities to tailor responses to the user's personality, situation, and emotions. The pre- and post-refined responses form natural synthetic preference data. 
The process involves iterative preference optimization, yielding responses that align with the user's implicit preferences and eliminating the need for explicit reflection steps.

2.1.2 Task Learning
We equip the model with emotional support capability by fine-tuning the backbone on the manually annotated ESConv dataset (Liu et al., 2021). To preserve its general abilities, we | https://arxiv.org/abs/2505.16610v1 |
employ Low-Rank Adaptation (Hu et al., 2021), fine-tuning only the LoRA adapter parameters. We further incorporate a replay mechanism that mixes in instruction-following data (Wang et al., 2024c). The model is trained using the SFT loss:

$$\mathcal{L}_{SFT} = -\log P(y \mid x, P; \theta) \tag{2}$$

where $x$ and $y$ represent the input and output of the model, respectively, and $P$ denotes the task description or instructions. The resulting fine-tuned model is denoted as $\mathcal{M}_0$.

2.2 Self-Improvement for Personalized Emotional Support
While SFT improves empathetic response generation, it often produces superficial outputs, failing to capture the nuanced user preferences that are crucial for effective emotional support. To address this limitation and minimize unhelpful responses, we introduce a self-improvement method based on iterative direct preference optimization (DPO) (Rafailov et al., 2023). Guided by human-designed principles, the model reflects on the user's personality, situation, and emotion to refine its responses. These pre- and post-refined responses naturally serve as rejected and chosen candidates, respectively (Dong et al., 2024). Through direct preference optimization, the model learns to generate responses that reflect an understanding of the user's implicit preferences during interactions, thereby eliminating the need for explicit reflection and refinement steps.

2.2.1 Synthetic Preference Data Generation
Rejected Response Generation. Constructing high-quality preference data pairs requires a diverse set of user queries. While synthetic ESC datasets may not produce emotional support responses comparable to human quality, they offer a valuable source of varied queries (Zheng et al., 2024b, 2023). We extract the dialogue context $C_n$ from these synthetic datasets, where $n$ is the turn index, and employ $\mathcal{M}_t$ to generate responses:

$$y_n^t = \mathcal{M}_t(C_n), \quad \text{with } t = 0 \text{ initially} \tag{3}$$

These unrestricted responses are treated as rejected responses.

Self-Reflection on Implicit User Preferences. Research indicates that LLMs possess strong contextual inference capabilities (Yang et al., 2024; Dubey et al., 2024), enabling them to infer user emotions, implicit profiles, and even personality from ongoing conversations. Given the dialogue history $C_n$ and human-designed principles $I$, the model $\mathcal{M}_t$ is tasked with summarizing the user's profile $u_n$ and current emotional state $s_n$:

$$(u_n, s_n) = \mathcal{M}_t(I \,\|\, C_n) \tag{4}$$

$u_n$ and $s_n$ are continuously updated throughout the conversation, enabling the model to refine its understanding of the user.

Self-Refinement for Personalized Responses. Responses generated solely from the dialogue history often fail to capture the user's implicit preferences. Drawing on insights from psychological research, user preferences can be decomposed into two key dimensions: long-term traits, encapsulated by the user profile (Fleeson, 2001), and context-sensitive emotional needs (Tamir, 2015). To better understand and adapt to these implicit preferences, we again leverage the strong contextual reasoning of LLMs (Yang et al., 2024; Dubey et al., 2024). Given the dialogue history $C_n$, the principles $I$, the inferred profile $u_n$ and emotional state $s_n$, and the initial response $y_n^t$, the model $\mathcal{M}_t$ refines its response:

$$\tilde{y}_n^t = \mathcal{M}_t(I \,\|\, C_n, u_n, s_n, y_n^t) \tag{5}$$

The pre- and post-refined responses form a preference pair $(y_n^t, \tilde{y}_n^t)$, serving as the rejected and chosen candidates, respectively. 
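The generation loop described by Eqs. (3)-(5) is straightforward to operationalize. The sketch below is illustrative only: the `model` callable and both prompt strings are hypothetical stand-ins (the paper's actual self-reflection and self-refinement prompts are given in its Figure E.1), not the authors' released code.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    context: str   # dialogue history C_n
    rejected: str  # direct response y_n^t (Eq. 3)
    chosen: str    # self-refined response ~y_n^t (Eq. 5)

def build_pair(model, context: str, principles: str) -> PreferencePair:
    """`model` is any prompt -> text callable (e.g., an LLM generation wrapper)."""
    # Eq. 3: unrestricted direct response, used as the rejected candidate.
    rejected = model(context)

    # Eq. 4: self-reflection infers the user profile u_n and emotional
    # state s_n from the ongoing dialogue (hypothetical prompt wording).
    reflection = model(
        f"{principles}\n\nDialogue history:\n{context}\n\n"
        "Summarize the user's profile and current emotional state."
    )

    # Eq. 5: self-refinement conditions on the reflection and the draft
    # to produce the personalized response, used as the chosen candidate.
    chosen = model(
        f"{principles}\n\nDialogue history:\n{context}\n\n"
        f"User summary:\n{reflection}\n\nDraft response:\n{rejected}\n\n"
        "Rewrite the draft so it better addresses this user's implicit preferences."
    )
    return PreferencePair(context, rejected, chosen)
```

Each returned pair would then pass through the quality filters described in Appendix A before entering DPO training.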
Inevitably, some low-quality data is generated during this process; the data filtering process is detailed in Appendix | https://arxiv.org/abs/2505.16610v1 |
A, and the prompts for self-reflection and self-refinement are shown in Figure E.1.

2.2.2 Preference Optimization
The synthetic preference data generation process naturally facilitates iterative self-improvement. In each iteration, we employ DPO (Rafailov et al., 2023) for training:

$$\mathcal{L}_{DPO} = -\log \sigma\!\left(\beta \log \frac{P(\tilde{y}_n^t \mid C_n; \theta)}{P(\tilde{y}_n^t \mid C_n; \theta')} - \beta \log \frac{P(y_n^t \mid C_n; \theta)}{P(y_n^t \mid C_n; \theta')}\right) \tag{6}$$

To mitigate the instability of DPO training, we incorporate an SFT loss on the chosen responses during optimization:

$$\mathcal{L}_{SFT} = -\log P(\tilde{y}_n^t \mid C_n; \theta) \tag{7}$$

The final optimization loss is:

$$\mathcal{L} = \mathcal{L}_{DPO} + \gamma \cdot \mathcal{L}_{SFT} \tag{8}$$

where $\beta$ and $\gamma$ are set to 0.1 and 1, respectively.

| | ExTES | ESConv | ServeForEmo |
|---|---|---|---|
| # Sessions | 11,167 | 1,295 | 3,749 |
| Avg. session length | 16.68 | 22.58 | 15.91 |
| Avg. utterance length | 29.59 | 21.17 | 18.45 |
| Avg. seeker utterance length | 22.63 | 19.90 | 15.39 |
| Avg. supporter utterance length | 36.55 | 22.44 | 21.51 |

Table 1: Statistics of the emotional support datasets. Conversations in these datasets typically span seven turns, with an average utterance length of approximately 20 words.

3 Experiments
3.1 Dataset
We collect three ESC datasets: the manually annotated ESConv dataset (Liu et al., 2021), and the synthetically generated ExTES (Zheng et al., 2024b) and ServeForEmo (Ye et al., 2024). Detailed statistics are available in Table 1. ESConv is split into training and testing sets with a 9:1 ratio. During the Emotional Support Experience Acquisition stage, we use the ESConv training set along with 500 instruction-following samples from Alpaca (Taori et al., 2023). We combine ExTES and ServeForEmo as seed data for synthetic preference data generation.

3.2 Implementation Details
This study employs three frequently used LLMs as backbones: LLaMA-3-8B-Instruct (https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), Qwen2-7B-Instruct (https://huggingface.co/Qwen/Qwen2-7B-Instruct), and Mistral-7B-Instruct-v0.3 (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). The LoRA technique (Hu et al., 2021) is employed across all experiments, inserting a LoRA adapter with rank 8 and alpha 16 into each linear module. For optimization, we utilize the AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of 5×10⁻⁶ and a linear warm-up during the initial 1% of the training steps. The batch size is set to 4 per device, with gradient accumulation every two steps, over two epochs. Early stopping with a patience threshold of 3 evaluation steps is implemented to mitigate over-fitting. For generation and evaluation, we set the decoding parameters to a temperature of 0.9, top-p of 0.8, top-k of 50, and a repetition penalty of 1.2. All experiments are conducted on one NVIDIA L40 40GB GPU. The implementation framework is LLaMA-Factory (Zheng et al., 2024a).

3.3 Baselines
To evaluate the effectiveness of our approach, we conducted a comparative evaluation across three categories under identical experimental settings:
Vanilla: instruction-based backbone models provided with ESC task prompts. These serve as baselines assessing inherent capabilities without task-specific fine-tuning.
SFT: LLMs fine-tuned on two dataset types: the ESConv dataset (SFT-ESConv) and synthetic ESC datasets including ExTES and ServeForEmo (SFT-SynESC).
Self-Evolution with Preference Learning: models at different iterations of our self-evolution framework:
- M0: the initial fine-tuned ES model.
- Mt: models initialized from M_{t-1} and optimized using synthetic preference data generated by M_{t-1}. 
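To make Eqs. (6)-(8) concrete, here is a minimal PyTorch sketch of the combined objective, assuming sequence-level log-probabilities of the chosen and rejected responses under the policy (θ) and the frozen reference model (θ′) have already been computed. It illustrates the loss only; it is not LLaMA-Factory's actual implementation.

```python
import torch
import torch.nn.functional as F

def dpo_plus_sft_loss(
    policy_chosen_logps: torch.Tensor,    # log P(chosen | C_n; theta)
    policy_rejected_logps: torch.Tensor,  # log P(rejected | C_n; theta)
    ref_chosen_logps: torch.Tensor,       # log P(chosen | C_n; theta')
    ref_rejected_logps: torch.Tensor,     # log P(rejected | C_n; theta')
    beta: float = 0.1,
    gamma: float = 1.0,
) -> torch.Tensor:
    # Eq. 6: DPO term on the implicit reward margin between chosen and
    # rejected responses, measured relative to the reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    l_dpo = -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

    # Eq. 7: auxiliary SFT (negative log-likelihood) term on the chosen
    # responses, added to stabilize DPO training.
    l_sft = -policy_chosen_logps.mean()

    # Eq. 8: final objective with the paper's beta = 0.1, gamma = 1.
    return l_dpo + gamma * l_sft
```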
3.4 Evaluation Details
3.4.1 Evaluation Settings
Our evaluation comprises objective and subjective assessments. The | https://arxiv.org/abs/2505.16610v1 |
objective evaluation measures the similarity between model-generated and manually annotated responses on the ESConv test set. Recognizing the limitations of text-overlap metrics for the open-ended ES task, which can penalize informative and creative responses, we prioritize subjective evaluation to better reflect real-world user experience. This subjective assessment incorporates interactive pointwise and pairwise human evaluations. Appendix F illustrates the evaluation process and guidelines.

Interactive Pointwise Evaluation: To mitigate evaluation bias, we employ an interactive pointwise evaluation in which dialogue sessions are randomly assigned to different models. Participants, consisting of 50 undergraduate students with diverse backgrounds, rate their satisfaction with the assigned ES agent on a 5-point Likert scale (Likert, 1932) across predefined dimensions; higher scores indicate better performance. The final score for each model is calculated by averaging the ratings across all participants. Each dialogue includes at least eight turns. LLM-as-a-judge pointwise evaluations are also provided in Appendix C.2.

Interactive Pairwise Evaluation: Four graduate students engage in dialogues with the models, with each dialogue lasting at least ten turns. At each turn, two models (A and B) generate responses simultaneously based on the user input. The user then selects "A win", "B win", or "tie". The winning response is appended to the dialogue history for subsequent turns (Zhou et al., 2024). In the event of a tie, the user can choose to continue the conversation with either response.

3.4.2 Evaluation Metrics
Automatic Evaluation: We employ five established automatic evaluation metrics. BLEU-n (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and BERTScore (Zhang et al., 2020) assess similarity with the human-written references. For evaluating diversity, Distinct-n (Li et al., 2016) is utilized.

Alignment with Human Preference: N-gram-based evaluation metrics correlate poorly with human judgments due to the diversity of valid responses in ESC. Following previous studies (Liu et al., 2021; Zheng et al., 2023), we focus on seven primary aspects for evaluating alignment with human preference: Coherence, Understanding, Empathy (Ma et al., 2020), Informativeness, Helpfulness, Engagement (Ghazarian et al., 2019), and Overall Quality. Detailed evaluation descriptions are provided in Appendix D.

4 Experimental Results
4.1 Objective Evaluation
Table 2 presents the objective evaluation results on the ESConv test set. We evaluate all models at the utterance level, with ground-truth dialogue context. From the results, we find:

Our model outperforms baseline models across most dimensions. The results demonstrate that our model significantly improves upon baseline models in terms of BLEU score and Distinct-n, indicating greater diversity in the generated responses. This improvement directly addresses the issue of repetitive responses and suggests that our self-evolution framework promotes the generation of more varied and contextually appropriate support, a key requirement for effective emotional support conversations. 
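Of the automatic metrics above, Distinct-n is the one most directly tied to the diversity claim, and it is simple enough to sketch. The implementation below follows the usual Li et al. (2016) definition (unique n-grams divided by total n-grams over a set of generated responses); the whitespace tokenization is a simplifying assumption, since the paper does not specify its tokenizer.

```python
from typing import List

def distinct_n(responses: List[str], n: int) -> float:
    """Distinct-n: ratio of unique n-grams to total n-grams
    across all generated responses (higher = more diverse)."""
    unique, total = set(), 0
    for text in responses:
        tokens = text.split()  # simplification; tokenizer unspecified in the paper
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Example: shared phrasing across responses lowers the score.
print(distinct_n(["I am sorry to hear that", "I am sorry you feel sad"], 2))
```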
| Model | BLEU-2 | BLEU-3 | ROUGE-L | METEOR | BERTScore | Distinct-2 | Distinct-3 |
|---|---|---|---|---|---|---|---|
| LLaMA-3-8B-Instruct: Vanilla | 11.29 | 8.04 | 10.43 | 16.14 | 84.27 | 72.83 | 85.35 |
| LLaMA-3-8B-Instruct: SFT-ESConv | 18.75 | 13.27 | 17.12 | 13.47 | 86.37 | 91.30 | 94.90 |
| LLaMA-3-8B-Instruct: SFT-SynESC | 18.35 | 12.85 | 16.52 | 13.17 | 86.22 | 91.23 | 94.97 |
| LLaMA-3-8B-Instruct: M0 | 18.38 | 12.95 | 16.72 | 13.37 | 86.28 | 90.84 | 94.72 |
| LLaMA-3-8B-Instruct: M2 | 20.06 | 13.63 | 15.50 | 15.77 | 86.38 | 91.43 | 96.11 |
| Qwen2-7B-Instruct: Vanilla | 9.56 | 6.85 | 9.13 | 14.78 | 83.32 | 68.74 | 83.55 |
| Qwen2-7B-Instruct: SFT-ESConv | 19.24 | 13.54 | 17.19 | 13.78 | 86.33 | 90.66 | 94.71 |
| Qwen2-7B-Instruct: SFT-SynESC | 18.55 | 12.98 | 16.72 | 13.82 | 86.24 | 90.84 | 94.86 |
| Qwen2-7B-Instruct: M0 | 19.18 | 13.56 | 17.00 | 13.82 | 86.27 | 90.68 | 94.94 |
| Qwen2-7B-Instruct: M2 | 20.02 | 13.80 | 15.91 | 15.52 | 86.18 | 94.21 | 97.07 |
| Mistral-7B-Instruct-v0.3: Vanilla | 15.09 | 10.60 | 12.56 | 15.95 | 84.95 | 77.88 | 88.46 |
| Mistral-7B-Instruct-v0.3: SFT-ESConv | 17.49 | 12.16 | 14.18 | 13.59 | 85.71 | 91.17 | 94.97 |
| Mistral-7B-Instruct-v0.3: SFT-SynESC | 18.87 | 13.28 | 16.84 | 13.46 | 85.24 | 90.87 | 94.77 |
| Mistral-7B-Instruct-v0.3: M0 | 19.44 | 13.81 | 16.77 | 14.08 | 86.35 | 91.20 | 94.92 |
| Mistral-7B-Instruct-v0.3: M2 | 20.25 | 13.99 | 16.53 | 15.18 | 86.26 | 92.65 | 96.07 |

Table 2: Overall objective evaluation results on the ESConv benchmark (column groups in the original: Coherence & Consistency for BLEU-2/3 and ROUGE-L, Fluency for METEOR, Semantic for BERTScore, Diversity for Distinct-2/3). All responses are evaluated at the utterance level, with ground-truth dialogue context. Our models (M2) significantly improve on the base models (M0) and achieve the best performance across most dimensions. | https://arxiv.org/abs/2505.16610v1 |
The iterative self-evolution process drives continuous improvement. The progression from M0 to M2 demonstrates the effectiveness of our self-evolution framework. Across all backbones, M2 shows clear improvements over M0 in coherence & consistency and diversity. For instance, on the LLaMA backbone, BLEU-2 improves from 18.38 to 20.06, and Distinct-3 increases from 94.72 to 96.11.

Our framework demonstrates strong generalization across backbones. The consistent performance gains of M2 across diverse backbones (LLaMA, Qwen, and Mistral) highlight the robustness and generalization of our approach. This indicates that the improvements stem from the self-evolution training, not from specific architectural biases.

4.2 Subjective Evaluation
To assess the effectiveness of our models from a user-centric perspective, we conduct a comprehensive interactive human evaluation of M0, M1, and M2 with LLaMA-3-8B-Instruct as the backbone. LLM evaluation results are reported in Appendix C.2.

Interactive Pointwise Evaluation: Figure 3 demonstrates the consistent performance gains achieved through iterative self-evolution. While the SFT-based M0 already exhibits strong performance in coherence and empathy, subsequent iterations (M1 and M2) show consistent gains across all dimensions, including engagement, informativeness, helpfulness, and understanding. This shows that self-reflection on user contexts and situations improves the model's ability to address implicit preferences, enhancing user satisfaction.

[Figure 3 plot omitted; axes: Coherence, Understanding, Empathy, Informativeness, Helpfulness, Engagement, Overall, rated 1-5 for M0, M1, M2.]
Figure 3: Interactive pointwise human evaluation results. The results demonstrate that our self-evolution framework significantly enhances user experience, with M1 and M2 showing notable improvements in engagement, helpfulness, and informativeness.

Interactive Pairwise Evaluation: Figure 4 shows that both M1 and M2 achieve significantly higher win rates than M0 in human interactive evaluation.

| Comparison | Win | Tie | Lose |
|---|---|---|---|
| M2 vs M0 | 54.1% | 16.2% | 29.7% |
| M1 vs M0 | 45.9% | 24.3% | 29.7% |
| M2 vs M1 | 32.4% | 43.2% | 24.3% |

Figure 4: Interactive pairwise human evaluation results obtained using LLaMA-3-8B-Instruct as the backbone model; in each 'A vs B' comparison, the percentages denote 'A win', 'tie', and 'B win'. Notably, M2 and M1 excel over M0, suggesting the effectiveness of implicit user preference learning.

Following the evaluation settings described in Section 3.4.1, responses chosen for continued dialogue are considered "wins". This higher | https://arxiv.org/abs/2505.16610v1 |
win rate for M1 and M2 indicates a clear user preference for responses that are perceived as more personalized and engaging, moving beyond the formulaic and superficial expressions | https://arxiv.org/abs/2505.16610v1 |
of empathy characteristic of M0. This also confirms the effectiveness of using pre- and post-refined responses as preference data for learning implicit user preferences.

5 Analysis and Discussion
This section addresses the following key questions:
Q1: Does the model exhibit self-reflection and self-refinement capabilities to learn the user's implicit preferences from the ongoing dialogue?
Q2: Does self-refinement lead to better emotional support responses?
Q3: What is the advantage of the synthetic preference data in our framework?

5.1 (RQ1) Impact of Self-Reflection

| Model | BLEU-2 | BLEU-3 | ROUGE-L | Distinct-3 |
|---|---|---|---|---|
| LLaMA | 11.29 | 8.04 | 10.43 | 85.35 |
| w/ strategy guidelines | 14.80 | 10.27 | 12.52 | 90.36 |
| w/ self-reflection | 15.40 | 10.62 | 12.78 | 91.66 |

Table 3: Results of LLaMA-3-8B-Instruct on ESConv under different prompts (see Appendix E.1). Proper guidance helps the model generate responses that are more closely aligned with the human-annotated ones.

| Model | GSM8K | IFEval | TruthfulQA | OpenbookQA | MMLU-Pro | Avg. |
|---|---|---|---|---|---|---|
| LLaMA | 79.08 | 60.91 | 51.66 | 43.20 | 39.60 | 54.89 |
| SFT-ESConv | 71.87 | 54.79 | 48.67 | 43.20 | 36.18 | 50.94 |
| M0 | 73.92 | 58.03 | 52.72 | 45.40 | 37.24 | 53.46 |
| M1 | 74.83 | 55.52 | 49.25 | 44.40 | 37.68 | 52.34 |
| M2 | 73.54 | 55.52 | 49.57 | 44.20 | 37.55 | 52.08 |

Table 4: LLM benchmark results for different versions of LLaMA-3-8B-Instruct.

Our framework leverages human-guided self-reflection on user preferences to create positive and negative training data pairs. This enables the model to better align its responses with user preferences, obviating the need for complex prompt engineering. To assess whether the model can better discern users' implicit preferences in ongoing dialogues through self-reflection, we compared LLM performance under two prompt settings: (1) w/ strategy guidelines: the system prompt directs the model to use various ES strategies; (2) w/ self-reflection: the model is prompted to understand and summarize the user's situation before choosing an appropriate response strategy. Table 3 shows that both methods outperform vanilla LLaMA, demonstrating that appropriate guidance facilitates the generation of responses more closely aligned with the human annotations. Additionally, to ensure that alignment does not diminish the model's self-reflection and self-refinement abilities, we evaluate its general capabilities using LLM benchmarks run with lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness). The results in Table 4 demonstrate that the model retains strong reasoning and instruction-following skills after alignment, thanks to the LoRA adaptation.

[Figure 5 plots omitted.]
Figure 5: (a) Distribution of response relevance to user statements in the dialogue history; the higher relevance of chosen responses indicates that self-reflection on the user's situation and implicit preferences improves response quality. (b) Similarity distribution between chosen and rejected responses across different iterations.

5.2 (RQ2) Preference Data Analysis
By considering user situations and implicit preferences, the self-reflection mechanism significantly improves response relevance to the user. Figure 5a shows that chosen responses exhibit higher user relevance than rejected responses. This suggests that reflecting on the user's context leads to responses better aligned with user needs.

Iterative preference optimization progressively aligns the model with preferred responses, enhancing its ability to generate user-centered content. In each iteration t, synthetic preference data is generated using the model from the previous iteration, M_{t-1}. 
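Read end to end, Sections 2.2 and 5.2 describe a simple outer loop: generate pairs with the current model, filter them, run DPO, repeat. A high-level sketch follows; `build_pair`, `filter_pairs`, and `dpo_train` are passed in as callables because the paper does not release this driver code, so all three names are illustrative assumptions.

```python
def self_evolve(model, seed_dialogues, principles,
                build_pair, filter_pairs, dpo_train, iterations=2):
    """Iterative self-evolution: M_0 -> M_1 -> ... -> M_T.

    build_pair(model, context, principles) -> preference pair
        (e.g., the Section 2.2.1 sketch)
    filter_pairs(pairs) -> quality-controlled pairs (cf. Appendix A)
    dpo_train(model, pairs) -> model optimized with L_DPO + gamma * L_SFT
    """
    for t in range(1, iterations + 1):
        # Eqs. 3-5: synthesize preference pairs with the model from
        # the previous iteration, M_{t-1}.
        pairs = [build_pair(model, ctx, principles) for ctx in seed_dialogues]
        pairs = filter_pairs(pairs)
        # Eqs. 6-8: preference optimization yields M_t.
        model = dpo_train(model, pairs)
    return model
```

With `iterations=2` this reproduces the M0 → M1 → M2 progression evaluated above; Table 5 and Figure 5b suggest the marginal gain shrinks as chosen and rejected responses grow more similar.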
Figure 5b illustrates the increasing correlation between chosen and | https://arxiv.org/abs/2505.16610v1 |
rejected responses across iterations. This trend indicates that each iteration effectively captures valuable information, improving the model's direct output and obviating the need for explicit reflection and refinement steps.

5.3 (RQ3) Ablation Study on Preference Data

| Pair | Model | B-2 | ΔB-2 | B-3 | ΔB-3 | R-L | ΔR-L | D-2 | ΔD-2 |
|---|---|---|---|---|---|---|---|---|---|
| — | M0 | 11.29 | — | 8.04 | — | 10.43 | — | 72.83 | — |
| {HM, P} | M1 | 14.93 | +24.36% | 10.51 | +30.79% | 15.77 | +51.16% | 93.71 | +28.67% |
| {HM, P} | M2 | 16.99 | +13.80% | 11.94 | +13.61% | 15.89 | +0.76% | 92.75 | −1.02% |
| {HL, P} | M1 | 16.37 | +44.96% | 11.53 | +43.48% | 16.26 | +55.86% | 93.45 | +28.31% |
| {HL, P} | M2 | 17.40 | +6.29% | 11.94 | +3.56% | 13.73 | −15.56% | 84.12 | −9.98% |
| {SR, P} | M1 | 15.22 | +34.78% | 9.76 | +21.46% | 12.37 | +18.57% | 95.67 | +31.36% |
| {SR, P} | M2 | 16.69 | +9.66% | 11.36 | +16.39% | 12.99 | +5.01% | 82.96 | −13.29% |

Table 5: Comparison of different preference data pairs. 'HM' indicates a human-labeled response, 'HL' a GPT-4o-generated response, and 'SR' a self-refined response; 'P' represents the model's initial, unrefined output (the rejected response). M0 refers to LLaMA-3-8B-Instruct. Percentages give the change relative to the previous iteration.

Table 5 presents a comparative analysis of the different preference data pairs, where the rejected responses (P) are the model's initial outputs, while the chosen responses comprise human-annotated (HM), GPT-4-generated (HL), and self-refined (SR) alternatives. The results show: (1) Both human/human-level and self-refined chosen responses enhance emotional support capabilities. (2) While SR initially produces more modest gains than HM/HL, its performance consistently improves across successive refinement iterations. Conversely, the fixed nature of the HM/HL chosen responses limits further learning and potential improvement. As demonstrated in Section 5.2, the margin between chosen and rejected responses diminishes with each model iteration, leading to reduced gains from preference alignment. This observation underscores that self-refinement enables continuous self-improvement through dynamically generated preference data, making it a particularly cost-effective and promising approach.

6 Related Work
Emotional Support Conversation. Emotional support assists emotionally distressed users by understanding their emotions, offering comfort, and providing practical support (Liu et al., 2021). A common approach is SFT, which minimizes the negative log-likelihood of gold-standard responses. However, SFT relies on high-quality, manually created datasets, which are expensive and difficult to scale. Recent methods mitigate this by using advanced LLMs to augment ESC data (Zheng et al., 2023, 2024b; Qiu et al., 2023), aiming to distill the ES capabilities of advanced LLMs. Yet they remain constrained by the inherent limitations of LLMs and often struggle with issues related to data diversity and quality. Reinforcement learning (RL) offers a promising avenue for further enhancing LLMs' ES capabilities (Li et al., 2024). For example, Zhou et al. (2023) focus on eliciting positive emotions through multi-turn interactions, and Wang et al. (2024b) use an LLM-as-a-judge to evaluate aspects like empathy, coherence, and efficiency, with the feedback helping to generate positive and negative examples for contrastive learning. However, these approaches often overlook users' diverse preferences for effective ES.

LLM Alignment. Aligning LLMs with human preferences is crucial for practical applications (Christiano et al., 2017; Lee et al., 2024). Although RLHF is effective, it suffers from training instability and high memory costs (Ouyang et al., 2022). 
DPO offers a more stable alternative by directly optimizing LLMs using preference data | https://arxiv.org/abs/2505.16610v1 |
consisting of prompt-response pairs, where one response is preferred over the other (Rafailov et al., 2023). However, obtaining high-quality human-generated preference data is resource-intensive (Dong et al., 2024; Cui et al., 2023). To mitigate this, some studies utilize synthetic preference data generated through varying prompts (Liu et al., 2024) or by employing LLMs as judges to sample diverse responses (Yuan et al., 2024). In this work, we leverage LLMs' self-reflection and self-refinement capabilities (Guo et al., 2024; Jiang et al., 2025) to generate preference data, motivated by the principle that incorporating more user-related information improves emotional support effectiveness.

Self-Improvement of LLMs. Recent research has explored two primary approaches to enhancing LLM output quality through self-improvement. Online self-improvement refines generated outputs through iterative self-evaluation without modifying model parameters (Madaan et al., 2023; Ye et al., 2023; Yasunaga et al., 2024). While effective, this approach incurs significant computational costs due to multi-turn inference and does not address underlying model limitations. In contrast, methods like self-training with reflection (Dou et al., 2024) and the Self-Evolution framework (Lu et al., 2024a) directly improve the model by updating its parameters based on self-generated feedback, offering a more comprehensive and potentially more efficient path to model enhancement. Our work adopts this latter approach. Through direct preference optimization, our model generates responses that reflect an understanding of the user's implicit preferences during interactions, eliminating the need for explicit reflection and refinement steps.

7 Conclusion
This paper addresses the limitations of LLMs in providing personalized emotional support. We propose a self-evolution framework that enables models to learn implicit user preferences without explicit reflection. First, we use SFT on ESC data to equip the LLM with basic emotional support skills. Second, we leverage the LLM's self-reflection and self-refinement capabilities to generate responses better aligned with the user's implicit preferences, using these pre- and post-refinement outputs as training data for iterative preference optimization. Evaluations demonstrate the superiority of our framework in generating more diverse and user-aligned responses. Our work advances the development of more human-centric ESC systems, moving beyond formulaic empathy.

Limitations
This work introduces a self-evolution framework for optimizing personalized emotional support. However, several limitations warrant discussion:
(1) Preference Data Quality Issues: Due to the subjective nature of ESC, obtaining objective reward signals is challenging. This work therefore leverages prior knowledge to guide LLMs in generating language feedback, rather than relying on a dynamically learned reward model for preference data. While this approach avoids the complexities of training such a model, it introduces potential biases and noise.
(2) Evaluation Issues: The evaluation of emotional support dialogues presents significant challenges. Established metrics, including utterance-level similarity and reference-based scoring, are inadequate for capturing the subjective dimensions of helpfulness, informativeness, empathy, and engagement. To address this, we employ both extensive human and LLM evaluations. 
However, manual evaluation is resource-intensive, while LLM-as-a-judge methods (Zeng et al., 2024; Chen et al., 2023) rely on APIs. Developing a reliable and generally accepted automated evaluation methodology remains a | https://arxiv.org/abs/2505.16610v1 |
crucial area for future research.

Ethical Considerations
Datasets such as ESConv (Liu et al., 2021), ExTES (Zheng et al., 2024b), ServeForEmo (Ye et al., 2024), and Alpaca (Taori et al., 2023), models such as LLaMA (Dubey et al., 2024), Qwen (Yang et al., 2024), and Mistral (Jiang et al., 2023), and toolkits like LLaMA-Factory (Zheng et al., 2024a) and lm-evaluation-harness (Gao et al., 2024) are widely used in academic research and are readily available via the Hugging Face Hub or GitHub. This work is for research purposes only.

We ensured the ethical conduct of our human evaluation. Fifty undergraduate students with diverse backgrounds and four graduate students participated voluntarily. Before participation, we communicated transparently with participants about the study's objectives and provided explicit details regarding disclaimers and the evaluation process. We are committed to protecting the confidentiality of all evaluation transcripts and will not share them without explicit participant consent. We recognize the potential for demographic and geographic biases to affect human evaluation outcomes. Given the substantial number of participants involved in the evaluation, calculating inter-rater reliability proved impractical. Consequently, we report the average human scores in the main body of the paper.

References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Joana Campos, James Kennedy, and Jill F. Lehman. 2018. Challenges in exploiting conversational memory in human-agent interaction. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 1649–1657.

Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: An empirical study. In Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings), pages 361–374, Nusa Dua, Bali. Association for Computational Linguistics.

Jiale Cheng, Sahand Sabour, Hao Sun, Zhuang Chen, and Minlie Huang. 2023. PAL: Persona-augmented emotional support conversation generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 535–554, Toronto, Canada. Association for Computational Linguistics.

Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30.

Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. UltraFeedback: Boosting language models with high-quality feedback. CoRR, abs/2310.01377.

Qingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024. Self-boosting large language models with synthetic preference data. CoRR, abs/2410.06961.

Zi-Yi Dou, Cheng-Fu Yang, Xueqing Wu, Kai-Wei Chang, and Nanyun Peng. 2024. Re-ReST: Reflection-reinforced self-training for language agents. 
In | https://arxiv.org/abs/2505.16610v1 |
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 15394–15411. Association for Computational Linguistics.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

William Fleeson. 2001. Toward a structure- and process-integrated view of personality: Traits as density distributions of states. Journal of Personality and Social Psychology, 80(6):1011.

Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac'h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2024. A framework for few-shot language model evaluation.

Sarik Ghazarian, Ralph M. Weischedel, Aram Galstyan, and Nanyun Peng. 2019. Predictive engagement: An efficient metric for automatic evaluation of open-domain dialogue systems. CoRR, abs/1911.01456.

Geyang Guo, Ranchi Zhao, Tianyi Tang, Xin Zhao, and Ji-Rong Wen. 2024. Beyond imitation: Leveraging fine-grained quality signals for alignment. In The Twelfth International Conference on Learning Representations.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. Preprint, arXiv:2106.09685.

Robert Irvine, Douglas Boubert, Vyas Raina, Adian Liusie, Ziyi Zhu, Vineet Mudupalli, Aliaksei Korshuk, Zongyi Liu, Fritz Cremer, Valentin Assassi, Christie-Carol Beauchamp, Xiaoding Lu, Thomas Rialan, and William Beauchamp. 2023. Rewarding chatbots for real-world engagement with millions of users. CoRR, abs/2303.06135.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Yuxin Jiang, Bo Huang, Yufei Wang, Xingshan Zeng, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, and Wei Wang. 2025. Bridging and modeling correlations in pairwise data for direct preference optimization. In The Thirteenth International Conference on Learning Representations.

Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Ren Lu, Thomas Mesnard, Johan Ferret, Colton Bishop, Ethan Hall, Victor Carbune, and Abhinav Rastogi. 2024. RLAIF: Scaling reinforcement learning from human feedback with AI feedback.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics.

Junlin Li, Bo Peng, Yu-Yin Hsu, and Chu-Ren Huang. 2024. Be helpful but don't talk too much: Enhancing helpfulness in conversations through relevance in multi-turn emotional support. 
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1976–1988, Miami, Florida, USA. Association for Computational | https://arxiv.org/abs/2505.16610v1 |
Linguistics.

Rensis Likert. 1932. A technique for the measurement of attitudes.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469–3483, Online. Association for Computational Linguistics.

Wenhao Liu, Xiaohua Wang, Muling Wu, Tianlong Li, Changze Lv, Zixuan Ling, Zhu JianHao, Cenyuan Zhang, Xiaoqing Zheng, and Xuanjing Huang. 2024. Aligning large language models with human preferences through representation engineering. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10619–10638, Bangkok, Thailand. Association for Computational Linguistics.

Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101.

Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Fei Mi, Baojun Wang, Weichao Wang, Lifeng Shang, and Qun Liu. 2024a. SELF: Language-driven self-evolution for large language model.

Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Qi Zhu, Fei Mi, Baojun Wang, Weichao Wang, Xingshan Zeng, Lifeng Shang, Xin Jiang, and Qun Liu. 2024b. SELF: Self-evolution with language feedback. Preprint, arXiv:2310.00533.

Yukun Ma, Khanh Linh Nguyen, Frank Z. Xing, and Erik Cambria. 2020. A survey on empathetic dialogue systems. Information Fusion, 64:50–70.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-Refine: Iterative refinement with self-feedback. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun, and Yunpeng Li. 2022. Control globally, understand locally: A global-to-local hierarchical graph network for emotional support conversation. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4324–4330. ijcai.org.

Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, and Zhenzhong Lan. 2023. SMILE: Single-turn to multi-turn inclusive language expansion via ChatGPT for mental health support. CoRR, abs/2305.00450.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. 
Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct | https://arxiv.org/abs/2505.16610v1 |
preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Stephen A. Rains, Corey A. Pavlich, Bethany Lutovsky, Eric Tsetsi, and Anjali Ashtaputre. 2020. Support seeker expectations, support message quality, and supportive interaction processes and outcomes: The case of the comforting computer program revisited. Journal of Social and Personal Relationships, 37(2):647–666.

Carl R. Rogers. 2013. Client-centered therapy. Current Psychotherapy, pages 95–150.

Maya Tamir. 2015. Why do people regulate their emotions? A taxonomy of motives in emotion regulation. Personality and Social Psychology Review, 20.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Jiashuo Wang, Chunpu Xu, Chak Tou Leong, Wenjie Li, and Jing Li. 2024a. Muffin: Mitigating unhelpfulness in emotional support conversations with multifaceted AI feedback. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 567–585. Association for Computational Linguistics.

Jiashuo Wang, Chunpu Xu, Chak Tou Leong, Wenjie Li, and Jing Li. 2024b. Muffin: Mitigating unhelpfulness in emotional support conversations with multifaceted AI feedback. In Findings of the Association for Computational Linguistics: ACL 2024, pages 567–585, Bangkok, Thailand. Association for Computational Linguistics.

Yifan Wang, Yafei Liu, Chufan Shi, Haoling Li, Chen Chen, Haonan Lu, and Yujiu Yang. 2024c. InsCL: A data-efficient continual learning paradigm for fine-tuning large language models with instructions. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 663–677, Mexico City, Mexico. Association for Computational Linguistics.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, and others. 2024. Qwen2 technical report. CoRR, abs/2407.10671.

Michihiro Yasunaga, Leonid Shamis, Chunting Zhou, Andrew Cohen, Jason Weston, Luke Zettlemoyer, and Marjan Ghazvininejad. 2024. ALMA: Alignment with minimal annotation. Preprint, arXiv:2412.04305.

Jing Ye, Lu Xiang, Yaping Zhang, and Chengqing Zong. 2024. SweetieChat: A strategy-enhanced role-playing framework for diverse scenarios handling emotional support agent. Preprint, arXiv:2412.08389.

Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. 2023. SelFee: Iterative self-revising LLM empowered by self-feedback generation. Blog post.

Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net.

Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. 2024. 
Evaluating large language models at evaluating instruction following. In The Twelfth International Conference on Learning Representations.

Tenggan Zhang, Xinjie Zhang, Jinming Zhao, | https://arxiv.org/abs/2505.16610v1 |
Li Zhou, and Qin Jin. 2024. ESCoT: Towards interpretable emotional support dialogue systems. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 13395–13412. Association for Computational Linguistics.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.

Chujie Zheng, Sahand Sabour, Jiaxin Wen, Zheng Zhang, and Minlie Huang. 2023. AugESC: Dialogue augmentation with large language models for emotional support conversation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1552–1568, Toronto, Canada. Association for Computational Linguistics.

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024a. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.

Zhonghua Zheng, Lizi Liao, Yang Deng, Libo Qin, and Liqiang Nie. 2024b. Self-chats from large language models make small emotional support chatbot better. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 11325–11345. Association for Computational Linguistics.

Jinfeng Zhou, Zhuang Chen, Dazhen Wan, Bosi Wen, Yi Song, Jifan Yu, Yongkang Huang, Pei Ke, Guanqun Bi, Libiao Peng, JiaMing Yang, Xiyao Xiao, Sahand Sabour, Xiaohan Zhang, Wenjing Hou, Yijia Zhang, Yuxiao Dong, Hongning Wang, Jie Tang, and Minlie Huang. 2024. CharacterGLM: Customizing social characters with large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 1457–1476, Miami, Florida, USA. Association for Computational Linguistics.

Jinfeng Zhou, Zhuang Chen, Bo Wang, and Minlie Huang. 2023. Facilitating multi-turn emotional support conversation with positive emotion elicitation: A reinforcement learning approach. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1714–1729, Toronto, Canada. Association for Computational Linguistics.

Appendix Contents
A. Preference Data Quality
B. Additional Experiment Settings (B.1 Preference Data Pair; B.2 LLM Evaluation Settings)
C. Additional Experiments (C.1 Objective Evaluation; C.2 LLM Evaluation; 
A Preference Data Quality

Self-generated preference data, while scalable, is susceptible to inherent noise and biases. To mitigate these issues and ensure high-quality preference pairs, we implemented a rigorous data processing pipeline incorporating the following filtering and quality control measures (a sketch of the substitution rules follows this list):

• Data Preprocessing: We consolidate consecutive utterances from the same speaker and standardize dialogue roles by designating the initial speaker as the seeker and enforcing strict seeker-supporter turn alternation.
• Response Length Normalization: Uncontrolled response length expansion during iterative refinement can bias DPO training. To mitigate this, we implement dynamic length constraints. If a refined chosen response exceeds twice the length of its paired rejected response (or the corresponding "golden" response from the Syn-ESC data), we substitute it with the golden response. This prioritizes semantic preservation while controlling length bias.
• Parsing Error Mitigation: JSON output generation can introduce parsing errors. To address this, we regenerate the text up to three times. If parsing fails after these attempts, we substitute the output with the corresponding golden response, ensuring structured and accurate data.
• Removal of Greeting Turns: Greeting exchanges contribute minimally to providing personalized emotional support. Based on prior knowledge, we assume that the first turn and the last two turns in a dialogue typically involve greetings. Consequently, we filter out these exchanges to enhance the relevance and quality of the data.
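The length-normalization and parsing rules above can be made concrete with a short sketch. This is an illustrative reading of the pipeline, not the authors' code: the dict-based pair format (`chosen`/`rejected`), the `generate` callable, and the `refined_response` key are assumptions for the example.

```python
import json

def normalize_pair(pair: dict, golden: str) -> dict:
    """Length rule: if the refined chosen response grows past twice the
    length of the rejected response, fall back to the golden response."""
    if len(pair["chosen"]) > 2 * len(pair["rejected"]):
        pair["chosen"] = golden
    return pair

def parse_with_retries(generate, prompt: str, golden: str, max_tries: int = 3) -> str:
    """Parsing rule: regenerate up to three times on JSON errors, then
    substitute the golden response as a structured fallback."""
    for _ in range(max_tries):
        try:
            return json.loads(generate(prompt))["refined_response"]
        except (json.JSONDecodeError, KeyError):
            continue
    return golden
```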
B Additional Experiment Settings

B.1 Preference Data Pair
In Section 5.3, we define three preference data pairs. The specific configurations are detailed below:
• {HM, P}: Constructed using the ESConv dataset. The rejected responses are the direct output of our model, and the chosen responses are the human-written ground-truth responses from ESConv.
• {HL, P}: Constructed using the Syn-ESC dataset, where responses are generated by GPT-4. The rejected responses are the direct output of our model, and the chosen responses are the annotated responses from Syn-ESC.
The datasets are split into two parts, used for training iterations 1 and 2, respectively.

B.2 LLM Evaluation Settings
We use GPT-4o (Achiam et al., 2023) as the judge model, employing the prompt described in Appendix E.2. Aligning with human evaluation practices, the assessment uses a 5-point Likert scale, where higher scores indicate better performance. We evaluate response quality by sampling 100 contextual queries from the ESConv test set. The judge model's decoding hyperparameters are set to temperature 0.8, top-p 0.95, and top-k 50.

C Additional Experiments

C.1 Objective Evaluation
Table 6 presents the objective evaluation results of different models on the ESConv test set. In our framework, self-refinement is used to improve the quality of the chosen candidates. As shown by the progression from M0 to M1, self-reflection and refinement further enhance the results obtained through SFT. The shift from M1 to M2 reveals a significant increase in response diversity, demonstrating the model's ability to enrich its output by refining its initial answers. Therefore, leveraging self-reflection on user-relevant information and self-refinement to better align with users' implicit preferences is effective.

Model      | BLEU-2          | BLEU-3          | METEOR           | Distinct-2
LLaMA  M0  | 18.38 (—)       | 12.95 (—)       | 13.37 (—)        | 90.84 (—)
LLaMA  M1  | 20.22 (+9.99%)  | 13.72 (+5.96%)  | 15.48 (+15.78%)  | 90.97 (+0.14%)
LLaMA  M2  | 20.06 (-0.79%)  | 13.63 (-0.64%)  | 15.77 (+1.87%)   | 91.43 (+0.51%)
Qwen   M0  | 19.18 (—)       | 13.56 (—)       | 13.82 (—)        | 90.68 (—)
Qwen   M1  | 19.80 (+3.24%)  | 13.52 (-0.29%)  | 15.23 (+10.20%)  | 91.23 (+0.61%)
Qwen   M2  | 20.02 (+1.09%)  | 13.80 (+2.05%)  | 15.52 (+1.90%)   | 94.21 (+3.27%)
Mistral M0 | 19.44 (—)       | 13.81 (—)       | 14.08 (—)        | 91.20 (—)
Mistral M1 | 20.45 (+5.20%)  | 14.09 (+2.03%)  | 15.58 (+10.65%)  | 90.91 (-0.32%)
Mistral M2 | 20.25 (-0.98%)  | 13.99 (-0.71%)  | 15.18 (-2.57%)   | 92.65 (-1.91%)
Table 6: The results of the iterative process. Percentages in parentheses give the change relative to the previous iteration (positive = improvement, negative = decline). The METEOR change column was corrupted in extraction and is recomputed here from the reported METEOR values.

C.2 LLM Evaluation
To further validate the model's performance, we use LLM-as-a-judge as our evaluation method. The results, presented in Table 7, demonstrate significant improvements across most dimensions with each model iteration. While M2 exhibits a slight decrease in coherence compared to M1, this is attributed to increased diversity, as discussed in Section 4.1. The strong correlation between LLM evaluation results and human evaluations reinforces the reliability of our assessment.

Model | Coherence | Understanding | Empathy | Engagement | Informativeness | Helpfulness | Overall
M0    | 4.28      | 3.08          | 2.56    | 2.78       | 2.94            | 2.72        | 2.62
M1    | 4.84      | 3.42          | 3.32    | 3.48       | 3.34            | 3.16        | 3.08
M2    | 4.54      | 3.56          | 3.42    | 3.54       | 3.42            | 3.22        | 3.22
p     | 54.77%    | 75.71%        | 62.13%  | 66.83%     | 47.21%          | 60.78%      | 57.49%
Table 7: LLM-as-a-judge performance on the ESConv test set, evaluated on a 5-point scale. p is the Pearson correlation between the model's scores and human scores on the dataset. The backbone model is LLaMA-3-8B-Instruct.
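To make the judging setup from Appendix B.2 concrete, here is a minimal sketch assuming an OpenAI-style client. The helper name and JSON field names are illustrative; the top-k 50 setting from B.2 is omitted because the public chat-completions API does not expose a top-k parameter.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(conversation: str, response: str, prompt_template: str) -> dict:
    """Score one supporter response on the 5-point Likert scale with GPT-4o,
    using one of the Appendix E.2 prompt templates."""
    filled = prompt_template.format(conversation=conversation, response=response)
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": filled}],
        temperature=0.8,
        top_p=0.95,
    )
    # Expected judge output: {"Explanation": ..., "Score": ...}
    return json.loads(out.choices[0].message.content)
```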
C.3 Case Study
This section presents interaction results comparing different models. Figure 6 illustrates the frequent phrases generated by each model. Our analysis reveals that while the SFT model demonstrates strong empathetic tendencies, its responses often lack informational depth and exhibit repetitive patterns. These models tend to rely on predictable, formulaic phrases, such as "It sounds like..." and "I'm sorry to hear that...," resulting in empathetic but ultimately superficial interactions. In contrast, M1 and M2 demonstrate a greater capacity for nuanced understanding and a richer vocabulary. Further interaction examples are provided in Figures 7-10.

Figure 6: Comparison of the frequent phrases in responses across different models. Panels: (a) SFT-ESConv, (b) M0, (c) M1, (d) M2.

D Human Evaluation Criteria

The performance of ES agents is evaluated based on several key criteria that reflect the user's experience and satisfaction with both the quality of responses and the overall interaction.
• Coherence: Measures whether the response is logical, clear, and consistent with the ongoing conversation.
• Understanding: Assesses the model's ability to accurately interpret the user's emotions and needs, ensuring that responses are contextually appropriate.
• Empathy: Focuses on the model's ability to demonstrate care and sensitivity toward the user's emotions, reflecting its capacity for emotional engagement.
• Informativeness: Evaluates whether the response provides clear, useful information that helps address the user's concerns or alleviates their emotional distress.
• Helpfulness: Examines how effectively the model meets the user's needs by offering practical and actionable support.
• Engagement: Gauges the level of interaction, measuring whether the user feels encouraged to continue the conversation.
• Overall Quality: Provides a comprehensive evaluation of the model's performance, considering emotional support, informativeness, and the overall interaction quality.
These criteria ensure that the model delivers a balanced and effective emotional support experience, fostering user satisfaction and meaningful interaction.

[Figure 7 transcript; scenario label: Relationship Problem; bracketed notes are the error annotations from the figure. Model typos are preserved verbatim.]
Seeker: Hello, my girlfriend thinks I'm lazy, but she's misunderstood me.
Supporter: Hello there. I'm sorry to here that.
Seeker: What should i do to prove this?
Supporter: I understand that you're feeling upset. What would you like to talk about?
Seeker: I wanna suggestions. She saw i played games when i was working, but i finish my work on time still.
Supporter: I see. It sounds like she might be misunderstanding you.
Seeker: Yes.
Supporter: Do you feel that you need to explain this to her?
Seeker: Yes, but i failed.
Supporter: What sort of things have you been doing that she thinks are lazy? [Fail to consider user needs]
Supporter: It sounds like you've tried talking to her but she still thinks you're lazy.
Seeker: You are right. So what should I explain to her?
Supporter: Have you tried showing her evidence that you've completed your work on time?
Seeker: Yes, i have shown my evidence.
Supporter: Have you tried to explain to her why you were playing games?
Seeker: Yes, i want to explain that enough rest will help me do my work better.
Supporter: That's a good point. It sounds like you've tried to talk to her about this. [Lack of in-depth exploration; Superficial empathy; Invalid Suggestion]
Figure 7: Example case dialogue generated by M0-LLaMA (Relationship Problem).
[Figure 8 transcript; scenario label: Relationship Problem; bracketed notes are the positive annotations from the figure (Exploration; Reflect user's feeling; Kindly suggestion; Reflection of feelings). Turn order is reconstructed from the two-column figure layout.]
Seeker: Hello, my girlfriend thinks I'm lazy, but she's misunderstood me.
Supporter: Hi there! I'm here to listen. Can you tell me more about what's going on? [Exploration]
Seeker: She thought that i didnt do my work well because i was playing games when she was coming.
Supporter: It sounds like there's a misunderstanding between you and your girlfriend. She thought you weren't doing your work well because you were playing games when she arrived. Is that correct? [Reflect user's feeling]
Seeker: Sure. What should i explain to her?
Supporter: It's understandable that you want to clarify the situation with your girlfriend. Have you tried talking to her about your perspective? Maybe explaining how you were actually in the middle of a task or working on something important at that time? [Kindly suggestion]
Seeker: I tried. But she failed to understand me. I wanted to explain that rest is an important part during work.
Supporter: I can understand how frustrating it must be when your girlfriend doesn't seem to understand your perspective. Effective communication is key in any relationship. Maybe you can try approaching the conversation again, using "I" statements to express your feelings and avoid blame. For example, "I feel misunderstood when you assume I'm not working hard because I was playing games. I want to clarify that resting is an important part of my work process." [Reflection of feelings]
Seeker: I have done it, but i failed.
Supporter: Building trust and understanding takes time. It might be helpful to have a calm and open conversation again, and reassure your girlfriend that you're committed to working together and finding a solution that works for both of you.
Seeker: You are right. I will do it afterwards.
Supporter: That's great to hear! Remember, effective communication is a process, and it might take some time to find the right approach. But with patience and persistence, I'm confident that you can work through this challenge with your girlfriend. If you need any further support or guidance, don't hesitate to reach out.
Figure 8: Example case dialogue generated by M2-LLaMA (Relationship Problem).

[Figure 9 transcript; scenario label: Academic Pressure. Model typos are preserved verbatim.]
Seeker: I have encountered a big problem in my research, what should I do?
Supporter: Hello there. I'm sorry to here that.
Seeker: I'm so anxious
Supporter: Im sorry about that. What seems to be the issue?
Seeker: I have encountered a big problem in my research. I am anxious and can't sleep every day. I may delay graduation.
Supporter: That's no good, Have you tried taking anything for anxiety?
Seeker: No.
Supporter: What is causing this?
Seeker: My research has not been making good progress.
Supporter: Is it possible for me to assist you with your task? Do you talk to anyone else who might feel similar?
Figure 9: Example case dialogue generated by M0-LLaMA (Academic Problem).
[Figure 10 transcript; scenario label: Academic Pressure.]
Seeker: I have encountered big problems in my scientific research recently, and I am very anxious.
Supporter: Hello! I'm here to listen. Can you tell me more about the issues you're facing?
Seeker: I'm trying hard to experiment, but the results are not good.
Supporter: It sounds challenging when your experiments aren't yielding the desired outcome. It's understandable that it would make you feel frustrated and worried. Would you like to discuss specific challenges or difficulties you've been experiencing?
Seeker: I tried other researchers' methods based on their papers, but they didn't work on my model. My supervisor thinks I'm terrible.
Supporter: That must be really disheartening and discouraging having someone who should provide support criticizing your efforts. Their negative feedback can definitely add to your anxiety levels. But you know research is never easy.
Seeker: Research is too painful and lonely. I want to drop out.
Supporter: Please know that quitting isn't an option yet because each experience has its value lessons. Researching requires perseverance, resilience, and adaptability. Considering leaving now wouldn't allow you to gain as much from what you started with. Your journey through difficult times will enable future growth.
Seeker: You are right.
Supporter: It takes courage to share such personal struggles and express feelings of frustration. Setbacks and rejections come naturally in any field, especially research where new discoveries often require overcoming obstacles. You've made progress already by reflecting on how far you have gone despite the initial failures.
Seeker: Thank you! I will continue to work hard
Figure 10: Example case dialogue generated by M2-LLaMA (Academic Problem).

E Prompts

E.1 Instruction Prompts

Prompt for vanilla
You are an emotional support expert. You can use the following strategies to engage with users: [Question, Affirmation and Reassurance, Reflection of Feelings, Information, Providing Suggestions, Restatement or Paraphrasing, Self-disclosure, Others]

Prompt for w/ strategy
You are an emotional support expert. You can use the following strategies to engage with users:
1. Question: Asking for information related to the problem to help the seeker articulate the issues that they face.
2. Affirmation and Reassurance: Offering reassurance and affirming the help-seeker's feelings or experiences.
3. Reflection of Feelings: Articulating and describing the seeker's feelings.
4. Information: Providing useful information, such as data, facts, opinions, or resources, or answering questions.
5. Providing Suggestions: Offering suggestions on how to approach the issue, without overstepping or telling them what to do.
6. Restatement or Paraphrasing: Rephrasing the help-seeker's statements more concisely to help them see the situation clearly.
7. Self-disclosure: Sharing similar experiences or emotions to express empathy with the help-seeker.
8. Others: Exchanging pleasantries or offering other emotional support.

Prompt for w/ self-reflection
You are an emotional support expert. You can use the following strategies to engage with users:
1. Question: Asking for information related to the problem to help the seeker articulate the issues that they face.
2. Affirmation and Reassurance: Offering reassurance and affirming the help-seeker's feelings or experiences.
3. Reflection of Feelings: Articulating and describing the seeker's feelings.
4. Information: Providing useful information, such as data, facts, opinions, or resources, or answering questions.
5. Providing Suggestions: Offering suggestions on how to approach the issue, without overstepping or telling them what to do.
6. Restatement or Paraphrasing: Rephrasing the help-seeker's statements more concisely to help them see the situation clearly.
7. Self-disclosure: Sharing similar experiences or emotions to express empathy with the help-seeker.
8. Others: Exchanging pleasantries or offering other emotional support.
Before responding to the user, please follow these steps:
1. Understand the User: Understand the user's profile, characteristics, emotional needs, and potential preferences they reveal in the conversation.
2. Select a Strategy: Choose a response strategy based on the user's emotional needs and preferences.
3. Respond: Respond to the user with an appropriate message based on the selected strategy.
Your answer should be formatted as a JSON block:
{
  'strategy': <one of the strategies>,
  'text': <your response>
}
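A response produced under the self-reflection prompt has to be parsed back into a (strategy, text) pair before it can be used. The following is a minimal, hypothetical parsing sketch: the prompt shows single-quoted keys, but models frequently emit proper double-quoted JSON inside markdown fences, so the helper extracts the first brace-delimited object before parsing.

```python
import json
import re

VALID_STRATEGIES = {
    "Question", "Affirmation and Reassurance", "Reflection of Feelings",
    "Information", "Providing Suggestions", "Restatement or Paraphrasing",
    "Self-disclosure", "Others",
}

def parse_reply(raw: str) -> tuple[str, str]:
    """Extract (strategy, text) from the model's JSON block, tolerating
    surrounding markdown fences or prose."""
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    obj = json.loads(match.group(0))
    strategy, text = obj["strategy"], obj["text"]
    if strategy not in VALID_STRATEGIES:
        raise ValueError(f"unexpected strategy: {strategy}")
    return strategy, text
```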
Prompt for generating chosen response
You are an emotional support expert. You can use the following strategies to engage with users:
1. Question: Asking for information related to the problem to help the seeker articulate the issues that they face.
2. Affirmation and Reassurance: Offering reassurance and affirming the help-seeker's feelings or experiences.
3. Reflection of Feelings: Articulating and describing the seeker's feelings.
4. Information: Providing useful information, such as data, facts, opinions, or resources, or answering questions.
5. Providing Suggestions: Offering suggestions on how to approach the issue, without overstepping or telling them what to do.
6. Restatement or Paraphrasing: Rephrasing the help-seeker's statements more concisely to help them see the situation clearly.
7. Self-disclosure: Sharing similar experiences or emotions to express empathy with the help-seeker.
8. Others: Exchanging pleasantries or offering other emotional support.
Your task is to evaluate the target system's response and refine it. For each target system response:
1. Understand the User: Understand the user's profile, characteristics, emotional needs, and potential preferences they reveal in the conversation.
2. Evaluate the Response: Rate the target system response on a scale of 1-5 based on how well it meets the user's needs, aligns with their preferences, and provides appropriate emotional support.
3. Provide Feedback: Identify specific weaknesses in the original response, such as tone, empathy level, or relevance, and explain how it could be improved to better support the user.
4. Refine the Response: Provide a revised version of the target system's response that better aligns with the user's emotional needs and preferences.
Your answer should be formatted as a JSON block inside markdown:
{
  'understanding': {
    'user_profile': <the profile of the user>,
    'user_emotion': <the emotion of the user>,
    'user_personality': <the personality of the user, like MBTI, etc.>,
    'user_intention': <the intention of the user>
  },
  'evaluation_score': <the score of the target_sys's response>,
  'feedback': <the weaknesses of the original response and suggested improvement>,
  'refined_response': <your refined response (short!!!)>
}
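The refine prompt above yields the two sides of a DPO pair: the model's direct response (rejected) and its self-refined response (chosen). A minimal, hypothetical assembly sketch follows; the score threshold for skipping already-good responses and the record layout are illustrative assumptions, not the paper's exact procedure.

```python
def build_preference_pair(context: list[str], direct: str, refinement: dict) -> dict | None:
    """Pair pre- and post-refinement responses for DPO training.

    `refinement` is the parsed JSON from the refine prompt above
    (keys 'evaluation_score' and 'refined_response')."""
    if refinement["evaluation_score"] >= 4:  # assumed cutoff: response already adequate
        return None
    return {
        "prompt": "\n".join(context),               # dialogue history
        "rejected": direct,                         # pre-refinement response
        "chosen": refinement["refined_response"],   # post-refinement response
    }
```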
E.2 Prompts for LLM-as-a-Judge Evaluation

LLM-as-Judge Prompt: Coherence
[Task Description]
You are an expert evaluator responsible for assessing the coherence of emotional support conversations. Your task is to determine whether the responses are logical, clear, and consistent with the ongoing discussion. Apply strict penalties for critical errors and utilize a progressive scoring method to indicate basic competence before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Exemplary logical flow with clear and explicit contextual references.
4 = Mostly logical with minor deviations from context but overall sound coherence.
3 = Basic coherence; however, the response lacks sufficient adaptation to the conversation's context.
2 = Contains noticeable contradictions or mismatches with the context.
1 = Exhibits illogical progression with a complete detachment from the context.
[Critical Penalties]
- Contradicts previous statements (-2 score cap)
- Ignores critical contextual clues (-3 score cap)
- Contains ambiguous pronouns/statements (-2 score cap)
- Repeats already resolved topics (-2 score cap)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>

LLM-as-Judge Prompt: Empathy
[Task Description]
You are an expert evaluator assessing the empathy displayed in emotional support conversations. Determine whether the system shows understanding and care for the user's emotions and responds appropriately to their feelings. Apply strict penalties for critical errors and use a progressive scoring method, ensuring that basic competence is demonstrated before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Establishes a deep emotional connection and provides a safe space with personalized care and expressions
4 = Shows genuine concern with contextualized empathy
3 = Uses boilerplate empathy statements
2 = Mechanically parrots empathy phrases
1 = Displays emotional dismissal or invalidation
[Critical Penalties]
- Uses empathy as filler without substance (-1 cap)
- Overuses clichéd phrases ("I'm sorry you feel that way") (-2 cap)
- Contains paradoxical reassurance attempts (-3 cap)
- Overly long responses (max 2)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>

LLM-as-Judge Prompt: Engagement
[Task Description]
You are an expert evaluator assessing the engagement of an emotional supporter's response. Does the response maintain a conversational flow and encourage the user to continue the conversation? Apply strict penalties for critical errors and use a progressive scoring method, ensuring that basic competence is demonstrated before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Natural turn-taking + deep engagement design
4 = Good interaction balance
3 = Basically maintains the conversation but lacks guidance
2 = Displays formulaic turn-taking behaviors
1 = Creates conversational dead-ends
[Critical Penalties]
- Overuses closed-ended questions (-2 cap)
- Fails to acknowledge user's last statement (-2 cap)
- Consecutive questioning more than 3 times (max 2)
- No feedback at key points (max 3)
- Inappropriate topic transition (max 2)
- Overly long responses (max 2)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>

LLM-as-Judge Prompt: Helpfulness
[Task Description]
You are an expert evaluator tasked with assessing the effectiveness of an emotional supporter's response. Does the response adequately address the user's needs and offer practical help or emotional support? Apply strict penalties for critical errors and utilize a progressive scoring method, ensuring that basic competence is demonstrated before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Provides support addressing root causes
4 = Offers concrete solutions with emotional validation
3 = Gives superficial suggestions lacking depth
2 = Proposes ineffective/impractical solutions
1 = Exacerbates the problem situation
[Critical Penalties]
- Suggests unethical interventions (-1 cap)
- Overpromises results (-2 cap)
- Fails to address stated priorities (-3 cap)
- Creates false hope (max 1)
- Overly long responses (max 2)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>
LLM-as-Judge Prompt: Informativeness
[Task Description]
You are an expert evaluator responsible for assessing the informativeness of emotional support conversations. Does the supporter's response offer clear, useful information that helps address the user's problem or alleviate their emotions? Apply strict penalties for critical errors and utilize a progressive scoring method, ensuring that basic competence is demonstrated before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Offers personalized strategies with emotional scaffolding
4 = Provides relevant resources with emotional validation
3 = Gives generic advice lacking personalization
2 = Shares marginally related information
1 = Provides invalid/harmful/dangerous suggestions
[Critical Penalties]
- Recommends unverified methods (-2 cap)
- Overloads with technical jargon (-3 cap)
- Suggests inappropriate coping mechanisms (-1 cap)
- Transgresses professional boundaries (max 2)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>

LLM-as-Judge Prompt: Understanding
[Task Description]
You are an expert evaluator responsible for assessing the understanding of emotional support conversations. Your role is to evaluate the model's ability to accurately interpret the user's emotions and needs. Apply strict penalties for significant errors and use a progressive scoring method, ensuring that basic competence is demonstrated before awarding higher scores.
[Rating Criteria] Use 1-5 scale with precise criteria:
5 = Captures user's implicit emotions, states, causes, and needs with depth and nuance
4 = Accurately identifies surface emotions and states
3 = Recognizes basic emotions but lacks depth
2 = Misinterprets user's emotions or needs
1 = Fails to recognize user's emotions or needs
[Critical Penalties]
- Confuses emotional valence (positive/negative) (-2 cap)
- Fails to recognize stated needs (-3 cap)
- Projects inappropriate assumptions (-2 cap)
- Cannot recognize emotion causes (-2 cap)
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>
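Each judge prompt returns a one-sentence explanation and a score, and several criteria impose hard score caps (e.g., MAX=1 for safety violations). A minimal, hypothetical parsing helper that enforces both the 0-5 range and an optional cap might look as follows; the field names mirror the output format above.

```python
import json

def parse_judgement(raw: str, cap: int | None = None) -> tuple[str, int]:
    """Read the judge's {"Explanation": ..., "Score": ...} JSON and
    optionally enforce a critical-penalty score cap."""
    obj = json.loads(raw)
    score = int(obj["Score"])
    score = max(0, min(5, score))   # clamp to the stated 0-5 scale
    if cap is not None:
        score = min(score, cap)     # e.g. cap=1 for a safety violation
    return obj["Explanation"], score
```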
LLM-as-Judge Prompt: Overall
[Task Description]
Act as an expert evaluator of emotional support conversations. Analyze supporter responses through three core aspects: 1) Strategy application alignment with the conversation stage; 2) Emotional perceptiveness; 3) Response quality control. Apply strict penalties for critical errors.
[Evaluation Guideline]
A. Strategy Implementation
Evaluate the proper use of 8 emotional support strategies: [Question, Affirmation and Reassurance, Reflection of Feelings, Information, Providing Suggestions, Restatement or Paraphrasing, Self-disclosure, Others]
B. Emotional Competence:
- Detection accuracy: Explicit vs implicit emotional cues
- Validation depth: Surface-level vs transformative acknowledgment
- Connection building: Empathic attunement vs transactional responses
C. Response Quality:
- Focus: Single-issue addressing vs topic drifting
- Safety: Harm prevention mechanisms
[Rating Criteria]
5 (Expert): Accurately identifies user emotions and needs, expertly applies appropriate support strategies. Demonstrates nuanced understanding and provides transformative empathy, maintaining strong connection and focus. Ensures user safety without any violations.
4 (Advanced): Effectively identifies most emotional cues and applies suitable support strategies. Offers deep empathy and strong connection in most responses. Stays on topic and provides thoughtful safety considerations.
3 (Competent): Generally identifies user emotions and employs basic support strategies. Provides some degree of empathy and stays mostly focused. Observes safety but lacks depth in validation.
2 (Developing): Misses some emotional cues and occasionally misapplies support strategies. Offers limited empathy and connection, with occasional topic drift. Safety is observed but lacks consistency.
1 (Unsafe): Fails to recognize user emotions and consistently misapplies or lacks support strategies. Provides minimal or no empathy, frequently drifts off topic, and potentially compromises user safety.
[Critical Penalties] Impose MAX score caps when:
! Safety Violation (MAX=1):
- Harmful behavior
- Provides unqualified medical/legal advice
- Hallucinated/factual errors
! Emotional Neglect (MAX=2):
- Misses users' emotional cues
- Uses invalidating language ("Don't feel that way")
! Unfriendly Reading (MAX=2):
- Excessive repetition without substance
- Use of generic platitudes
- Responses over two sentences or 40 words
- Listing suggestions
[Output Format]
The evaluation result includes a detailed explanation and score. The output format should be in JSON.
Explanation: <one-sentence explanation>
Score: <a scale from 0 to 5>
[Input]
Evaluate this conversation:
<Dialogue History> {conversation} </Dialogue History>
<Current Response> Supporter: {response} </Current Response>

F Interactive Evaluation Interface

Figure 11: The interface of the interactive point-wise human evaluation.
Figure 12: The interface of the interactive pair-wise human evaluation.
Figure 13: The detailed guidelines for human evaluation.
Steering Large Language Models for Machine Translation Personalization

Daniel Scalena1,2* Gabriele Sarti1* Arianna Bisazza1 Elisabetta Fersini2 Malvina Nissim1
1 CLCG, University of Groningen  2 University of Milano-Bicocca
d.scalena@campus.unimib.it  g.sarti@rug.nl

Abstract
High-quality machine translation systems based on large language models (LLMs) have simplified the production of personalized translations reflecting specific stylistic constraints. However, these systems still struggle in settings where stylistic requirements are less explicit and might be harder to convey via prompting. We explore various strategies for personalizing LLM-generated translations in low-resource settings, focusing on the challenging literary translation domain. We explore prompting strategies and inference-time interventions for steering model generations towards a personalized style, and propose a contrastive framework exploiting latent concepts extracted from sparse autoencoders to identify salient personalization properties. Our results show that steering achieves strong personalization while preserving translation quality. We further examine the impact of steering on LLM representations, finding that model layers relevant for personalization are impacted similarly by multi-shot prompting and our steering method, suggesting similar mechanisms at play.1

* Equal contribution.
1 Code available at DanielSc4/steering-for-personalization.

1 Introduction
When we read a translated book, we do not simply read the story in a new language; we also experience the translator's personal voice by means of their stylistic choices. Past efforts in the automatic translation of literary works have historically been constrained by the limited capabilities and flexibility of machine translation (MT) systems. The recent popularization of MT systems based on large language models (LLMs) has greatly improved their capacity for handling the long contexts typical of literary translations, but mimicking the creative and rich language that characterizes the translators' own style remains an open issue.

[Figure 1 overview: novels in 7 languages (e.g., Pinocchio, Around the World in 80 Days, Crime and Punishment) are translated using prompting strategies (zero-shot ZS, multi-shot MS, explanation Exp) and steering methods (SAE steering, ActAdd, ReFT); personalized MT outputs are evaluated for human-like translation (HT) and personalized translation (PT) style against the default MT.]
Figure 1: We compare prompt-based approaches with steering techniques intervening on model internals for personalizing MT outputs in literary machine translation, employing MT quality metrics and style classifiers to disentangle the effect of steering on output fluency and personalization adequacy.

In this context, several works explored the usage of prompting and tuning-based strategies to ensure translations are stylistically appropriate (Michel and Neubig, 2018; Wang et al., 2021). However, their influence on model internal representations is rarely explored, so their impact is less controllable and often unpredictable. In this work, we adopt prompting methods as baselines and compare their effectiveness to steering methods proposed in the interpretability literature. These techniques can be used to surgically intervene on LLMs' intermediate representations to generate personalized translations when few examples are available.
For this purpose, we employ the PAR3 dataset (Thai et al., 2022), which includes multiple human translations for novels translated to English from 7 typologically diverse languages. We begin with preliminary assessments by verifying whether translators' styles are discernible by automatic systems, finding that
trained classifiers can distinguish writing styles with high accuracy, while the task is notoriously challenging for human annotators (Youyou et al., 2015; Flekova et al., 2016). We also find a simple prompting setting with in-context personalization examples to improve the style accuracy of LLM translation, suggesting personalized translation styles are reproducible. We connect the conditioning induced by prompting to the inner workings of the model, identifying activations with high discriminative capacity for style differences in intermediate model layers. We then propose a contrastive steering approach based on sparse autoencoders (SAEs, Cunningham et al., 2023) to condition model generations by upweighting sparse, interpretable latents at inference time. We validate the effectiveness of our method across three LLMs of various sizes on PAR3 novels, comparing our results with established prompting and steering methods.

Our results show that contrastive SAE steering is a promising approach for MT personalization, leading to translations that are not only more in line with general human translation features but also more aligned with the desired personalized style compared to other methods. Importantly, these results are achieved with no translation quality degradation according to established MT quality metrics. We conclude by comparing the impact of our method on model representations with the outcome of multi-shot prompting, finding that probes trained on prompt-conditioned activations can predict the effectiveness of SAE steering with high precision. These results confirm that the tested prompting and steering techniques converge to similar solutions for conditioning model behavior, enabling future investigations into the mechanistic impact of prompting through the study of learned SAE latents and other interpretable components.

2 Related Work

Machine Translation of Literary Texts The literary domain has historically been challenging for automatic MT systems due to their limited ability in handling rich linguistic and cultural contexts (Matusov, 2019) and their propensity to produce overly literal outputs (Guerberof-Arenas and Toral, 2022). Automatic literary translation has a long history dating back to pre-neural MT approaches (Voigt and Jurafsky, 2012; Toral and Way, 2015, 2018; Moorkens et al., 2018), with two recent dedicated evaluation campaigns (Wang et al., 2023b, 2024a). The advent of LLMs brought new opportunities in the processing of longer context for document-level translation (Wang et al., 2023a; Briakou et al., 2024; Wu et al., 2025a), but critical errors requiring a human translator's intervention nonetheless persist (Karpinska and Iyyer, 2023). In this work, we use the PAR3 dataset (Thai et al., 2022) containing multiple human translations of novels to evaluate MT personalization in the literary domain.

Personalization for Machine Translation Advances in MT quality recently led to a growing interest in personalization approaches to ensure a consistent format and appropriate stylistic choices in model generations (Rabinovich et al., 2017; Lin et al., 2021). Previous approaches for controlling attributes such as formality (Sennrich et al., 2016; Niu et al., 2017; Nadejde et al., 2022) or gender (Vanmassenhove et al., 2018; Saunders and Byrne, 2020) typically required tuning existing models on pre-defined properties of interest, with few works attempting a real
data-driven adaptation from unlabeled demonstrations (Michel and Neubig, 2018; Wang et al., 2021; Zhang et al., 2022). More recently, several studies employed prompting (Garcia and Firat, 2022; Sarti et al., 2023) or preference optimization from post-editing behavior (Lee et al., 2023; Berger et al., 2024) to render MT personalization more effective and data-efficient. In this work, we complement prompting results with steering approaches to personalize MT outputs using few user-provided examples.

Steering Language Model Generations Steering approaches exploit the linear structure of LM activations (Mikolov et al., 2013; Chanin et al., 2024) to craft inference-time interventions for influencing model generations. These methods commonly employ contrastive sets of in-context demonstrations (Rimsky et al., 2024; Scalena et al., 2024) to map input properties to components such as vectors (Turner et al., 2024; Li et al., 2023), linear probes (Zou et al., 2025), or learned projections (Wu et al., 2024, 2025b). Sparse Autoencoders (SAEs) are another family of promising approaches for enabling fine-grained interventions in language models (Yun et al., 2021; Cunningham et al., 2023; Templeton et al., 2024). They are trained to decompose activations into approximately monosemantic features, offering a potentially interpretable basis for modifying model behavior. While interpreting their learned latents remains non-trivial (Marks et al., 2025), SAEs have proven effective for applying targeted interventions along specific linear directions (Chalnev et al., 2024; Zhao et al., 2025; Ferrando et al., 2025). However, most research on SAEs has so far focused on synthetic tasks or standard benchmarks, leaving their potential in real-world settings relatively underexplored.

3 Preliminaries

Before testing the effectiveness of personalization strategies, we validate some key assumptions: i) Whether the personalized translation style is discernible, i.e., if it is possible to tell apart human- and machine-generated translations; ii) Whether different translation styles are automatically reproducible, i.e., if LLMs can mimic a specific translator's style when
provided with some examples; and iii) Whether style distinctions are reflected in the model's internal representations, to motivate the interest in steering approaches for personalization.

We use the PAR3 dataset by Thai et al. (2022), which contains multiple non-English novels, as a benchmark to evaluate personalization. Novels are segmented into paragraphs with translations into English by two professional literary translators. We select novels across seven languages: German, Russian, Chinese, Italian, Dutch, French, and Japanese.2 Examples for a subset of languages are shown in Table 1. We name the two available human translations H1 and H2, and compare them with MT outputs produced by LLMs, which we denote as MT model. We use three LLMs, namely Llama 3.1 8B Instruct (Team, 2024b) and Gemma 2 (Team, 2024a) in its 2B and 9B instruction-tuned variants. Our model selection is motivated by our steering requirements, discussed in Section 5.
2 The full list of novels is available in Appendix A.

ZH→EN source: 道人道：「既如此，便你去。」
  H1: "Such being the case," the Taoist acquiesced, "I am ready to follow you, whenever you please to go."
  H2: "Very good, I will go with you then," said the Taoist.
  Gemma 2 2B: "If that's the case, then go ahead" said the Taoist.
  Gemma 2 9B: "If so, then I will go with you" the Taoist said
  Llama 3.1 8B: The Taoist said: "If you insist on going, then go ahead."
IT→EN source: Sarà l'effetto dell'acqua del mare. Il mare ne fa di questi scherzi
  H1: "The salt water must have done it. The sea plays funny tricks."
  H2: "It must have been the effect of sea-water. The sea makes extraordinary changes."
  Gemma 2 2B: "It will be the effect of the sea water. The sea makes of these jokes."
  Gemma 2 9B: It will be the effect of the sea water. The sea plays these tricks.
  Llama 3.1 8B: It will be the effect of the sea water. The sea does things like this.
Table 1: ZH→EN and IT→EN examples for PAR3 segments translated by humans (H1, H2) and LLMs with zero-shot prompting. More examples in Appendix D.

3.1 Are Personalized Translations Discernible?
Following prior work on personalization (Wang et al., 2024c; Liu et al., 2023), we train a series of classifiers based on multilingual XLM Transformer encoders (Conneau et al., 2020) to distinguish between H1, H2, and MT translations. If those systems can reliably separate these three classes, it suggests the presence of reasonably distinct stylistic signals differentiating them. In particular, the ability to distinguish between H1 and H2 would denote not only the possibility to discern a human-like style from human-made and automatic translations, but also a personalized style from different human translators.

We train a classifier for each language and each model in our evaluation suite.3 Results (reported in full in Table 6, Appendix C.1) indicate that translation styles are discernible with high accuracy. On average across all models and languages, the classifiers reach an accuracy between 77% (Japanese) and 99% (Chinese), with an average of 86%. These results suggest that personalization information is abundant in the literary setting and can plausibly be exploited for modeling. These findings corroborate previous results showing the high learnability of this task by machines while it remains intrinsically difficult for human annotators (Youyou et al., 2015; Flekova et al., 2016; Wang et al., 2024b).4
3 Classifiers' training details are provided in Appendix B.2.
4 Two human annotators asked to label 100 translated paragraphs from the novel Pinocchio (IT→EN) as either human or MT obtained an accuracy of ~60%.
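For concreteness, a minimal sketch of a three-way style classifier of the kind described above follows, using the Hugging Face transformers API. The choice of the xlm-roberta-base checkpoint and the absence of any training loop are simplifications; the paper's actual training configuration is in its Appendix B.2.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Three-way style classifier: label 0 = MT, 1 = H1, 2 = H2 (assumed mapping).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3
)

def style_probs(paragraph: str) -> torch.Tensor:
    """Return the classifier distribution p(MT), p(H1), p(H2) for one
    translated paragraph (after fine-tuning on labeled PAR3 segments)."""
    inputs = tokenizer(paragraph, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1).squeeze(0)
```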
3.2 Can LLMs Reproduce Human Translation Styles?
To confirm whether MT personalization is achievable, we test LLMs' ability to mimic the stylistic choices of a particular translator in a multi-shot (MS) prompting setup. For each translator available across tested novels, we provide the model with 20 in-context examples selected from the original pool of translated paragraphs by that translator, asking it to generate a consistent translation. We compare MS results with default zero-shot (ZS) prompting without any example from the translator to quantify the effect of in-context examples. Table 2 presents results for personalization accuracy, automatically evaluated using our high-scoring classifiers from the previous section, and translation quality, estimated via the widely used Comet MT metric (Rei et al., 2020).

     Gemma 2 2B   Gemma 2 9B   Llama 3.1 8B
     P    Comet   P    Comet   P    Comet
ZS   0.10 0.69    0.08 0.71    0.08 0.70
MS   0.24 0.69    0.31 0.73    0.32 0.73
Table 2: Classifier-based personalization accuracy (P) and Comet-based translation quality for zero-shot (ZS) and multi-shot (MS) prompting with 20 in-context examples, averaged across all translators and languages.

The proportion of outputs categorized as matching the translator's style is increased two- to four-fold following MS prompting, suggesting that LLMs can employ implicit clues in small sets of user examples to produce personalized translations. Stable scores for Comet also confirm that translation quality is maintained during style adaptations.

3.3 Finding Personalization Information in LLM Representations
In light of these results, we set out to test where the model encodes information reflecting a stylistic shift when style-appropriate examples are provided. To this purpose, we train linear probes (Belinkov, 2022) using model activations as input features to predict the style label (MT, H1, or H2) that the style classifier (from Section 3.1) would assign to the eventual translation, based purely on the prompt's internal representation. Probes are evaluated on their accuracy in predicting the classifier outcome before generation, using only the prompt representation formed by the model. Given a test set of human-translated paragraphs, we train our probes on a set of examples using an MS prompt with 20 in-context examples. The set is balanced between prompts showcasing personalization with gold in-context examples from a human translator, and non-personalized prompts with MT-generated examples previously produced by the same tested model in a ZS setup. Test examples are selected from the respective novels to ensure that the classifier prediction shifts from MT in the ZS setting to the style of in-context examples when MS is used, signaling a causal influence of demonstrations on output personalization.5 This balanced setup prevents the leaking of task information, e.g., the number of in-context examples, to learned probes, ensuring that stylistic differences among human and MT-generated in-context examples are the sole factor determining differences in model activations. We focus specifically on Gemma models, extracting activations after the attention block at each model layer for the last token of the prompt, which was previously shown to encode key task-relevant information (Hendel et al., 2023; Todd et al., 2024; Scalena et al., 2024).

Figure 2 reports probe accuracies across all Gemma 2 2B layers, with results for the 9B model reported in Appendix C. We find a peak in probe accuracy of ~95% around intermediate model layers, suggesting that these layers encode stylistic information with near-perfect precision.6 These results confirm that personalization is discernible from LLMs' internal representations, motivating our experiments towards the design of inference-time interventions to steer models towards personalized MT outputs.
Figure 2: Probing classifier performance on the human translation detection task across Gemma 2 2B layers. Activations in intermediate layers are found to capture translation style information with high precision, with layer 13 performing best across all tested languages.
5 Examples are resampled for every test paragraph to prevent the probe from overfitting on spurious prompt features.
6 We find probes for layers 13 and 21 to perform best for the 2B and 9B models, respectively.
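A probe of the kind used here is just a linear classifier over per-layer activations. The following sketch, using scikit-learn, is an assumed minimal setup (the paper does not specify its probe implementation); X holds last-prompt-token activations from one layer and y the style label the output classifier assigns to the eventual translation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def train_probe(X_train: np.ndarray, y_train: np.ndarray,
                X_test: np.ndarray, y_test: np.ndarray) -> float:
    """Fit a linear probe on one layer's activations (n_prompts x d_model)
    and return its held-out accuracy at predicting the style label."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_train, y_train)
    return accuracy_score(y_test, probe.predict(X_test))
```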
4 Methods

We begin by introducing the prompting and steering methods we use as baselines, and then outline our proposed SAE-based steering approach for personalized translation.

4.1 Prompting Baselines
Zero-Shot (ZS). The ZS setup used in our main experiments corresponds to the one from Section 3.1, in which the model is simply asked to produce a translation with no conditioning from examples or explanations towards the target translation style. We use this setting to establish baseline style and translation quality performance for the models.

Zero-Shot Explain (ZS-Exp HT and ZS-Exp PT). Building upon the ZS setting, we experiment with a prompting strategy where LLMs are provided with detailed explanations (Exp) of the most salient elements that characterize the desired translation style. We obtain such descriptions by prompting a capable proprietary model, GPT-4o (OpenAI, 2024), with 20 translations matching the desired style, asking it to synthesize a set of guidelines for matching the examples. We evaluate two contrastive variants of this approach, providing GPT-4o with either MT examples (ZS-Exp HT) or alternative human translations (ZS-Exp PT) alongside examples matching the desired style, and asking it to describe what characterizes the latter compared to the former. To avoid data leakage, all generated explanations are manually reviewed to ensure they do not contain any verbatim content or direct excerpts from the input examples.7 Tested models are then prompted with GPT-4o explanations in a ZS setting, to verify whether interpretable directives synthesized from a set of examples matching the desired behavior can produce reliable personalization results.
7 Details on the prompt templates are in Appendix B.5.

Multi-Shot (MS). Following Section 3.2's findings, we adopt the same MS setup using 20 in-context translation examples matching the style of a target human translator (H1 or H2).

4.2 Steering Baselines
Activation Addition (ActAdd). ActAdd is a simple yet effective technique for steering language models. We employ the standard contrastive approach by Rimsky et al. (2024); Scalena et al. (2024) to extract two sets of style-relevant ({z}+) and default ({z}−) activations from a given model layer using 20 in-context examples demonstrating the default behavior (MT) and the desired behavior (H1 or H2 translations), respectively. We then compute the average Δ steering vector between the two sets of activations, scale it by a factor of α = 2, which was found effective by previous research (Scalena et al., 2024), and apply it additively to the same model layer during inference.
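The ActAdd vector computation reduces to a scaled mean difference. A minimal sketch under the description above follows; the hook shown in the comment is an assumption about how one might apply it, and real model layers may return tuples rather than bare tensors.

```python
import torch

def actadd_vector(z_pos: torch.Tensor, z_neg: torch.Tensor,
                  alpha: float = 2.0) -> torch.Tensor:
    """Average difference between style-relevant and default activations,
    scaled by alpha (alpha = 2 per the text above).

    z_pos, z_neg: (n_examples, d_model) activations from one layer."""
    return alpha * (z_pos.mean(dim=0) - z_neg.mean(dim=0))

# At inference the vector is added to that layer's output, e.g. via a
# forward hook (sketch): layer.register_forward_hook(
#     lambda module, inputs, out: out + delta)
```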
Representation Fine-tuning (ReFT). As an alternative to traditional weight-based Parameter-Efficient Fine-Tuning (PEFT) techniques, ReFT learns task-specific interventions applied directly to model activations at inference time (Wu et al., 2024). We apply ReFT to the same personalization-relevant layers identified in Section 3.3 and limit confounding factors by tuning ReFT interventions with the set of 20 examples used for MS prompting.

4.3 Contrastive SAE Steering
Given a set of LLM activations $z_l \in \mathbb{R}^d$ sourced from the output of layer $l$, where $d$ is the model hidden size, a sparse autoencoder learns a projection to an overcomplete latent space $\mathbb{R}^m$, with $m \gg d$, from which a reconstructed version $z^*_l$ of the original activations is then produced with minimal information loss:

$$z^*_l = \mathrm{SAE}(z_l) = h(z_l)\, W^l_{\mathrm{dec}} + b_{\mathrm{dec}} \quad \text{s.t.} \quad h(z_l) = \mathrm{ReLU}\big((z_l - b_{\mathrm{dec}})\, W^l_{\mathrm{enc}} + b_{\mathrm{enc}}\big)$$

where $W^l_{\mathrm{enc}}, W^l_{\mathrm{dec}}$ are the SAE encoder and decoder modules, and $b_{\mathrm{enc}}, b_{\mathrm{dec}}$ are bias values. Our primary interest lies in the sparse latents $h(z_l) \in \mathbb{R}^m$ learned by the SAE encoder, which were empirically found to capture monosemantic and interpretable properties of model inputs.

Contrastive prompt setup Given a set of paragraphs $\mathcal{D}$ for a novel in the PAR3 dataset, each instance in it is a tuple:

$$\mathcal{D} = \{\langle s, \mathrm{H1}, \mathrm{H2}, \mathrm{MT}_{\mathrm{model}} \rangle\}$$

with $s$ being the non-English source sentence, H1 and H2 translations from two distinct human translators, and $\mathrm{MT}_{\mathrm{model}}$ the machine translation from the model under evaluation. Similar to the previous methods, we employ a contrastive approach to extract SAE latents that are most active in the presence of the desired personalization style, while controlling for features capturing generic properties of the task. We define two sets of contrastive prompts:

$$\mathcal{D}^+ = \{\langle s, e^+ \rangle\} \quad \text{and} \quad \mathcal{D}^- = \{\langle s, e^- \rangle\}$$

capturing, respectively, the personalized style of interest and baseline properties of the task. Similarly to the ZS-Exp setup from Section 4.1, we explore two $\mathcal{D}^-$ configurations, using either $e^- = \mathrm{MT}$ (SAE Cont. HT) or $e^- = \mathrm{H2}$ (or H1, if H2 is the personalization target) to assess the effect of baseline choice on steering effectiveness.

Feature extraction First, we gather activations $z^+_l$ and $z^-_l$ by prompting the model with inputs from the two contrastive sets $\mathcal{D}^+$ and $\mathcal{D}^-$. Activations are extracted at the last prompt token position from the most informative layer, as identified in Section 3.3. Activations are then converted into sparse latent representations $x^+ = h(z^+)$ and $x^- = h(z^-)$, with $x^+, x^- \in \mathbb{R}^m$, by the SAE encoder. This procedure is repeated across 20 contrastive examples, resulting in two collections of SAE latent vectors for positive/negative examples:

$$X^+ = \{x^+_1, x^+_2, \ldots, x^+_{20}\} \qquad X^- = \{x^-_1, x^-_2, \ldots, x^-_{20}\}$$
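The encoder defined above maps an activation vector to its sparse latents in one line. A direct transcription of the formula, assuming pre-loaded SAE weight tensors:

```python
import torch

def sae_encode(z: torch.Tensor, W_enc: torch.Tensor, b_enc: torch.Tensor,
               b_dec: torch.Tensor) -> torch.Tensor:
    """Sparse latents h(z) = ReLU((z - b_dec) W_enc + b_enc), matching the
    SAE definition above. Shapes: z (d,), W_enc (d, m), output (m,)."""
    return torch.relu((z - b_dec) @ W_enc + b_enc)
```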
Relevance-based Feature Selection To identify discriminative features for personalization within the large set of latents, we employ an information-theoretic approach adapted from Zhao et al. (2025). For each input, we identify the subset of size $n < m$ including only the active SAE features, i.e., latent dimensions whose logit is $> 0$. We consider logit values in this subset as instances of a random variable $X_i \in x$, and calculate the mutual information $I(X_i, Y)$ between each feature $X_i$ and the binary target variable $Y = \{+, -\}$ corresponding to the style of the provided examples (personalized or non-personalized). A higher $I(X_i, Y)$ indicates that the $i$-th feature is more informative for discriminating between personalized and default inputs, and can hence be used for steering. A representative sample of 40 latents showing the highest mutual information scores for both personalized ($\{X_i\}^+$) and non-personalized ($\{X_i\}^-$) examples is selected using this procedure.8 For every selected latent, we compute its expected logit when personalization is present or absent in the provided examples, i.e., $\mathbb{E}^+[X_i]$ and $\mathbb{E}^-[X_i]$.
8 By contrast, traditional SAE-based steering methods only employ features associated with the positive class (Chalnev et al., 2024; Arditi et al., 2024).

Inference-time intervention Finally, activations are steered by setting selected latents to their expected value whenever their observed score is below (for the promoted personalized case) or above (for the demoted non-personalized case) the precomputed average.9 Hence, in the SAE Cont. HT setting we enhance the features relevant to a target personalized style, e.g., $\{X_i\}^{\mathrm{H1}}$ for H1, and suppress the features $\{X_i\}^{\mathrm{MT}}$ corresponding to the model's default MT. In SAE Cont. PT, instead, we promote the same H1-related latents while suppressing $\{X_i\}^{\mathrm{H2}}$ to steer the model towards H1's personal style. Additionally, we modulate the magnitude of the resulting vector with an α coefficient, which was found to play an essential role in steering effectiveness in previous research (Scalena et al., 2024; Ferrando et al., 2025).
9 Algorithm 1 provides a summary of our SAE-based steering approach.
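The selection and intervention steps can be sketched as follows. The mutual-information ranking uses scikit-learn's estimator; the `steer_latents` function is one plausible reading of the intervention rule (move selected latents toward their expected values and scale the shift by α), not a reproduction of the paper's Algorithm 1, which is not included in this excerpt.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_latents(X_pos: np.ndarray, X_neg: np.ndarray, k: int = 40):
    """Rank SAE latents by mutual information with the personalized (1) /
    default (0) label; return top-k indices and their expected logits."""
    X = np.vstack([X_pos, X_neg])                      # (2 * n_examples, m)
    y = np.array([1] * len(X_pos) + [0] * len(X_neg))
    mi = mutual_info_classif(X, y)
    idx = np.argsort(mi)[-k:]                          # k most informative latents
    return idx, X_pos[:, idx].mean(axis=0), X_neg[:, idx].mean(axis=0)

def steer_latents(h: np.ndarray, pos_idx: np.ndarray, e_pos: np.ndarray,
                  neg_idx: np.ndarray, e_neg: np.ndarray,
                  alpha: float = 5.0) -> np.ndarray:
    """Promote under-expressed style latents toward E+[X] and demote
    over-expressed default latents toward E-[X], scaled by alpha."""
    target = h.copy()
    low = h[pos_idx] < e_pos                 # promoted latents below expectation
    target[pos_idx[low]] = e_pos[low]
    high = h[neg_idx] > e_neg                # demoted latents above expectation
    target[neg_idx[high]] = e_neg[high]
    return h + alpha * (target - h)          # alpha modulates the intervention
```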
5 Experiments

5.1 Setup
Model selection We evaluate our methods on the same three models used for our preliminary evaluation of Section 3. Our selection is guided by the availability of open-source pre-trained SAEs, which can otherwise be computationally expensive to train. For Gemma models, we employ SAEs from the GemmaScope suite (Lieberum et al., 2024); for the Llama 3.1 model, we employ the SAE released by McGrath et al. (2024). GemmaScope SAEs are available for every model layer, enabling us to steer Gemma models on their most informative layers for the task, which we identified in Section 3.3. On the contrary, a single SAE, for the 19th layer, is available for Llama, limiting our evaluation of SAE steering and potentially producing sub-optimal steering results for that model.

Metrics We evaluate our approaches on a held-out test set sourced from the PAR3 dataset for personalization and output quality. For personalization, we use the classifiers described in Section 3.1. We define three submetrics employing the classifier probability distribution over the three classes (MT, H1, H2) to better analyze different aspects of classifier predictions. First, we compute H accuracy as the classifier's total probability assigned to human-like translations, p(H1) + p(H2), thereby measuring the generic human-like style of the text. To measure personalization, we employ the personalization score P, corresponding only to the human translation currently selected as target (H1 or H2). Finally, the more stringent P FLIP metric measures the proportion of examples for which the applied conditioning procedure (either prompting or steering) causally influences the resulting classifier prediction, identifying examples for which the label flips from MT to the desired target. To ensure that our interventions do not result in a degradation of overall translation quality, we also employ Comet10 (Rei et al., 2020) using the personalized translation as reference.
10 Unbabel/wmt22-comet-da

The α trade-off We begin by verifying the optimal steering intensity α for our SAE steering technique. We primarily focus on results from Gemma 2 2B, for which we ran a comprehensive sweep over all relevant hyperparameters.11 Figure 3 illustrates the influence of α on MT personalization accuracy and fluency, averaged across all translators for all tested languages. For values of α ≤ 3, performance remains close to that of the MS baseline, indicating that the contrastive method is effectively isolating latents associated with human-like style. As α increases, performance generally exceeds the MS approach, achieving greater control and flexibility in guiding the model's output with next to no impact on translation quality. However, for α ≥ 10, we observe a major degradation in Comet, indicating an important drop in translation fluency. Following Ferrando et al. (2025), who also employ SAEs for steering, we experiment with very high alpha values (up to 150), finding the classifier's H accuracy approaching 100% for some languages. While this indicates that contrastive steering is aggressively optimizing toward classifier preferences (Figure 4), the consequent drop in Comet scores reveals a steep decline in translation quality, often resulting in incoherent or nonsensical generations from a human perspective.12
11 Larger models were evaluated using a subset of the best-performing configurations. Details in Appendix C.
12 A qualitative evaluation is provided in Appendix D.

Figure 3: Personalization P and Comet across various steering intensities α for SAE Cont. HT on Gemma 2 2B. The performance of prompting baselines (ZS, MS, Exp) is also reported. Results show a trade-off between steering intensity and translation quality.
Figure 4: Comet and H accuracy across α steering intensity values for Gemma 2 2B, showing a major drop in translation quality for very high intensities (α ≥ 50).

Ultimately, we identify α = 5 as an appropriate steering intensity to balance personalization and fluency, and employ it for our main evaluation.
5.2 Results and Discussion

Table 3 presents the performance of the tested models across prompting and steering setups, averaged across all languages and personalization targets (H1 and H2 for each language). We find that our SAE Cont. HT and SAE Cont. PT methods generally achieve the best trade-off between personalization accuracy and translation quality, especially for the smaller Gemma 2 2B model. This could be due to the larger models' superior ability to incorporate in-context information naturally, reducing the relative benefit of explicit steering.

Which contrastive setup is better? Comparing the two contrastive setups (HT and PT) for the ZS-Exp and SAE Cont. methods, we find that using different human demonstrations as a contrastive baseline in PT generally produces better results for larger models. As with general performance, we conjecture this could be due to the larger models' improved ability to disentangle personalization-critical factors without explicit guidance. For the smaller Gemma 2 2B, the difference between the two approaches is minimal, suggesting the model cannot fully exploit the examples' differences.

Figure 4: Comet and H accuracy across α steering intensity values for Gemma 2 2B, showing a major drop in translation quality for very high intensities (α ≥ 50).

Do SAE Steering and MS Prompting Impact Activations in a Similar Way? Since SAE-based approaches perform on par with or better than MS, we set out to investigate whether the two methods have a similar impact on model representations. We collect the modified activations z_steer obtained from the SAE Cont. HT steering setting and evaluate them using the probing classifier trained on MS-conditioned activations, introduced in Section 3.3 for detecting personalization information. Table 4 shows probe accuracy in detecting the positive impact of SAE steering across the three possible outcomes of the steering procedure. We find that the probe corresponding to the SAE layer effectively distinguishes between activations corresponding to successful and unsuccessful SAE steering, despite having been exposed only to MS conditioning during training. This includes both instances for which the classifier prediction is flipped after steering (MT → H*) and settings where the conditioning fails (MT → MT). In settings where the original output already matches human style (H* → H*), the probe obtains lower accuracy with broader confidence intervals, denoting higher uncertainty. These findings suggest that the SAE latents we extract through our contrastive method are meaningfully connected to the stylistic patterns embedded in the multi-shot examples, providing evidence that our intervention influences the internal representations of the model, aligning them with the natural effect of the MS approach.

                MT → H*       MT → MT       H* → H*
Gemma 2 2B      0.94 ± 0.01   0.07 ± 0.02   0.72 ± 0.15
Gemma 2 9B      0.93 ± 0.02   0.12 ± 0.10   0.68 ± 0.19

Table 4: Probing accuracy on SAE Cont. HT-steered activations, averaged across languages and the H1/H2 translators. Probes trained on MS activations reliably detect the impact of SAE Cont. steering on model generations, suggesting that the two methods act through similar mechanisms.
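A minimal sketch of this transfer evaluation is given below, using a logistic-regression probe as a stand-in for the probing classifier; the probe family, feature shapes, and the random placeholder data (used only so the sketch runs end to end) are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit the probe on MS-conditioned activations only ...
rng = np.random.default_rng(0)
X_ms = rng.normal(size=(200, 16))        # residual-stream activations under MS
y_ms = rng.integers(0, 2, size=200)      # 1 = generation classified human-like
probe = LogisticRegression(max_iter=1000).fit(X_ms, y_ms)

# ... then test whether it recognizes SAE-steered activations it never saw.
X_steer = rng.normal(size=(50, 16))      # z_steer activations from SAE Cont. HT
y_steer = rng.integers(0, 2, size=50)    # steering outcome labels
print("transfer accuracy:", probe.score(X_steer, y_steer))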
6 Conclusion and Future Work

We conducted a broad evaluation of various prompting and steering approaches for personalizing LLM-generated translations. Our evaluation targets a practical, real-world application of literary translation and addresses the underexplored challenge of steering LLM generations in a linguistically rich and stylistically sensitive domain. Through comprehensive evaluation across multiple languages, novels, and models, we demonstrate that our proposed SAE-based approach outperforms prompting and alternative steering techniques.

Although faithfully replicating individual human translation styles remains a highly challenging task, our approach achieves strong alignment with human translation quality, as reflected in both general human-likeness and translator-specific personalization metrics. These results highlight the method's robustness and its potential to support high-fidelity translation workflows in real-world settings. Concretely, these results have important implications for the development of personalized MT systems based on LLMs. In particular, the notable effectiveness of our proposed approach on smaller models might enable MT personalization in lower-resource settings, easing further research on how personalization information is encoded and produced by language models.

Future work will focus on improving the interpretability of the learned SAE latents, to uncover their learned monosemantic features. Additionally, we plan to explore the integration of our method with larger language models, where increased capacity may further enhance the precision and fluency of personalized translations.

Limitations

While our work demonstrates the potential of steering LLMs for MT personalization using sparse autoencoders, we acknowledge several limitations.

Firstly, our findings' generalizability is constrained by the scope of our experiments. We focused on literary translation into English from seven specific source languages and evaluated three LLMs of relatively small size. Consequently, the observed effectiveness of SAE-based steering and the identified optimal layers for intervention may not directly transfer to other language pairs, significantly different model architectures or sizes, or distinct domains beyond literary texts. Further research is needed to assess the robustness of our approach across a broader range of linguistic and modeling contexts.

Secondly, the computational overhead associated with sparse autoencoders presents a practical challenge. Although we utilized pre-trained SAEs in our study, the initial training of these components is resource-intensive. This could limit the accessibility and scalability of our proposed method, particularly for researchers or practitioners with limited computational resources, or when frequent retraining for new models or tasks is required. The current availability of pre-trained SAEs also restricts model choice, as seen with the Llama 3.1 8B model, where an SAE was only available for a potentially sub-optimal layer.

Finally, our investigation primarily focused on downstream performance and the impact of various personalization strategies on model representations. However, we did not pursue a mechanistic understanding of the "personalization circuits" within the LLMs. Future work could adopt a more fine-grained, mechanistic interpretability approach to study how specific SAE latents or combinations thereof encode and manipulate nuanced stylistic features, thereby providing deeper insights into the underlying processes of LLM personalization.

References

Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. 2024. Refusal in language models is mediated by a single direction. In Advances in Neural Information Processing Systems, volume 37, pages 136037–136083. Curran Associates, Inc.

Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207–219.

Nathaniel Berger, Stefan Riezler, Miriam Exel, and Matthias Huck. 2024. Post-edits are preferences too.
In Proceedings of the Ninth Conference on Machine Translation, pages 1289–1300, Miami, Florida, USA. Association for Computational Linguistics.

Eleftheria Briakou, Jiaming Luo, Colin Cherry, and Markus Freitag. 2024. Translating
step-by-step: Decomposing the translation process for improved translation quality of long-form texts. In Proceedings of the Ninth Conference on Machine Translation, pages 1301–1317, Miami, Florida, USA. Association for Computational Linguistics.

Sviatoslav Chalnev, Matthew Siu, and Arthur Conmy. 2024. Improving steering vectors by targeting sparse autoencoder features.

David Chanin, Anthony Hunter, and Oana-Maria Camburu. 2024. Identifying linear relational concepts in large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1524–1535, Mexico City, Mexico. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. 2023. Sparse autoencoders find highly interpretable features in language models.

Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan, and Neel Nanda. 2025. Do I know this entity? Knowledge awareness and hallucinations in language models.

Jaden Fiotto-Kaufman, Alexander R Loftus, Eric Todd, Jannik Brinkmann, Caden Juang, Koyena Pal, Can Rager, Aaron Mueller, Samuel Marks, Arnab Sen Sharma, Francesca Lucchetti, Michael Ripa, Adam Belfki, Nikhil Prakash, Sumeet Multani, Carla Brodley, Arjun Guha, Jonathan Bell, Byron Wallace, and David Bau. 2024. NNsight and NDIF: Democratizing access to foundation model internals.

Lucie Flekova, Jordan Carpenter, Salvatore Giorgi, Lyle Ungar, and Daniel Preoţiuc-Pietro. 2016. Analyzing biases in human perception of user age and gender from text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 843–854, Berlin, Germany. Association for Computational Linguistics.

Xavier Garcia and Orhan Firat. 2022. Using natural language prompts for machine translation.

Ana Guerberof-Arenas and Antonio Toral. 2022. Creativity in translation. Translation Spaces, 11(2):184–212.

Roee Hendel, Mor Geva, and Amir Globerson. 2023. In-context learning creates task vectors. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9318–9333, Singapore. Association for Computational Linguistics.

Marzena Karpinska and Mohit Iyyer. 2023. Large language models effectively leverage document-level context for literary translation, but critical errors persist. In Proceedings of the Eighth Conference on Machine Translation, pages 419–451, Singapore. Association for Computational Linguistics.

Jihyeon Lee, Taehee Kim, Yunwon Tae, Cheonbok Park, and Jaegul Choo. 2023. PePe: Personalized post-editing model utilizing user-generated post-edits. In Findings of the Association for Computational Linguistics: EACL 2023, pages 239–253, Dubrovnik, Croatia. Association for Computational Linguistics.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023. Inference-time intervention: Eliciting truthful answers from a language model. In Advances in Neural Information Processing Systems, volume 36, pages 41451–41530.
Curran Associates, Inc.

Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, Janos Kramar, Anca Dragan, Rohin Shah, and Neel Nanda. 2024. Gemma Scope: Open sparse autoencoders everywhere all
at once on Gemma 2. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 278–300, Miami, Florida, US. Association for Computational Linguistics.

Huan Lin, Liang Yao, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Degen Huang, and Jinsong Su. 2021. Towards user-driven neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4008–4018, Online. Association for Computational Linguistics.

Xiaoming Liu, Zhaohan Zhang, Yichen Wang, Hang Pu, Yu Lan, and Chao Shen. 2023. CoCo: Coherence-enhanced machine-generated text detection under low resource with contrastive learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16167–16188, Singapore. Association for Computational Linguistics.

Samuel Marks, Can Rager, Eric J Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. 2025. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. In The Thirteenth International Conference on Learning Representations.

Evgeny Matusov. 2019. The challenges of using neural machine translation for literature. In Proceedings of the Qualities of Literary Machine Translation, pages 10–19, Dublin, Ireland. European Association for Machine Translation.

Thomas McGrath, Daniel Balsam, Myra Deng, and Eric Ho. 2024. Understanding and steering Llama 3 with sparse autoencoders.

Paul Michel and Graham Neubig. 2018. Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 312–318, Melbourne, Australia. Association for Computational Linguistics.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics.

Joss Moorkens, Antonio Toral, Sheila Castilho, and Andy Way. 2018. Translators' perceptions of literary post-editing using statistical and neural machine translation. Translation Spaces, 7(2):240–262.

Maria Nadejde, Anna Currey, Benjamin Hsu, Xing Niu, Marcello Federico, and Georgiana Dinu. 2022. CoCoA-MT: A dataset and benchmark for contrastive controlled MT with application to formality. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 616–632, Seattle, United States. Association for Computational Linguistics.

Xing Niu, Marianna Martindale, and Marine Carpuat. 2017. A study of style in machine translation: Controlling the formality of machine translation output. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2814–2819, Copenhagen, Denmark. Association for Computational Linguistics.

OpenAI. 2024. GPT-4 technical report.

Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074–1084, Valencia, Spain. Association for Computational Linguistics.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods
in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.

Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Turner. 2024. Steering Llama 2 via contrastive activation addition. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15504–15522, Bangkok, Thailand. Association for Computational Linguistics.

Gabriele Sarti, Phu Mon Htut, Xing Niu, Benjamin Hsu, Anna Currey, Georgiana Dinu, and Maria Nadejde. 2023. RAMP: Retrieval and attribute-marking enhanced prompting for attribute-controlled translation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1476–1490, Toronto, Canada. Association for Computational Linguistics.

Danielle Saunders and Bill Byrne. 2020. Reducing gender bias in neural machine translation as a domain adaptation problem. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7724–7736, Online. Association for Computational Linguistics.

Daniel Scalena, Gabriele Sarti, and Malvina Nissim. 2024. Multi-property steering of large language models with dynamic activation composition. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 577–603, Miami, Florida, US. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35–40, San Diego, California. Association for Computational Linguistics.

Gemma Team. 2024a. Gemma 2: Improving open language models at a practical size.

Llama Team. 2024b. The Llama 3 herd of models.

Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, C. Daniel Freeman, Theodore R. Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. 2024. Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet. Transformer Circuits Thread.

Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, and Mohit Iyyer. 2022. Exploring document-level literary machine translation with parallel paragraphs from world literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9882–9902, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Eric Todd, Millicent L. Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, and David Bau. 2024. Function vectors in large language models. In Proceedings of the 2024 International Conference on Learning Representations. ArXiv:2310.15213.

Antonio Toral and Andy Way. 2015. Translating literary text between related languages using SMT. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature, pages 123–132, Denver, Colorado, USA. Association for Computational Linguistics.

Antonio Toral and Andy Way. 2018. What Level of Quality Can Neural Machine Translation Attain on Literary Text?, pages 263–287. Springer International Publishing, Cham.
Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, and Monte MacDiarmid. 2024. Steering language models with activation engineering.

Eva Vanmassenhove,
Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics.

Rob Voigt and Dan Jurafsky. 2012. Towards a literary machine translation: The role of referential cohesion. In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature, pages 18–25, Montréal, Canada. Association for Computational Linguistics.

Longyue Wang, Siyou Liu, Chenyang Lyu, Wenxiang Jiao, Xing Wang, Jiahao Xu, Zhaopeng Tu, Yan Gu, Weiyu Chen, Minghao Wu, Liting Zhou, Philipp Koehn, Andy Way, and Yulin Yuan. 2024a. Findings of the WMT 2024 shared task on discourse-level literary translation. In Proceedings of the Ninth Conference on Machine Translation, pages 699–700, Miami, Florida, USA. Association for Computational Linguistics.

Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023a. Document-level machine translation with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16646–16661, Singapore. Association for Computational Linguistics.

Longyue Wang, Zhaopeng Tu, Yan Gu, Siyou Liu, Dian Yu, Qingsong Ma, Chenyang Lyu, Liting Zhou, Chao-Hong Liu, Yufeng Ma, Weiyu Chen, Yvette Graham, Bonnie Webber, Philipp Koehn, Andy Way, Yulin Yuan, and Shuming Shi. 2023b. Findings of the WMT 2023 shared task on discourse-level literary translation: A fresh orb in the cosmos of LLMs. In Proceedings of the Eighth Conference on Machine Translation, pages 55–67, Singapore. Association for Computational Linguistics.

Yue Wang, Cuong Hoang, and Marcello Federico. 2021. Towards modeling the style of translators in neural machine translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1193–1199, Online. Association for Computational Linguistics.

Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Osama Mohammed Afzal, Tarek Mahmoud, Giovanni Puccetti, and Thomas Arnold. 2024b. SemEval-2024 task 8: Multidomain, multimodel and multilingual machine-generated text detection. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 2057–2079, Mexico City, Mexico. Association for Computational Linguistics.

Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Osama Mohammed Afzal, Tarek Mahmoud, Giovanni Puccetti, Thomas Arnold, Alham Aji, Nizar Habash, Iryna Gurevych, and Preslav Nakov. 2024c. M4GT-bench: Evaluation benchmark for black-box machine-generated text detection. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3964–3992, Bangkok, Thailand. Association for Computational Linguistics.

Minghao Wu, Jiahao Xu, Yulin Yuan, Gholamreza Haffari, Longyue Wang, Weihua Luo, and Kaifu Zhang. 2025a. (Perhaps) beyond human translation: Harnessing multi-agent collaboration for translating ultra-long literary texts.

Zhengxuan Wu, Aryaman Arora, Atticus Geiger, Zheng Wang, Jing Huang, Dan Jurafsky, Christopher D. Manning, and Christopher Potts. 2025b. AxBench: Steering LLMs? Even simple baselines outperform sparse autoencoders.
Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D Manning, and Christopher Potts. 2024. ReFT: Representation finetuning for language models. In
The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Wu Youyou, Michal Kosinski, and David Stillwell. 2015. Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4):1036–1040.

Zeyu Yun, Yubei Chen, Bruno Olshausen, and Yann LeCun. 2021. Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 1–10, Online. Association for Computational Linguistics.

Peng Zhang, Zhengqing Guan, Baoxi Liu, Xianghua (Sharon) Ding, Tun Lu, Hansu Gu, and Ning Gu. 2022. Building user-oriented personalized machine translator based on user-generated textual content. Proc. ACM Hum.-Comput. Interact., 6(CSCW2).

Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, and Pasquale Minervini. 2025. Steering knowledge selection behaviours in LLMs via SAE-based representation engineering. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5117–5136, Albuquerque, New Mexico. Association for Computational Linguistics.

Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks. 2025. Representation engineering: A top-down approach to AI transparency.

A List of novels used

To ensure a diverse and representative evaluation, we select novels spanning a variety of linguistic families and cultural backgrounds. Our dataset includes Romance languages such as Italian (Pinocchio) and French (Around the World in Eighty Days), as well as Germanic languages like Dutch (The Diary of a Young Girl) and German (Beware of Pity). To evaluate our setup on non-Latin scripts and distinct linguistic structures, we also include Russian (Crime and Punishment), Japanese (No Longer Human), and Chinese (Dream of the Red Chamber). Table 5 summarizes the number of paragraphs employed in the evaluation of each language.

B Experiment reproducibility

In this section, we provide every parameter needed to reproduce our experimental setups.

B.1 Base prompt

We use the same prompt template across all methods: ZS (which corresponds to the original model translation), ZS-Exp HT, ZS-Exp PT (detailed in Appendix B.5), MS, ActAdd, ReFT, and the SAE-based contrastive setups. This prompt, shown in Listing 3, instructs the model to translate the source sentence while explicitly preventing it from adding any explanations about the translation process. Since all tested models are instruction-tuned, we utilize their native chat templates to preprocess the input accordingly. For multi-shot examples, the user and assistant turns are repeated for each example, always using the same prompt structure.

B.2 Classifier training

All classifiers are fine-tuned from the xlm-roberta-large model (FacebookAI/xlm-roberta-large), using a linear classification head. Training is conducted for 6 epochs with a learning rate of 2e-5 and a batch size of 32, selecting the best model checkpoint based on validation accuracy.
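A minimal fine-tuning sketch with these reported hyperparameters is shown below, using the Hugging Face transformers Trainer; train_ds and val_ds stand for tokenized paragraph datasets and are assumptions here, as is the output directory name (argument names may differ slightly across transformers versions).

import numpy as np
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# 3-way (MT / H1 / H2) style classifier, following the setup described above.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "FacebookAI/xlm-roberta-large", num_labels=3)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, -1) == labels).mean())}

args = TrainingArguments(
    output_dir="style-classifier",       # hypothetical output path
    num_train_epochs=6,                  # hyperparameters as reported in B.2
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,         # best checkpoint by validation accuracy
    metric_for_best_model="accuracy",
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=val_ds, compute_metrics=compute_metrics)
trainer.train()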
The training data only includes generations from the models and the translators, without any source text. It is also perfectly balanced, as each paragraph provides one instance for each of the three labels: H1, H2, and MT. The total size of the training set varies depending on the number of paragraphs in the chosen novel. On average, we obtain approximately 830 instances per novel, resulting in a total of around 2,490 labeled examples for training (see Table 5). Validation and test sets are strictly held out and never seen during training. Additionally, they do not include the small 20-example subsets used in the MS, ZS-Exp HT, ZS-Exp PT, SAE Cont. HT, and SAE Cont. PT setups.

B.3 ReFT training

ReFT training was conducted using the PyReFT toolkit from the original authors (stanfordnlp/pyreft). We applied the intervention at the same hook point used by the other steering methods, specifically the layer output corresponding to the residual stream at the selected layer. The training configuration uses a low-rank dimension of 4, a LoRA alpha of 32, and a LoRA dropout of 0.05. ReFT was trained on the same 20 prompts used in the MS setup, for a total of 100 epochs.

B.4 SAE Cont. HT and SAE Cont. PT

We use the NNsight library (Fiotto-Kaufman et al., 2024) to extract and manipulate model activations for all steering experiments. The source code is publicly available in the repository linked in the main body of this paper. For consistency, we use the same set of contrastive examples employed in the MS approach.

Algorithm 1 outlines the procedure for latent-based steering. It enhances features identified as relevant to personalization while simultaneously suppressing those negatively correlated with the task.

Algorithm 1: Contrastive SAE Steering
Input: input activation z, SAE model sae, expected values of the target latents E+[X_i], expected values of the contrast latents E−[X_i], steering coefficient α
Output: steered activation z_steer

    x ← sae.encode(z)
    for i ← 1 to length(x) do
        if E+[X_i] > x[i] then x[i] ← E+[X_i]    # promote target-style latents
        if E−[X_i] < x[i] then x[i] ← E−[X_i]    # suppress contrast latents
    z_steer ← α · sae.decode(x)
    return z_steer
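The paper applies this intervention through NNsight; as a library-agnostic illustration, the same edit can be expressed with a plain PyTorch forward hook on the chosen residual-stream layer. The layer index, the sae object, and the padding convention for non-selected latents below are assumptions for illustration, not the released implementation.

import torch

def make_steering_hook(sae, e_plus, e_minus, alpha=5.0):
    # e_plus / e_minus hold the precomputed E+[X_i] and E-[X_i]; we assume
    # non-selected positions are filled with -inf (in e_plus) and +inf
    # (in e_minus) so that only the selected latents are modified.
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        x = sae.encode(hidden)              # to the SAE latent space
        x = torch.maximum(x, e_plus)        # promote target-style latents
        x = torch.minimum(x, e_minus)       # suppress contrast latents
        steered = alpha * sae.decode(x)     # scale by the steering coefficient
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Hypothetical usage on a HuggingFace-style model at an informative layer:
# handle = model.model.layers[12].register_forward_hook(
#     make_steering_hook(sae, e_plus, e_minus, alpha=5.0))
# ... run generation ...
# handle.remove()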
B.5 ZS-Exp HT and ZS-Exp PT

For both the ZS-Exp HT and ZS-Exp PT setups, we used GPT-4o (June 2025) to generate explanations detailing the stylistic differences between a base translation and a target human translation. The prompt template used for this task is shown in Listing 1, using the same 20 examples as in the MS, SAE Cont. HT, and SAE Cont. PT setups. All outputs were manually inspected to ensure that no verbatim excerpts from the provided examples were present, avoiding any risk of data leakage. Example outputs for different novels are shown in Listing 2. Finally, these generated guidelines are used to prompt the evaluated models, following the template shown in Listing 3.

Lang  Novel name                         Train  Val  Test  ICL
IT    Pinocchio                          745    82   107   20
FR    Around the World in Eighty Days    829    92   120   20
NL    The Diary of a Young Girl          769    85   110   20
DE    Beware of Pity                     606    67   96    20
RU    Crime and Punishment               1517   168  224   20
JA    No Longer Human                    652    40   81    20
ZH    Dream of the Red Chamber           694    76   92    20

Table 5: Each number corresponds to a single instance in the dataset. When the dataset is used for training, each instance is associated with three distinct labels (H1, H2, and MT), so the total number must be multiplied by three.

Lang.  Gemma 2 2B  Gemma 2 9B  Llama 3.1 8B
DE     0.89        0.90        0.84
RU     0.92        0.90        0.91
ZH     0.99        0.98        0.98
IT     0.78        0.85        0.80
NL     0.79        0.78        0.82
FR     0.88        0.87        0.90
JA     0.76        0.79        0.76

Table 6: Accuracy of model- and language-specific 3-way (MT, H1, H2) classifiers on balanced held-out sets for every language. Random baseline: 0.33.

C All models results

C.1 Classifiers

Table 6 reports the accuracy of every classifier trained for each model and each language.

C.2 Probing experiments

We report in Figure 5 the probing classifier performance for Gemma 2 9B. We select the 21st layer as the most informative one for our intervention. Results for the smaller Gemma 2 2B model are shown in the main body of the paper in Figure 2.

C.3 Prompting and steering results

We present detailed plots of the results for each novel across the three evaluated models in Figure 6 (Gemma 2 2B), Figure 7 (Gemma 2 9B), and Figure 8 (Llama 3.1 8B). These plots display the performance of all evaluated methods, reporting the three submetrics, H accuracy (general human-likeness), P accuracy (translator-specific accuracy), and P_FLIP (personalized flip accuracy), alongside the corresponding Comet scores measuring translation quality.

D Examples from the dataset and different approaches

We present in Tables 7 and 8 a selection of examples from two different languages, showcasing outputs from each of the tested setups. For each example, we also report the classification label predicted by the classifier and the associated Comet score. Additionally, Table 9 shows examples of models generating output that is aligned with the human translator according to the classifier, but with a low Comet score corresponding to almost unreadable output, caused by extreme α values.

Table 7 (ZH → EN; each row lists the setup, its output, the predicted classifier label, and the Comet score):

Source (ZH): 遂又往後看,只著一弓,弓上著香。也有一首歌云:「二十年辨是非,榴花照。三春及初春景,虎兕相逢大。」

H1 [classifier: H1, Comet: –]: Forthwith, therefore, he prosecuted a further perusal of what came next, when he caught sight of a picture of a bow. On this bow hung a citron. There was also this ode: Full twenty years right and wrong to expound will be thy fate! What place pomegranate blossoms come in bloom will face the Palace Gate! The third portion of spring, of the first spring in beauty short will fall! When tiger meets with hare thou wilt return to sleep perennial.

H2 [classifier: H2, Comet: –]: The picture that followed was of a bow with a citron hanging from it, followed by what looked like the words of a song: You shall, when twenty years in life's hard school are done, In pomegranate-time to palace halls ascend. Though three springs never could with your first spring compare, When hare meets tiger your great dream shall end.

Gemma 2 2B [classifier: MT, Comet: 0.49]: When he turned back to look, he saw a bow hanging on it, with a plum blossom.
There was also a poem that said: "For twenty years, I have been trying to distinguish right from wrong, and the plum blossoms bloom in the palace. In the spring, the tigers and the rhinoceroses meet, and they dream of returning home."

Gemma 2 9B [classifier: MT, Comet: 0.56]: Looking back, he saw a painting of a bow with a quiver hanging on it. There was also a line of lyrics that said: "For twenty years, I have distinguished right from wrong, and the blossoming of the pomegranate flowers illuminates the imperial court. Three springs have passed, and now it is the beginning of spring again, like a dream where tigers and rhinoceroses meet."

Llama 3.1 8B [classifier: MT, Comet: 0.53]: As I looked back, I saw a painting of a bow with a censer on it. There was also a poem that said: "Twenty years of distinguishing right from wrong, the lotus flower blooms in the palace. Three springs competing with the scenery of the first spring, the tiger and the rhinoceros meet and return to their dream."

ZS-Exp HT [classifier: MT, Comet: 0.60]: And as he turned his gaze back upon the scene once more, there was displayed an archer's bow strung tautly; upon its surface hung a fragrant plum blossom. A poem also recited thus spake: "For twenty years I have sought truth amidst strife, where blooming durian flowers illuminate the palace walls. In spring's embrace, they vie for beauty, tigers clashing in dreams of return."

ZS-Exp PT [classifier: MT, Comet: 0.61]: And as he turned his gaze back upon the scene once more, there was displayed an archer's bow strung tautly; upon its surface hung a fragrant plum blossom. A poem also recited thus: "For twenty years I have sought truth amidst strife, where blooming durian flowers illuminate the palace walls. In spring's embrace, they vie for beauty, tigers clashing in dreams of return."

MS [classifier: H1, Comet: 0.65]: Then, turning around, he noticed a picture depicting a bow hanging on its string, adorned with incense sticks. A verse read: "For twenty years I've tried to discern truth from falsehood, and wherever fragrant plum blossoms bloom, the palace gates shine brightly."

ActAdd [classifier: MT, Comet: 0.53]: When he turned back to look, he saw a bow hanging on it, with a plum blossom. There was also a poem that said: "For twenty years, I have been trying to tell right from wrong, and the plum blossoms bloom in the palace. In spring, the tigers and rhinoceroses meet, and they dream of going back home."

ReFT [classifier: MT, Comet: 0.58]: When he turned back to look, he saw a bow hanging there, adorned with a plum blossom. There was also a poem that said: "For twenty years, I have tried to distinguish right from wrong, and plum blossoms bloom within the palace. In spring, tiger and rhinoceros meet, dreaming of their return home."

SAE Cont. HT, α = 1 [classifier: MT, Comet: 0.62]: When he turned back to look, he saw a bow hung upon it, with a plum blossom. There was also a verse which read: "For twenty years, I have struggled to discern truth from falsehood, and the plums blossomed where the
imperial court resided. During the three springs, the tiger and rhinoceros met, dreaming of their return."

SAE Cont. HT, α = 5 [classifier: MT, Comet: 0.62]: He looked back and there hung a bow adorned with a plum blossom. A verse read: "Twenty years I've struggled for truth, where fragrant plums bloom near the imperial court. Spring arrives, tiger and rhino clash, dreaming of their return."

SAE Cont. HT, α = 150 [classifier: MT, Comet: 0.11]: The when the veryShellTruth lookWarmTheTheTheTigeratPoint [...]

SAE Cont. PT, α = 5 [classifier: H1, Comet: 0.65]: He looked back and saw a bow adorned with fragrant citron. A verse declared: "Twenty years I've weighed right and wrong; where pomegranate blossoms bloom, they light the palace gates. Spring's full bloom cannot match early spring's grace. When tiger meets rhino, one returns to the great dream."

Table 7: Example paragraphs from "Dream of the Red Chamber" (ZH → EN) translated with H1 personalization. Setup outputs are from Gemma 2 2B; only MS and SAE Cont. PT successfully flip the classifier's prediction.

Table 8 (IT → EN; same row format as Table 7):

Source (IT): Intanto la stalla fu chiusa e Pinocchio rimase solo: e perché erano molte ore che non aveva mangiato, cominciò a sbadigliare dal grande appetito. E, sbadigliando, spalancava una bocca che pareva un forno.

H1 [classifier: H1, Comet: –]: When the stable door was closed, Pinocchio was left alone. And since he hadn't eaten in many hours, he began to yawn from hunger. And when he yawned, his mouth opened as wide as an oven.

H2 [classifier: H2, Comet: –]: The stable was then shut and Pinocchio was left alone. He had not eaten for many hours, and he began to yawn from hunger. And when he yawned he opened a mouth that seemed as wide as an oven.

Gemma 2 2B [classifier: MT, Comet: 0.79]: At that moment the stable was closed and Pinocchio was left alone, and because it had been many hours since he had eaten, he began to yawn from a great hunger. And, yawning, he opened a mouth that seemed like a furnace.

Gemma 2 9B [classifier: MT, Comet: 0.80]: Meanwhile, the stable was closed and Pinocchio was left alone: and because it had been many hours since he had eaten, he began to yawn from his great hunger. And, yawning, he opened a mouth that looked like an oven.

Llama 3.1 8B [classifier: MT, Comet: 0.80]: Meanwhile the stable was closed and Pinocchio was left alone: and because it was many hours since he hadn't eaten, he began to yawn from great hunger. And, yawning, he opened a mouth that seemed like an oven.

ZS-Exp HT [classifier: MT, Comet: 0.78]: Meanwhile, the stable door was shut, leaving Pinocchio alone. His tummy rumbled because it had been hours since his last meal, and he started yawning widely from his hunger. With each yawn, his mouth opened like an oven.

ZS-Exp PT [classifier: H2, Comet: 0.80]: Meanwhile, the stable door was shut, leaving Pinocchio all alone. His tummy rumbled because it had been hours since his last meal, and he started yawning widely from sheer hunger. With each yawn, his mouth gaped open like an oven.

MS [classifier: H2, Comet: 0.81]: Meanwhile, the stable door was shut and Pinocchio was left alone. Since it had been hours since he last ate,
he began yawning from hunger. Each yawn opened his mouth wide enough to look like an oven.

ActAdd [classifier: MT, Comet: 0.75]: Meanwhile, the stable was closed and Pinocchio remained alone. Because many hours passed without eating, he started yawning from very big hunger. And while yawning, he opened his mouth wide like an oven.

ReFT [classifier: MT, Comet: 0.77]: Meanwhile, the stable was closed and Pinocchio stayed alone. Because many hours had passed since he last ate, he began to yawn from big hunger. And, yawning, he opened a mouth that looked like an oven.

SAE Cont. HT, α = 1 [classifier: MT, Comet: 0.79]: Meanwhile, the stable door was shut and Pinocchio was left alone. Because it had been many hours since he had eaten, he started to yawn with intense hunger. And, as he yawned, his mouth gaped open like an oven.

SAE Cont. HT, α = 5 [classifier: H2, Comet: 0.82]: Meanwhile, the stable door shut, leaving Pinocchio all alone. Because it had been many hours since he'd eaten, he started to yawn with intense hunger. As he yawned, his mouth gaped open, looking like an oven.

SAE Cont. HT, α = 50 [classifier: MT, Comet: 0.14]: He He "It The ... You It "[They They) THE...I I HE IT [There There THEYRE WE "A A FOR "[...]

SAE Cont. PT, α = 5 [classifier: H2, Comet: 0.84]: The stable was then shut, leaving Pinocchio all alone. He had not eaten for many hours and began to yawn from great hunger. As he yawned, he opened his mouth wide, like an oven.

Table 8: Example paragraphs from "Pinocchio" (IT → EN) translated with H2 personalization. Setup outputs are from Gemma 2 9B; ZS-Exp PT, MS, SAE Cont. HT and SAE Cont. PT (both α = 5) can flip the classifier prediction.

Listing 1: Prompt template used to obtain GPT-4o explanations from translation examples.

Objective: Identify stylistic choices in translations for personalization purposes. You will be provided with a source text, a standard translation, and a target translation by a specific translator whose style we want to emulate.

Your task is to analyze the 'Target translation' by comparing it to the 'Base translation' and the 'Source text'. Identify and list the distinctive stylistic patterns, choices, and preferences exhibited in the Target translation. These stylistic cues should help another translator (or an AI) to adapt their translations to match the style of the target translator.

Source text: <source text here>
Base translation: <MT / H− text here>
Target translation: <H+ translation here>
<... repeat Source, MT and Target>

Please extract a concise list of key stylistic cues. Focus on aspects such as vocabulary choices, sentence structure, tone and register, handling of cultural nuances, punctuation/formatting preferences and overall creativity. Output a short list of stylistic cues as bullet points. Write the list as if you were directly giving the guidelines to the translator and avoid using specific examples.

Listing 2: Examples of explanations obtained from GPT-4o when comparing different translations of different novels.

When comparing H1 and MT for Beware of Pity (German):
− Maintain a tone that is professional, thoughtful, and subtly persuasive.
− Avoid overly technical jargon unless necessary; explain specialized terms briefly if used.
− Preserve the
author's voice, keeping a balance between academic rigor and narrative engagement.
− Ensure smooth transitions between sentences and paragraphs to support coherent argumentation.
− Translate idiomatic expressions in a way that retains their intended effect, even if the wording differs.

When comparing H1 and H2 for Beware of Pity (German):
− Keep the tone warm, welcoming, and direct; avoid overly formal or distant language.
− Use active voice wherever possible to maintain energy and engagement.
− Avoid idioms or expressions that may not translate culturally; aim for universal accessibility.
− Maintain consistent tone and register throughout, adapting to the intended audience's familiarity with the subject.
− Respect the rhythm and structure of the original, but feel free to adjust sentence length for readability.

When comparing H1 and H2 for Crime and Punishment (Russian):
− Prefer dynamic over formal vocabulary: opt for vivid or emotionally charged words when available.
− Add tonal nuance and emotional shading: enrich dialogues and narration with subtle shifts in tone, especially sarcasm, understatement, or irony, to match character voice or mood.
− Use contractions and familiar phrasing: employ contractions and relaxed expressions to preserve spoken character.
− Expand or rephrase for clarity and voice: don't hesitate to slightly reword or elaborate if it strengthens tone, clarifies intent, or enhances character differentiation.
− Favor rhythmic, flowing sentence structure: break long, formal sentences into multiple shorter clauses or use punctuation (dashes, ellipses) for dramatic or emotional effect.
− Reflect subtle character dynamics: infuse lines with interpersonal undertones (like defiance, deference, or sarcasm) that may not be explicit in the original.
− Preserve or recreate emotional tension: use word choice and pacing to sustain psychological nuance, unease, or irony.
− Use expressive punctuation and formatting: favor dashes, ellipses, and italic-like emphasis (through word placement) to reflect emotional cadence or interruptions.

Listing 3: Zero-shot prompt template used when prompting language models across the different setups.

Translate the following sentence between the angular parentheses into English.
if setup == ZS-Exp {
Follow the following guidelines when translating: <explanations here>
}
The original sentence is: <source text>.
Remember to write only the translation, without any additional text or explanation.

Table 9 (examples classified as the target human translator under extreme α values; same row format as Table 7):

Source (FR): Cette somme vous sera restituée à votre sortie de prison, dit le juge. En attendant, vous êtes libres sous caution.

H1 [classifier: H1, Comet: –]: This sum of money will be returned to you when you leave prison,' said the judge. 'In the meantime you are free on bail.

ZS [classifier: MT, Comet: 0.79]: This amount will be returned to you upon your release from prison, the judge said. Meanwhile, you are free on bail.

SAE Cont. HT, α = 5 [classifier: MT, Comet: 0.85]: This sum will be repaid to you at your departure from prison, the judge declared. In the meantime, you are released under bond.

SAE Cont. HT, α = 50 [classifier: H1, Comet: 0.20]: #echo all ought to goodness too was put moreover goodness behaving goodness goodness goodness goodness [...]

SAE Cont. HT, α = 150 [classifier: H1, Comet: 0.23]: GURL callers goodness_Noneummy amidála Reportedaps Beaverishlor Decompiled_unset queer headquarters Headquarters twe ... pocket-or callers dt behaving indoorsgetRepository callers_lazy uninsured tipostiFRING callers callers callers callers Guinness_None ... foss bedroomsitud facet goodness goodness
goodness [...]

Source (DE): Nein was Sie lieber haben, Herr Leutnant! Nur keine Zeremonien, es ist doch ganz einerlei.

H2 [classifier: H2, Comet: –]: "No, no - whatever you would rather have, Lieutenant Hofmiller! Please don't stand on ceremony, it makes no difference to us."

ZS [classifier: MT, Comet: 0.76]: No, whatever you prefer, Lieutenant! Just no ceremonies, it doesn't matter.

SAE Cont. HT, α = 5 [classifier: H2, Comet: 0.79]: No, anything at all you want, sir! Just don't make a fuss about it, it really doesn't matter.

SAE Cont. HT, α = 50 [classifier: H2, Comet: 0.46]: ">I Don't worry about that... I don't want a ceremony for this one. It's not important...

SAE Cont. HT, α = 150 [classifier: H2, Comet: 0.24]: IWhenInWhatItDonIf Sometimes AIs Celebrating cerimonies... Sosir please don't have parties ey' [...]

Table 9: Examples from different languages being classified as human when using extreme α values.

Figure 5: Probing classifier performance on the human translation detection task across Gemma 2 9B layers. For our experiments, we select layer 21 as the optimal intervention point for our steering approaches.

Figure 6: Results for every language on Gemma 2 2B.

Figure 7: Results for every language on Gemma 2 9B.

Figure 8: Results for every language on Llama 3.1 8B.
Grounding Chest X-Ray Visual Question Answering with Generated Radiology Reports

Francesco Dalla Serra, Patrick Schrempf, Chaoyang Wang, Zaiqiao Meng, Fani Deligianni, and Alison Q. O'Neil

Francesco Dalla Serra, Patrick Schrempf, Chaoyang Wang, and Alison Q. O'Neil are with Canon Medical Research Europe, Edinburgh, EH65NP, UK (e-mail: francesco.dallaserra@mre.medical.canon). Zaiqiao Meng and Fani Deligianni are with the School of Computing Science, University of Glasgow, Glasgow, G128QQ, UK.

Abstract — We present a novel approach to Chest X-ray (CXR) Visual Question Answering (VQA), addressing both single-image and image-difference questions. Single-image questions focus on abnormalities within a specific CXR ("What abnormalities are seen in image X?"), while image-difference questions compare two longitudinal CXRs acquired at different time points ("What are the differences between image X and Y?"). We further explore how the integration of radiology reports can enhance the performance of VQA models. While previous approaches have demonstrated the utility of radiology reports during the pre-training phase, we extend this idea by showing that the reports can also be leveraged as additional input to improve the VQA model's predicted answers. First, we propose a unified method that handles both types of questions and auto-regressively generates the answers. For single-image questions, the model is provided with a single CXR. For image-difference questions, the model is provided with two CXRs from the same patient, captured at different time points, enabling the model to detect and describe temporal changes. Taking inspiration from 'Chain-of-Thought reasoning', we demonstrate that performance on the CXR VQA task can be improved by grounding the answer generator module with a radiology report predicted for the same CXR. In our approach, the VQA model is divided into two steps: i) Report Generation (RG) and ii) Answer Generation (AG). Our results demonstrate that incorporating predicted radiology reports as evidence for the AG model enhances performance on both single-image and image-difference questions, achieving state-of-the-art results on the Medical-Diff-VQA dataset.

Index Terms — chest x-ray; deep learning; visual question answering; radiology reporting

I. INTRODUCTION

Visual Question Answering (VQA) refers to the task of answering questions about the contents of an image. Multiple potential roles have been suggested for medical VQA, such as providing a 'second opinion' in image interpretation, and acting as an assistant in answering imaging-based queries of clinicians to improve the radiology workflow [22]. The growing volume of medical imaging studies [34], driven in part by an ageing population, continues to place a heavy burden on radiologists. This strain, compounded by institutional limitations and a global shortage of radiologists [32, 2], can lead to diagnostic delays that adversely affect patient care [3]. AI diagnostic assistance has the ability to improve the clinical accuracy of radiologists in CXR reporting [45]; therefore, a complete radiology VQA system that can understand free-form questions and produce reliable answers has attracted a lot of interest.
In this paper, we focus on VQA applied to Chest X-Ray (CXR) radiology. CXR images are 2D projections through various overlapping structures within the thoracic cavity, such as the ribs, heart, and lungs. This makes it challenging to isolate and identify abnormalities in specific structures. Furthermore, the differences in the appearance of normal and abnormal findings are often subtle and can be difficult to detect. The multimodal nature of VQA adds another layer of complexity, as it requires the system to interpret visual information in the context of a textual query. While this presents a powerful tool for advancing medical diagnostics, it also introduces significant challenges.

VQA can be open-ended, requiring the generation of free-form text answers; closed-ended, requiring the generation of short, specific responses; or multiple choice, where the model selects the correct answer from a set of provided options. Additionally, questions may refer to a single image or to comparisons between two images to identify differences. Questions based on a single image (e.g., the closed-ended question "Is there any sign of pneumonia in the given scan?" [Answer: Yes/No]) can facilitate more efficient decision-making by clinical teams, prior to the formal report issued by a radiologist. Questions that compare scans of the same patient taken at different time points (e.g., the open-ended questions "What has changed compared to the prior scan?", "Has the effusion resolved as expected?", "Is the rate of change of the nodule concerning for a malignant lesion?") are essential for monitoring disease progression or treatment response. We refer to the second setting as image-difference question answering. This is particularly relevant in the medical domain, where radiologists often compare scans from different timepoints to assess the progression of findings.

Fig. 1. Overview of the Report Generator–Answer Generator (RG-AG) pipeline: (1) the Report Generator first produces a radiology report based on the given Chest X-ray (or a pair of images in the case of a follow-up study), along with the instruction and the indication field. The report consists of the 'finding' and 'impression' sections, which are generated independently based on the specific instructions received by the RG module. (2) The Answer Generator then utilises this predicted report as additional contextual information to generate a more accurate and interpretable response to the input question. Red tags denote clinical input data, green tags indicate the prompts provided to each module, and yellow tags represent the output generated by each module.

To effectively handle diverse VQA scenarios, we propose a flexible vision-language model that processes dual visual and textual information inputs, tailoring its input configuration based on the specific VQA task we want to perform.
want to perform. Our approach adopts anatomy-finding anatomical tokens [6] and a longitudinal projection module [5]. Moreover, we study the effect of grounding the answer generation module using a radiology report predicted from the same image. Our approach is inspired by Chain-of-Thought (CoT) reasoning, as demon- strated in language-only QA tasks [42, 19], and more recently in vision-language models [47]. In these settings, having the model explicitly generate a reasoning process before producing an answer has been shown to improve inference quality. We propose that providing additional supporting evidence to a VQA model could improve answer prediction. Specifically, we hypothesise that generating a comprehensive description of a CXR’s appearance in the form of a detailed radiology report—based on the same scan the question refers to—and grounding the VQA model with this predicted report could enhance both the accuracy and reliability of the generated answer. Our focus is on generating the two main sections of a radiology report: the ‘findings’ section, which offers a detailed account of the scan’s visual characteristics, and the ‘impression’ section, where radiologists synthesise the findings to form a cohesive clinical interpretation. We further incorporate successful strategies from previous works on the diff-VQA task, namely anatomical feature repre- sentations [13] and a pre-training strategy based on radiology report generation [4]. To the best of our knowledge, this is the first study to examine the impact of grounding radiology reports for medical VQA and to provide evidence of its effectiveness on both single-image and image-difference VQA. In summary, our contributions consist of the following. 1) Propose a unified approach—the Report Genera- tor–Answer Generator (RG-AG) pipeline—for address- ing both single-image and image-difference CXR VQA tasks. 2) Show the effectiveness of grounding the answer gener-ation process using the corresponding radiology report to improve the quality and accuracy of the response. 3) Achieve state-of-the-art performance on the publicly available Medical-Diff-VQA dataset [12, 13], with the most significant improvements observed on image- difference questions and single-image open-ended ques- tions. II. R ELATED WORKS A. Medical Visual Question Answering The limited diversity of answers in medical VQA datasets [23, 20] has often led researchers to approach medical VQA as a classification task [25, 17, 8]. However, treating VQA as a classification task limits the solution to a predetermined set of answers. Generative approaches [30, 33, 36] have been enabled by the availability of open-access datasets [46, 11] and the rise of generative large language models [35, 26]. Treating VQA as a text-generation task naturally yields more detailed and wide- ranging responses expressed in the form of one or multiple sentences. Some authors [30, 33] have proposed medical VQA methods that use two parallel heads, allowing the model to perform either classification or answer generation. These heads are trained separately depending on the answer type, whether closed-ended or open-ended. More recently, in [36] the authors proposed a VLM that integrates a vision encoder with GPT-2 [29], a large language model pre-trained on a general corpus, demonstrating the model’s effectiveness by fine-tuning it on three medical VQA datasets [23, 10, 14]. B. Medical Image-Difference Question Answering Image-difference | https://arxiv.org/abs/2505.16624v1 |
question answering , henceforth referred to asdiff-VQA , is the task where questions refer to the differences between two or more images. This task has received limited attention in both the general [28, 44] and medical [13, 4] domains. In the medical domain, this is primarily due to the DALLA SERRA et al. : GROUNDING CHEST X-RAY VISUAL QUESTION ANSWERING WITH GENERATED RADIOLOGY REPORTS 3 lack of a suitable dataset, until the creation of Medical-Diff- VQA [13]—a CXR VQA dataset designed to include such questions. [13] propose a method for diff-QA that utilises anatomical features and a multi-relationship image-difference graph feature representation learning method to extract image- difference features. To the best of our knowledge, only one other work [4] has addressed diff-VQA. The authors adopt a pre-trained VLM [39] and propose a multi-stage pre-training pipeline (PLURAL), where the VLM is first fine-tuned on the report generation task and subsequently fine-tuned on the VQA task. However, this approach does not leverage the generated radiology reports during the VQA task. C. Grounding CXR-VQA with Radiology Reports To the best of our knowledge, the use of predicted radiology reports to enhance VQA performance has not been explored in the literature. The most relevant work is by [40], who proposed a method that integrates the output of Computer- Aided Diagnosis (CAD) networks with a Large Language Model (LLM) to leverage the LLM’s medical domain knowl- edge and logical reasoning. However, their approach focuses on leveraging LLMs to improve the interactivity of a CAD network. Conversely, we focus on demonstrating how CAD networks can enhance the VQA performance of a VLM. III. M ETHOD We present an overview of our Report Generator -Answer Generator (RG-AG) model in Figure 1. A. Visual Anatomical Tokens Extractor We extract visual CXR features in the form of finding-aware anatomical tokens [6] i.e. vector representations corresponding to a predefined set of anatomical regions in the CXR. To generate tokens, we train a Faster R-CNN model [31] to perform two tasks: (1) anatomical region localisation – detecting the bounding box of N= 36 anatomical regions; and (2)finding detection – determining the presence or absence of 71 findings within each region. Tokens are extracted for each CXR by selecting the bounding box representation with the highest confidence score for each anatomical location and extracting the corresponding feature vectors from the Region of Interest pooling layer of the Faster R-CNN. This results inNvectors V={⃗ vn}N n=1with ⃗ vn∈Rdandd=1024. If an anatomical region iis not detected in a CXR, the corresponding token ⃗ viis a zero vector. For more details about the training and the model architecture, we refer the reader to [6]. These tokens are input to both the RG and VQA modules. B. Vision-Language Model Architecture The RG and AG use the same VLM architecture, com- posed of a Longitudinal Projection Module (LPM) and a Transformer-based Language Model (LM). This architecture has a total of 68M trainable parameters. The VLM is defined as a function fwhich takes visual features ( V) and text ( T) as inputs, and generates output text ( Y):Y=f(V, | https://arxiv.org/abs/2505.16624v1 |
B. Vision-Language Model Architecture

The RG and AG use the same VLM architecture, composed of a Longitudinal Projection Module (LPM) and a Transformer-based Language Model (LM). This architecture has a total of 68M trainable parameters. The VLM is defined as a function $f$ which takes visual features ($V$) and text ($T$) as inputs and generates output text ($Y$):

$$Y = f(V, T), \tag{1}$$

which is achieved in an autoregressive manner. The text inputs and outputs vary based on the specific task (RG or AG). The VLM architecture of the RG and AG models is shown in Figure 2.

Fig. 2. The model architecture of the Report Generator and Answer Generator. The diagram illustrates on the left the Visual Anatomical Token Extractor, responsible for extracting the visual tokens from CXRs; this component is trained independently of the Vision-Language Model. On the right, the Vision-Language Model architecture is responsible for generating the radiology report or the answer. The diagram shows how visual inputs (i.e., anatomical tokens $A_{tok}$) are aligned, concatenated and projected into a joint representation via the Longitudinal Projection Module. This representation is then combined with tokenised text inputs, which the language model processes to generate the target text. For the Report Generator, the input text is an instruction requesting the generation of a specific report section (finding or impression), with the target text being that section. For the Answer Generator, the input text is the concatenation of the question and the predicted report (finding + impression), and the target text is the answer.

1) Longitudinal Projection Module: The LPM is responsible for projecting the current and prior CXR scans into a joint representation. The visual inputs of the LPM are the finding-aware anatomical tokens from the current scan and, for image-difference types of questions, from the prior scan, denoted as $V_c = \{\vec{v}_{c,n}\}_{n=1}^{N}$ and $V_p = \{\vec{v}_{p,n}\}_{n=1}^{N}$, respectively. Whenever we do not intend to use the prior scan as input, we set $V_p = \{\vec{0}\}_{n=1}^{N}$. We first align and concatenate each pair of anatomical tokens for the current and prior scans assigned to anatomical region $n$, i.e., $[\vec{v}_{c,n}, \vec{v}_{p,n}]$. We then pass the sequence of $N$ pairs into the Multi-Layer Perceptron (MLP), which consists of a stack of a Fully-Connected layer (FC1), a ReLU function and another Fully-Connected layer (FC2). The output of the MLP is added to its input via a residual connection, yielding:

$$\vec{v}_{joint,n} = \mathrm{MLP}([\vec{v}_{c,n}, \vec{v}_{p,n}]) + [\vec{v}_{c,n}, \vec{v}_{p,n}] \tag{2}$$

This formulation preserves the original input information while allowing the MLP to refine the representation. We refer to the output of the LPM as the current-prior joint representation $V_{joint} = \{\vec{v}_{joint,n}\}_{n=1}^{N}$.
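A minimal PyTorch sketch of the LPM in Eq. (2) follows; the hidden width of FC1/FC2 is not stated in the text, so the value below is an assumption.

```python
import torch
import torch.nn as nn

class LongitudinalProjectionModule(nn.Module):
    def __init__(self, d: int = 1024, hidden: int = 2048):  # `hidden` is assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d, hidden),  # FC1 over the concatenated token pair
            nn.ReLU(),
            nn.Linear(hidden, 2 * d),  # FC2 back to the pair dimension
        )

    def forward(self, v_curr: torch.Tensor, v_prior: torch.Tensor) -> torch.Tensor:
        # v_curr, v_prior: (N, d) anatomical tokens, aligned by region.
        pair = torch.cat([v_curr, v_prior], dim=-1)  # (N, 2d)
        return self.mlp(pair) + pair                 # residual connection, Eq. (2)
```

Note that FC2 must map back to the concatenated dimension so that the residual addition in Eq. (2) is well defined.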
2) Language Model: The LM consists of a vanilla encoder-decoder Transformer [37]. Both the encoder and the decoder are composed of 3 attention layers with 8 heads and 512 hidden units. The LM takes as input the current-prior joint representation $V_{joint}$ and the tokenised input text embeddings $T$, and generates the target text $Y$:

$$Y = \mathrm{LM}(V_{joint}, T). \tag{3}$$

The input visual embeddings $V_{joint}$ and text embeddings $T$ are concatenated and summed with the position embeddings, which establish the position of each token within the input sequence. A segment embedding enables the model to distinguish between the modality of each set of input tokens: vision vs. text. While the RG and AG models share the same architecture design, their input and output components are different, as described in the following sections.

C. Report Generator

The Report Generator (RG) is responsible for generating a report describing the target (current) CXR. Specifically, it generates the two main sections of the radiology report, the 'findings' and 'impression' sections, independently, using distinct prompts for each. The RG model consists of the LPM and the LM. The visual input comprises the current ($V_c$) and prior anatomical tokens ($V_p$), when available. The target text $Y$ is either the finding ($F$) or the impression ($I$) section of a radiology report, and the text input $T$ comprises not only the indication field ($\{ind\}$) but also an instruction specifying the section to be generated: (1) $Inst_f$ = "generate the finding section" or (2) $Inst_i$ = "generate the impression section". Following Eq. 1, the RG can be defined as a function $f_{RG}$:

$$F = f_{RG}\big(V = [V_c, V_p],\ T = \text{``[RG]\{ind\}[Q]}Inst_f\text{''}\big)$$
$$I = f_{RG}\big(V = [V_c, V_p],\ T = \text{``[RG]\{ind\}[Q]}Inst_i\text{''}\big) \tag{4}$$

where [RG] corresponds to a special token indicating the report generation task, and [Q] is the special token placed in front of the instruction.

D. Answer Generator

The Answer Generator (AG) is responsible for performing VQA. Similar to the RG, it consists of the LPM and the LM. Following [4], we initialise the AG model using the RG weights, i.e., the RG task effectively acts as pre-training. The AG model can be defined as a function $f_{AG}(V, T)$, whose input visual features ($V$) and input text ($T$) vary depending on the question type.

When the question asks about the difference in appearance between two scans, the AG takes as input two sets of anatomical tokens (current and prior). Otherwise, the AG takes only one set of anatomical tokens (considered to be current) and sets $V_p = \emptyset = \{\vec{0}\}_{n=1}^{N}$. This results in the visual component being:

$$V = \begin{cases} (V_c, V_p), & \text{if diff-VQA}; \\ (V_c, \emptyset), & \text{otherwise}. \end{cases} \tag{5}$$

Additionally, the input text $T$ varies depending on whether the question $\{q\}$ is open-ended or closed (multiple choice):

$$T = \begin{cases} \text{``[OEVQA]\{rr\}[Q]\{q\}''}, & \text{if } \{q\} \text{ is open-ended}, \\ \text{``[MCVQA]\{rr\}[Q]\{q\}[MC]\{a_1\}\ldots[MC]\{a_M\}''}, & \text{if } \{q\} \text{ is multiple choice}. \end{cases} \tag{6}$$

where $\{rr\}$ refers to the predicted radiology report for the given scan, obtained by concatenating the findings ($F$) and impression ($I$) sections from Eq. 4. We define [OEVQA] and [MCVQA] as special tokens, used to specify the task as open-ended VQA or multiple-choice VQA to the model. [MC] is a special token placed before each possible answer $\{a_j\}$, from which the model has to pick.
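The prompt formats of Eqs. (4) and (6) can be assembled with simple string templates. The special tokens ([RG], [Q], [OEVQA], [MCVQA], [MC]) and the two instructions come from the paper; the helper functions themselves are hypothetical.

```python
def rg_prompt(indication: str, section: str) -> str:
    """Text input of the Report Generator for one report section, Eq. (4)."""
    inst = {
        "finding": "generate the finding section",
        "impression": "generate the impression section",
    }[section]
    return f"[RG]{indication}[Q]{inst}"

def ag_prompt(question: str, report: str, choices=None) -> str:
    """Text input of the Answer Generator, Eq. (6);
    `report` is the predicted findings + impression text."""
    if choices is None:  # open-ended VQA
        return f"[OEVQA]{report}[Q]{question}"
    mc = "".join(f"[MC]{a}" for a in choices)  # multiple-choice VQA
    return f"[MCVQA]{report}[Q]{question}{mc}"

# e.g. ag_prompt("is there evidence of pleural effusion?", predicted_report,
#                choices=["yes", "no"])
```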
IV. EXPERIMENTAL SETUP

A. Datasets

We conduct our experiments on the publicly available Medical-Diff-VQA dataset [12, 13]. The question-answer pairs in this dataset are derived from the free-text radiology reports of MIMIC-CXR [15, 16, 9]. The dataset construction followed three main steps: (1) The authors collected keywords by applying ScispaCy [24] to extract entities from the reports, which were manually inspected to ensure quality. (2) They then constructed an intermediate KeyInfo dataset through an Extract-Check-Fix cycle, using regular expressions to identify abnormality keywords, manual and automated checks to ensure accuracy, and refining the extraction process until minimal errors were detected, resulting in a dataset that included study details, positive and negative findings, and their attributes. (3) The question-answer pairs were generated based on the entities and attributes from the KeyInfo dataset, and then categorised into seven types:

1) abnormality ("What abnormalities are seen in the <location>?")
2) location ("Where in the image is the <abnormality> located?")
3) type ("What type is the <abnormality>?")
4) level ("What level is the <abnormality>?")
5) view ("Which view is this image taken?")
6) presence ("Is there any evidence of <abnormality>?")
7) difference ("What has changed in the <location> area?")

The difference questions, which we refer to as diff-VQA, ask about differences in appearance between the current and a prior scan. In accordance with previous works using this dataset [13, 4], we classify closed questions as those with answers limited to "yes" or "no", treating them as multiple choice as detailed in Section III-D. The remaining questions are considered open-ended, with free-form answers.

The Medical-Diff-VQA dataset contains a total of 700,703 question-answer (QA) pairs related to 109,923 pairs of current and prior CXRs. We use the official split; the number of QA pairs and CXR pairs for each data split is presented in Table I. The dataset is divided into training, validation, and testing sets in an 8:1:1 ratio at the study level, ensuring that studies from the same patient appear in only one split to prevent data contamination. To ensure the availability of a second image for differential comparison, only patients from the MIMIC-CXR dataset with more than one prior radiology visit were included. We use the MIMIC-CXR dataset to train the RG model and the Chest ImaGenome annotation [43] to extract anatomical tokens, following the data split indicated in Medical-Diff-VQA at all stages.

TABLE I: Number of QA pairs and CXR pairs for each data split (training/validation/test) in the Medical-Diff-VQA dataset [12, 13].

Split      | QA pairs | CXR pairs
Training   | 560,563  | 88,098
Validation | 70,070   | 10,864
Test       | 70,070   | 10,963

B. Implementation Details

The Report Generator is initialised with random weights and is trained end-to-end for 100 epochs using a cross-entropy loss and the Adam optimiser [18]. We set the initial learning rate to $1 \times 10^{-4}$ and reduce it every 10 epochs by a factor of 0.8. The RG is trained to predict both the finding and impression sections, as detailed in Section III-C. The best-performing model is selected based on the highest BLEU-4 score computed across both the finding and impression sections of the validation set.

The Answer Generator is initialised using the RG weights and is fine-tuned for 100 epochs using the same loss, optimiser and learning rate as the RG. We select the best model based on the highest BLEU-4 score computed across all questions of the validation set. Each experiment is repeated three times using different random seeds, and we report the average in our results.
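The optimisation schedule described above maps directly onto a standard PyTorch setup; `model` below is a stand-in for the VLM, and the training loop body is elided.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the LPM + Transformer LM
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial LR 1e-4
# Multiply the learning rate by 0.8 every 10 epochs, as described above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.8)

for epoch in range(100):
    # ... one pass over the training set with a cross-entropy loss goes here ...
    scheduler.step()
```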
C. Metrics

We adopt different metrics based on the type of question, in line with previous studies [13, 4]. For "difference" type questions, we report natural language generation metrics, including BLEU [27], METEOR [1], ROUGE [21], and CIDEr [38]. We calculate exact-match accuracy for the other types of questions, differentiating between open-ended and closed (yes/no) questions.

D. Baselines

We evaluate our method against existing approaches designed for comparing multiple input images.

For the diff-VQA task, we compare with two methods originally developed for general image-difference captioning: MCCFormers [28] and IDCPCL [44]. Additionally, we benchmark against EKAID [13] and PLURAL [4], which were previously assessed on the Medical-Diff-VQA dataset.

For all other question types, we compare with state-of-the-art medical VQA methods also evaluated on Medical-Diff-VQA, including MMQ [8], EKAID, and PLURAL.
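A sketch of this evaluation routing follows; the per-sample schema is hypothetical, and the NLG scorers (BLEU, METEOR, ROUGE, CIDEr) are left abstract since implementations vary across packages.

```python
def exact_match_accuracy(preds, golds):
    """Case- and whitespace-normalised exact match."""
    if not golds:
        return 0.0
    norm = lambda s: " ".join(s.lower().split())
    return sum(norm(p) == norm(g) for p, g in zip(preds, golds)) / len(golds)

def evaluate(samples):
    # Each sample is assumed to carry: "qtype", "pred", "answer".
    closed = [s for s in samples
              if s["qtype"] != "difference" and s["answer"] in ("yes", "no")]
    open_ended = [s for s in samples
                  if s["qtype"] != "difference" and s["answer"] not in ("yes", "no")]
    return {
        "closed_acc": exact_match_accuracy([s["pred"] for s in closed],
                                           [s["answer"] for s in closed]),
        "open_acc": exact_match_accuracy([s["pred"] for s in open_ended],
                                         [s["answer"] for s in open_ended]),
        # "difference" questions would be scored with BLEU/METEOR/ROUGE-L/CIDEr here.
    }
```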
V. RESULTS

A. VQA Results: Difference & Non-Difference

We present the diff-VQA results in Table II and the non-diff-VQA results in Table III, comparing our method with and without the report generation step (RG-AG and AG, respectively) as well as with other state-of-the-art approaches.

Our RG-AG method achieves state-of-the-art performance, demonstrating superior results on all NLG metrics for difference type questions and improved overall accuracy on the remaining questions. These results suggest that using the model pre-trained on report generation not only to initialise the VQA model (as done in the PLURAL [4] and AG methods) but also to predict the reports and use them to ground the answer generation step helps generate more precise answers, especially for difference type questions.

For other question types, the RG-AG model consistently outperforms AG across all metrics. Furthermore, compared to PLURAL, our RG-AG method achieves higher accuracy for open-ended questions but lower accuracy for closed-ended questions (yes/no). This suggests that the intermediate report generation step has a more pronounced impact on open-ended questions, while its influence on closed-ended questions is comparatively limited.

TABLE II: Comparison between our proposed approach, both with the report generation step (RG-AG) and without it (AG), and previous methods on the difference questions of the Medical-Diff-VQA dataset [13]. All the results of the comparison methods are taken from [4].

Model             | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L | CIDEr
MCCFormers [28]   | 0.214  | 0.190  | 0.170  | 0.153  | 0.319  | 0.340   | 0
IDCPCL [44]       | 0.614  | 0.541  | 0.474  | 0.414  | 0.303  | 0.582   | 0.703
EKAID [13]        | 0.628  | 0.553  | 0.491  | 0.434  | 0.339  | 0.577   | 1.027
PLURAL [4]        | 0.704  | 0.633  | 0.575  | 0.520  | 0.381  | 0.653   | 1.832
AG (w/o report)   | 0.678  | 0.619  | 0.569  | 0.525  | 0.372  | 0.659   | 2.102
RG-AG (w/ report) | 0.711  | 0.650  | 0.600  | 0.551  | 0.384  | 0.668   | 2.198

TABLE III: Comparison between our proposed approach, both with the report generation step (RG-AG) and without it (AG), and previous methods on all but the difference questions of the Medical-Diff-VQA dataset [13]. We compute exact-match accuracy on the open-ended and the closed-ended (yes/no) questions. All the results of the comparison methods are taken from [4].

Model             | Open Question | Closed Question | All Questions
MMQ [8]           | 0.115         | 0.108           | 0.115
EKAID [13]        | 0.264         | 0.799           | 0.525
PLURAL [4]        | 0.512         | 0.873           | 0.688
AG (w/o report)   | 0.509         | 0.865           | 0.683
RG-AG (w/ report) | 0.523         | 0.871           | 0.693

B. Ablation Study

In this ablation study, we explore the impact of the various input components provided to the AG model. The quantitative results, detailed in Table IV, illustrate the positive effect of grounding the AG model with relevant sections of radiology reports. Breaking down the components individually, we observe that the Finding section (F) has the most substantial impact on the model's performance when compared to the Impression section (I). This outcome is as expected, as the Finding section typically contains more granular and detailed information about the CXR, offering richer data for the model to process. However, the study also reveals that when both sections are provided in conjunction (F + I), the VQA results improve beyond what is achieved with either section alone. This synergy suggests that the combined information from both sections provides a more comprehensive context to the AG model, enhancing its predictions.

TABLE IV: Ablation results. We test various visual inputs to the Answer Generator (AG) model: the current scan only (C), both current and prior scans (C + P), and no scan (-). Additionally, we test different textual inputs provided alongside the question: the findings section (F), the impression section (I), both sections combined (F + I), and no additional input text (-). The final row presents results when the AG model is given the ground-truth findings and impression sections; this serves as an upper bound for performance, excluded from direct comparison.

Visual | Text                   | BLEU-1 | BLEU-4 | METEOR | ROUGE-L | CIDEr | Acc
C      | -                      | 0.686  | 0.520  | 0.373  | 0.634   | 1.854 | 0.668
C + P  | -                      | 0.678  | 0.525  | 0.372  | 0.659   | 2.102 | 0.683
C + P  | I                      | 0.679  | 0.523  | 0.369  | 0.654   | 2.111 | 0.690
C + P  | F                      | 0.690  | 0.533  | 0.374  | 0.655   | 2.117 | 0.691
-      | F + I                  | 0.633  | 0.479  | 0.344  | 0.595   | 1.734 | 0.630
C + P  | F + I                  | 0.711  | 0.551  | 0.385  | 0.668   | 2.198 | 0.693
C + P  | F + I (Ground Truth)   | 0.723  | 0.570  | 0.398  | 0.685   | 2.484 | 0.751

Additionally, we examined the scenario where the AG model is provided exclusively with the textual information from the report, omitting any visual input from the CXR, to determine whether the generated report alone suffices for accurate answer prediction. The results, however, indicate a noticeable decrement in performance across all metrics, underscoring that the textual reports alone are insufficient. This can be attributed to several factors: predicted reports may contain inaccuracies, or may not contain the answer. These issues highlight that the visual data from the CXR is still necessary for the AG model to generate accurate and reliable answers.

To investigate the impact of inaccuracies, we repeated the experiment using the original expert radiology reports from the MIMIC-CXR dataset in place of the predicted reports. As demonstrated in Table IV, providing the AG model with high-quality curated reports led to improvements across all metrics. This finding underscores the pivotal role of report quality in VQA.

In Figure 3, we present the accuracy across different question types, excluding difference questions. Our proposed RG-AG model is compared with the baseline AG model, which does not incorporate the predicted CXR radiology report as input. The results indicate that not all question types are affected equally when we provide the model with the predicted reports as additional context. Notably, questions related to location, type, and level show the greatest improvement from this additional input.

Fig. 3. Accuracy of our proposed RG-AG model compared with the baseline AG model (which does not include the predicted CXR radiology report as input) for each question type, except for the difference questions. The difference in accuracy (∆) is highlighted for each question type.
C. Qualitative Results

Finally, we present qualitative results in Figures 4 and 5 to provide a more comprehensive understanding of the model's behaviour. Figure 4 compares the outputs of the baseline AG model, trained without the predicted CXR radiology report, with those of our proposed RG-AG model. In these results, we highlight the segments in the predicted report that correctly contain the information needed to predict the correct answer. These results underscore the significance of radiology reports in providing additional evidence for VQA, further validating the importance of this input in enhancing model performance.

Fig. 4. Comparison of the quality of our predicted answers without the predicted CXR radiology report (AG model) and with it (our RG-AG model). For each question (Q), the correct parts of the answer (A) are highlighted in green and the errors in red. Similarly, in the predicted radiology reports (R), segments containing correct information relevant to the question are shown in green.

Figure 5 highlights several borderline and failure cases of our proposed RG-AG model. These examples illustrate: (rows 1-2) the model's robustness to inconsistencies within the predicted reports (i.e., different descriptions between the 'finding' and 'impression' sections); (row 3) error propagation from the radiology report to the predicted answer; (row 4) report inconsistency leading to an incorrect answer; (row 5) an uninformative report resulting in an incorrect answer; and (row 6) inconsistency between the predicted report and the predicted answer.

Fig. 5. Borderline and failure cases of our RG-AG model, with explanatory comments in the rightmost column describing the associated errors. For each question (Q), the correct parts of the answer (A) are highlighted in green and the errors in red. Similarly, in the predicted radiology reports (R), segments containing correct information relevant to the question are shown in green, and segments inconsistent with the ground-truth answer are shown in red.

VI. LIMITATIONS

The Medical-Diff-VQA dataset has limitations related to the origin and scope of its question-answer pairs. Pairs were derived semi-automatically from the ground-truth radiology reports; therefore, the style and content of the information contained within these reports are highly relevant for answering the questions. However, other types of questions (those not explicitly addressed in the reports) might limit or negate the benefits of grounding the VQA system with the predicted reports.
Further, the questions are limited in scope: there are a total of only 18 unique question templates into which relevant keywords are inserted from data-mined lists of 29 disease keywords and approximately 100 keywords describing spatial distribution. This means that more nuanced variation is lacking; for instance, for the 'difference' question type, rather than asking "What has changed in the <location> area?", a clinician might be interested to know "Has the effusion resolved as expected?" or "Is the rate of change of the nodule concerning for a malignant lesion?". These questions would require more sophisticated parsing and answer expressivity from any VQA system. Additional evaluation on more varied questions would complement the evaluation presented in this paper.

In terms of methodology, our two-stage RG-AG approach is prone to error propagation between the first and second stages of the pipeline, i.e., the predicted reports provided to the AG model might contain errors which can lead to the generation of wrong answers. Furthermore, we generated the findings and impression sections of the report separately, following the methodology proposed in [4]. However, this approach can result in inconsistencies between the two sections, potentially leading to incorrect answer predictions (e.g., discrepancies in the laterality or severity of a finding described in each section). A more effective strategy might involve generating the impression section based on the predicted findings, as is done in the task of summarising radiology reports [7].

VII. CONCLUSION

We have explored the use of radiology reports as additional context to ground the answer generation task of a CXR VQA system. Our RG-AG method shows state-of-the-art results on the Medical-Diff-VQA dataset, with the most notable improvements observed on the difference types of questions compared to prior methods.

We have further investigated the role of the radiology report for VQA in the ablation study, which shows how our RG-AG method achieves the highest overall performance when combining the Finding and Impression sections predicted from a CXR. Furthermore, our results highlight how the quality of the radiology report plays an important role (shown by using the original reports written by expert radiologists) and how providing the AG with visual clues from the CXR is still necessary.

In this paper, we have only addressed whether using the predicted radiology reports can enhance VQA performance. However, other types of clinical information related to a patient could be provided as evidence to the VQA model to improve its answer generation capability. This strongly depends on the type of questions we want our VQA model to be able to respond to. Moreover, we have investigated the problem of grounding the AG with radiology reports, adopting a two-step approach, following the approach proposed in [47] for multimodal CoT reasoning. We hypothesise that using a two-stage approach, with each stage implemented using a different model, may only be necessary for smaller models like ours. Larger and more capable models may be able to perform all tasks using a single model, as shown in text-only question answering [41, 48].
We leave these two directions as open questions for future work in this space.

REFERENCES

[1] Satanjeev Banerjee and Alon Lavie. "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments". In: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. 2005, pp. 65–72.
[2] Daniel J Cao, Casey Hurrell, and Michael N Patlas. "Current status of burnout in Canadian radiology". In: Canadian Association of Radiologists Journal 74.1 (2023), pp. 37–43.
[3] Care Quality Commission. "A national review of radiology reporting within the NHS in England". In: (2018), pp. 1–26.
[4] Yeongjae Cho, Taehee Kim, Heejun Shin, Sungzoon Cho, and Dongmyung Shin. "Pretraining Vision-Language Model for Difference Visual Question Answering in Longitudinal Chest X-rays". In: Medical Imaging with Deep Learning. 2024.
[5] Francesco Dalla Serra, Chaoyang Wang, Fani Deligianni, Jeff Dalton, and Alison O'Neil. "Controllable Chest X-Ray Report Generation from Longitudinal Representations". In: Findings of the Association for Computational Linguistics: EMNLP 2023. Ed. by Houda Bouamor, Juan Pino, and Kalika Bali. Singapore: Association for Computational Linguistics, Dec. 2023, pp. 4891–4904.
[6] Francesco Dalla Serra, Chaoyang Wang, Fani Deligianni, Jeffrey Dalton, and Alison Q. O'Neil. "Finding-Aware Anatomical Tokens for Chest X-Ray Automated Reporting". In: Machine Learning in Medical Imaging. Ed. by Xiaohuan Cao, Xuanang Xu, Islem Rekik, Zhiming Cui, and Xi Ouyang. Cham: Springer Nature Switzerland, 2024, pp. 413–423.
[7] Jean-Benoit Delbrouck, Maya Varma, Pierre Chambon, and Curtis Langlotz. "Overview of the RadSum23 Shared Task on Multi-modal and Multi-anatomical Radiology Report Summarization". In: The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks. Ed. by Dina Demner-Fushman, Sophia Ananiadou, and Kevin Cohen. Toronto, Canada: Association for Computational Linguistics, July 2023, pp. 478–482.
[8] Tuong Do, Binh X Nguyen, Erman Tjiputra, Minh Tran, Quang D Tran, and Anh Nguyen. "Multiple meta-model quantifying for medical visual question answering". In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part V. Springer. 2021, pp. 64–74.
[9] Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley. "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals". In: Circulation 101.23 (2000), e215–e220.
[10] Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, and Pengtao Xie. PathVQA: 30000+ Questions for Medical Visual Question Answering. 2020.
[11] Xinyue Hu et al. "Medical-Diff-VQA: A Large-Scale Medical Dataset for Difference Visual Question Answering on Chest X-Ray Images". In: ().
[12] Xinyue Hu, L Gu, Q An, M Zhang, L Liu, K Kobayashi, T Harada, R Summers, and Y Zhu. Medical-Diff-VQA: A Large-Scale Medical Dataset for Difference Visual Question Answering on Chest X-Ray Images. 2023.
[13] Xinyue Hu, Lin Gu, Qiyuan An, Mengliang Zhang, Liangchen Liu, Kazuma Kobayashi, Tatsuya Harada, Ronald M Summers, and Yingying Zhu. "Expert knowledge-aware image difference graph representation learning for difference-aware medical visual question answering". In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023, pp. 4156–4165.
[14] Yefan Huang, Xiaoli Wang, Feiyan Liu, and Guofeng Huang. "OVQA: A clinically generated visual question answering dataset". In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2022, pp. 2924–2938.
[15] Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. "MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports". In: Scientific Data 6.1 (2019).
[16] Alistair EW Johnson, Tom J Pollard, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Yifan Peng, Zhiyong Lu, Roger G Mark, Seth J Berkowitz, and Steven Horng. "MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs". In: arXiv preprint arXiv:1901.07042 (2019).
[17] Yash Khare, Viraj Bagal, Minesh Mathew, Adithi Devi, U Deva Priyakumar, and CV Jawahar. MMBERT: Multimodal BERT Pretraining for Improved Medical VQA. 2021.
[18] Diederik P Kingma and Jimmy Ba. "Adam: A method for stochastic optimization". In: arXiv preprint arXiv:1412.6980 (2014).
[19] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. "Large language models are zero-shot reasoners". In: Advances in Neural Information Processing Systems 35 (2022), pp. 22199–22213.
[20] Jason J. Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. "A dataset of clinically generated visual questions and answers about radiology images". In: Scientific Data 5.1 (Nov. 2018).
[21] Chin-Yew Lin. "ROUGE: A package for automatic evaluation of summaries". In: Text Summarization Branches Out. 2004, pp. 74–81.
[22] Zhihong Lin, Donghao Zhang, Qingyi Tao, Danli Shi, Gholamreza Haffari, Qi Wu, Mingguang He, and Zongyuan Ge. "Medical visual question answering: A survey". In: Artificial Intelligence in Medicine 143 (2023), p. 102611.
[23] Bo Liu, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu. SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical Visual Question Answering. 2021.
[24] Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. "ScispaCy: Fast and Robust Models for Biomedical Natural Language Processing". In: BioNLP Workshop and Shared Task. 2019, pp. 319–327.
[25] Binh D Nguyen, Thanh-Toan Do, Binh X Nguyen, Tuong Do, Erman Tjiputra, and Quang D Tran. "Overcoming data limitation in medical visual question answering". In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part IV. Springer. 2019, pp. 522–530.
[26] OpenAI. "GPT-4 Technical Report". In: arXiv abs/2303.08774 (2023).
[27] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. "BLEU: a method for automatic evaluation of machine translation". In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 2002, pp. 311–318.
[28] Yue Qiu, Shintaro Yamamoto, Kodai Nakashima, Ryota Suzuki, Kenji Iwata, Hirokatsu Kataoka, and Yutaka Satoh. "Describing and Localizing Multiple Changes With Transformers". In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Oct. 2021, pp. 1971–1980.
[29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. "Language models are unsupervised multitask learners". In: OpenAI blog 1.8 (2019), p. 9.
[30] F. Ren and Y. Zhou. "CGMVQA: A New Classification and Generative Model for Medical Visual Question Answering". In: IEEE Access 8 (2020), pp. 50626–50636.
[31] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards real-time object detection with region proposal networks". In: Advances in Neural Information Processing Systems 28 (2015).
[32] Abi Rimmer. "Radiologist shortage leaves patient care at risk, warns royal college". In: BMJ: British Medical Journal (Online) 359 (2017).
[33] Dhruv Sharma, Sanjay Purushotham, and Chandan K Reddy. "MedFuseNet: An attention-based multimodal deep learning model for visual question answering in the medical domain". In: Scientific Reports 11.1 (2021), p. 19826.
[34] Rebecca Smith-Bindman, Diana L Miglioretti, and Eric B Larson. "Rising use of diagnostic medical imaging in a large integrated health system". In: Health Affairs 27.6 (2008), pp. 1491–1502.
[35] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. "LLaMA: Open and Efficient Foundation Language Models". In: arXiv preprint arXiv:2302.13971 (2023).
[36] Tom Van Sonsbeek, Mohammad Mahdi Derakhshani, Ivona Najdenkoska, Cees GM Snoek, and Marcel Worring. "Open-ended medical visual question answering through prefix tuning of language models". In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2023, pp. 726–736.
[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need". In: Advances in Neural Information Processing Systems 30 (2017).
[38] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. "CIDEr: Consensus-based image description evaluation". In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, pp. 4566–4575.
[39] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. "OFA: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework". In: International Conference on Machine Learning. PMLR. 2022, pp. 23318–23340.
[40] Sheng Wang, Zihao Zhao, Xi Ouyang, Qian Wang, and Dinggang Shen. "ChatCAD: Interactive computer-aided diagnosis on medical image using large language models". In: arXiv preprint arXiv:2302.07257 (2023).
[41] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. "Self-Consistency Improves Chain of Thought Reasoning in Language Models". In: The Eleventh International Conference on Learning Representations. 2023.
[42] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. "Chain-of-thought prompting elicits reasoning in large language models". In: Advances in Neural Information Processing Systems 35 (2022), pp. 24824–24837.
[43] Joy T Wu, Nkechinyere Nneka Agu, Ismini Lourentzou, Arjun Sharma, Joseph Alexander Paguio, Jasper Seth Yao, Edward Christopher Dee, William G Mitchell, Satyananda Kashyap, Andrea Giovannini, et al. "Chest ImaGenome Dataset for Clinical Reasoning". In: NeurIPS: Datasets and Benchmarks Track (Round 2). 2021.
[44] Linli Yao, Weiying Wang, and Qin Jin. "Image difference captioning with pre-training and contrastive learning". In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. 3. 2022, pp. 3108–3116.
[45] Feiyang Yu, Alex Moehring, Oishi Banerjee, Tobias Salz, Nikhil Agarwal, and Pranav Rajpurkar. "Heterogeneity and predictors of the effects of AI assistance on radiologists". In: Nature Medicine 30.3 (2024), pp. 837–849.
[46] Xiaoman Zhang, Chaoyi Wu, Ziheng Zhao, Weixiong Lin, Ya Zhang, Yanfeng Wang, and Weidi Xie. "PMC-VQA: Visual instruction tuning for medical visual question answering". In: arXiv preprint arXiv:2305.10415 (2023).
[47] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. "Multimodal Chain-of-Thought Reasoning in Language Models". In: Transactions on Machine Learning Research (2024).
[48] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models". In: The Eleventh International Conference on Learning Representations. 2023.
MiLQ: Benchmarking IR Models for Bilingual Web Search with Mixed Language Queries

Jonghwi Kim1, Deokhyung Kang1, Seonjeong Hwang1, Yunsu Kim3, Jungseul Ok1,2, Gary Geunbae Lee1,2
1Graduate School of Artificial Intelligence, POSTECH, Republic of Korea
2Department of Computer Science and Engineering, POSTECH, Republic of Korea
3aiXplain Inc., Los Gatos, CA, USA
{jonghwi.kim, deokhk, seonjeongh, jungseul.ok, gblee}@postech.ac.kr, yunsu.kim@aixplain.com

Abstract

Despite bilingual speakers frequently using mixed-language queries in web searches, Information Retrieval (IR) research on them remains scarce. To address this, we introduce MiLQ, a Mixed-Language Query test set and the first public benchmark of mixed-language queries, confirmed as realistic and highly preferred. Experiments show that multilingual IR models perform moderately on MiLQ and inconsistently across native, English, and mixed-language queries, also suggesting code-switched training data's potential for robust IR models handling such queries. Meanwhile, intentional English mixing in queries proves an effective strategy for bilinguals searching English documents, which our analysis attributes to enhanced token matching compared to native queries.

1 Introduction

Code-switching (in this study, code-switching, mixed-language, and code-mixing are used synonymously), where bilingual speakers alternate languages within a context, is a prevalent linguistic behavior in multilingual communities (Auer, 1999; Gardner-Chloros, 2009; Auer, 2013). This phenomenon extends to Human-Computer Interaction (HCI), especially via AI agents like ChatGPT (OpenAI, 2023), where understanding mixed-language input critically affects their perceived reliability by bilingual users (Bawa et al., 2020; Choi et al., 2023). Information Retrieval (IR) systems also face the challenge of effectively handling such mixed-language queries (Sitaram et al., 2019).

Meanwhile, recent IR research has expanded beyond Monolingual IR (MonoIR) settings to diverse multilingual settings. Several benchmarks (Asai et al., 2021; Lawrie et al., 2023b,a; Soboroff, 2023; Adeyemi et al., 2024; Litschko et al., 2025) are widely utilized, representing diverse language scenarios. However, research on mixed-language queries remains sparse and outdated (Fung et al., 1999; Gupta et al., 2014; Sequiera et al., 2015), with no publicly available benchmark.

Figure 1: Illustration of a bilingual user (fluent in German and English) freely using German, English, and mixed-language queries. German elements are in green, and English in orange. The three example queries are: German, "Was sind die Vorteile und Nachteile einer einheitlichen europäischen Währung?"; mixed-language, "Was sind die Advantages und Disadvantages einer single European Currency?"; and English, "What are the advantages and disadvantages of a single European currency?"

To address these gaps, we introduce MiLQ, the first Mixed-Language Query benchmark created by actual bilingual users (Figure 1). Using MiLQ, we explore three main research questions: (RQ1) How realistic are our mixed-language queries, and which query language do bilingual users prefer? (RQ2) How well do existing multilingual IR models perform in Mixed-language Query Information Retrieval (MQIR)? (RQ3) Is the behavior of intentionally mixing English terms into queries, noted in HCI studies (Fu, 2017, 2019), an effective strategy?
The main contributions of our work are:
• We introduce MiLQ, the first public benchmark of mixed-language queries, qualified as realistic and highly preferred by bilinguals.
• We provide a comprehensive performance analysis of multilingual IR models on MiLQ, establishing initial baselines for MQIR.
• We show intentionally mixed-language queries are effective for English document retrieval across diverse methods, providing token-level analysis of their rationale.

2 MiLQ: Mixed-Language Query test set

Data Construction We started with queries from two Cross-Language IR (CLIR) benchmarks: CLEF (Braschler, 2003) and NeuCLIR22 (Lawrie et al., 2023a), addressing native-to-English and English-to-native retrieval, respectively. These were selected to ensure diverse language scenarios while maintaining quality, based on three criteria: (1) availability of parallel English and native-language queries, (2) widespread use for performance comparison, and (3) budgetary feasibility. Both follow the TREC format (Voorhees, 2005), including short Title and longer Description queries, for which we created mixed-language versions.

Bilingual speakers, experienced in both languages and in mixed-language search, crafted natural mixed-language queries from original English and native query pairs, while preserving the original search intent. To reflect realistic code-switching patterns, we adopt Matrix Language Frame theory (Myers-Scotton, 1997) and follow prior studies (Fu, 2017, 2019; Yong et al., 2023; Winata et al., 2023) that describe common code-switching as featuring the native language as the grammar-governing matrix and English as the embedded language. Accordingly, annotators integrated English terms into the native language structure only when conceptually necessary and linguistically sound. Annotation guidelines are in Appendix A.1, and MiLQ samples are in Appendix A.2 (Figures 5, 6).

Table 1: Quality measurements for MiLQ (Title & Description queries). The Code-Mixing Index (CMI) is on a 0-100 scale, reported as original query CMI → mixed-language query (MiLQ) CMI. GPT-Eval (Accuracy [Acc.] & Fluency [Flu.]) and Human-Eval (Acc. & Flu. & Realism [Real.]) are both on a 1-3 scale. "XX" denotes the native language.

Title queries:
Scenario  | Lang | #Queries | CMI (XX→MiLQ) | GPT Acc. | GPT Flu. | Human Acc. | Human Flu. | Human Real.
Mixed→EN  | SW   | 151      | 8.4→38.6      | 2.35     | 2.39     | 2.83       | 2.65       | 2.66
Mixed→EN  | SO   | 151      | 16.2→59.6     | 2.38     | 2.34     | 2.73       | 2.58       | 2.95
Mixed→EN  | FI   | 151      | 7.3→40.2      | 2.48     | 2.52     | 2.79       | 2.70       | 2.59
Mixed→EN  | DE   | 151      | 9.1→61.8      | 2.67     | 2.68     | 2.61       | 2.50       | 2.21
Mixed→EN  | FR   | 151      | 5.7→35.0      | 2.52     | 2.55     | 2.84       | 2.51       | 2.31
Mixed→XX  | ZH   | 47       | 0.3→13.7      | 2.85     | 2.85     | 2.79       | 2.79       | 2.64
Mixed→XX  | FA   | 45       | 2.2→15.0      | 2.98     | 2.98     | 2.87       | 2.82       | 2.64
Mixed→XX  | RU   | 44       | 0.0→51.7      | 2.89     | 2.50     | 2.72       | 2.30       | 2.16
Average   |      | 111.4    | 6.2→39.5      | 2.64     | 2.60     | 2.78       | 2.59       | 2.57

Description queries:
Scenario  | Lang | CMI (XX→MiLQ) | GPT Acc. | GPT Flu. | Human Acc. | Human Flu. | Human Real.
Mixed→EN  | SW   | 5.6→30.7      | 2.83     | 2.44     | 2.83       | 2.62       | 2.61
Mixed→EN  | SO   | 5.4→36.1      | 2.76     | 2.34     | 2.63       | 2.51       | 2.77
Mixed→EN  | FI   | 2.2→45.3      | 2.63     | 2.15     | 2.63       | 2.44       | 2.28
Mixed→EN  | DE   | 2.1→41.1      | 2.55     | 2.11     | 2.43       | 2.15       | 1.80
Mixed→EN  | FR   | 2.3→32.9      | 2.80     | 2.30     | 2.84       | 2.51       | 2.31
Mixed→XX  | ZH   | 2.4→9.0       | 2.89     | 2.91     | 2.65       | 2.70       | 2.50
Mixed→XX  | FA   | 0.1→5.6       | 3.00     | 2.93     | 2.91       | 2.81       | 2.68
Mixed→XX  | RU   | 0.6→51.7      | 2.93     | 2.45     | 2.73       | 2.16       | 2.14
Average   |      | 5.0→31.6      | 2.80     | 2.45     | 2.76       | 2.45       | 2.46

Quality Measurement and Analysis We measured MiLQ's quality considering its language mixing, meaning preservation, naturalness, and realism (Table 1). First, for language mixing, we used the Code-Mixing Index (CMI) (Das and Gambäck, 2014) (0-100 scale, higher = more mixing; Appendix A.3).
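For reference, a sketch of the utterance-level CMI under one common formulation of Das and Gambäck (2014) is given below: CMI = 100 × (1 − max_i(w_i) / (n − u)), where w_i counts the tokens of language i, n is the total number of tokens, and u is the number of language-independent tokens; per-token language tags are assumed to be given.

```python
def cmi(lang_tags):
    """lang_tags: list like ["de", "de", "en", ...];
    "other" marks language-independent tokens."""
    n = len(lang_tags)
    u = sum(t == "other" for t in lang_tags)
    if n == u:                       # no language-tagged tokens -> no mixing
        return 0.0
    counts = {}
    for t in lang_tags:
        if t != "other":
            counts[t] = counts.get(t, 0) + 1
    return 100.0 * (1.0 - max(counts.values()) / (n - u))

# cmi(["de", "de", "en", "de"]) -> 25.0: one of four tagged tokens is embedded English.
```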
Average CMI increased from 6.2 to 39.5 (Title) and from 5.0 to 31.6 (Description), showing substantially more mixing than the originals. Next, GPT-Eval (GPT-4o), using Kuwanto et al.'s framework (which shows high human alignment, Kendall's Tau > 0.5), assessed MiLQ on a 1-3 scale (rubrics in Appendix A.4) for Accuracy (Acc.; meaning preservation, correct term use) and Fluency (Flu.; naturalness, readability, seamlessness). MiLQ achieved strong average GPT-Eval scores: Acc. 2.64 / Flu. 2.60 (Title) and Acc. 2.80 / Flu. 2.45 (Description). Lastly, for Human-Eval, three bilingual annotators per query assessed MiLQ on a 1-3 scale (detailed guidelines in Appendix A.5). This evaluation covered Accuracy (Acc.) and Fluency (Flu.), using criteria consistent with GPT-Eval, plus an additional Realism (Real.) criterion. Realism specifically assessed how naturally bilingual speakers might use the given mixed-language query in real search scenarios. Human evaluators rated MiLQ highly, with average scores of Acc. 2.78 / Flu. 2.59 / Real. 2.57 (Title) and Acc. 2.76 / Flu. 2.45 / Real. 2.46 (Description). These consistently high scores across all metrics affirm the quality and reliability of MiLQ.

Table 2: User preference for Native (Nat.), Mixed-language (Mix.), and English (Eng.) queries. Agr.(%) is the percentage of queries where a majority (2+ of 3) of annotators agreed on the preferred query type(s). Nat./Mix./Eng. values are average annotator votes (0-3) for each type.

       |            Title              |          Description
Lang   | Agr.(%) | Nat. | Mix. | Eng. | Agr.(%) | Nat. | Mix. | Eng.
SW     | 78.8    | 0.44 | 0.82 | 2.01 | 88.7    | 0.30 | 1.05 | 1.68
SO     | 98.0    | 0.73 | 2.26 | 0.01 | 100.0   | 0.19 | 2.76 | 0.05
FI     | 70.9    | 1.60 | 1.16 | 1.26 | 81.5    | 1.84 | 0.34 | 1.16
DE     | 53.0    | 1.11 | 0.46 | 1.38 | 77.5    | 0.79 | 0.38 | 1.85
FR     | 69.5    | 1.02 | 1.86 | 0.47 | 78.8    | 0.82 | 1.89 | 0.38
ZH     | 53.2    | 1.40 | 0.92 | 1.08 | 70.2    | 0.88 | 0.94 | 1.48
FA     | 60.0    | 0.78 | 2.00 | 0.37 | 66.7    | 0.73 | 2.00 | 0.47
RU     | 72.7    | 0.59 | 1.69 | 0.91 | 75.0    | 0.52 | 1.76 | 0.79
AVG.   | 69.3    | 0.87 | 1.43 | 0.89 | 79.6    | 0.60 | 1.54 | 0.96

To investigate user preferences for Native (Nat.), Mixed-language (Mix.), and English (Eng.) query formulations, we asked annotators to select their preferred formulation(s), allowing for multiple selections. For robust assessment, Table 2 presents results for queries in which a majority of annotators (2+ of 3) agreed on their preferred formulation. The scores for each formulation type (0-3) represent the average number of annotators who selected that type as preferred. Overall, Mix. received the highest average scores, with 1.43 for Title and 1.54 for Description queries, outperforming the Nat. and Eng. formulations. However, the degree of preference varied across languages. Notably, Somali (SO) exhibited the strongest preference for mixed-language queries (e.g., Title: 2.26, Description: 2.76).
To uncover the reasons for such variations, we conducted interviews with annotators in all languages. These discussions revealed that Somali speakers frequently code-switch, primarily using English to express modern concepts due to Somali's limited contemporary vocabulary, findings aligned with prior literature (Andrzejewski, 1979, 1978; Kapchits, 2019). Further interview insights, including common themes on mixed-language query usage, are provided in Appendix A.6.

In summary, this section addressed (RQ1), confirming that MiLQ is perceived as highly realistic and that bilingual users prefer mixed-language query formulations. Additional details of MiLQ are in Appendix A.7.

3 Experimental Setup

This section details our experimental setup, designed to evaluate various multilingual IR models on mixed-language queries using MiLQ.

Test Scenarios & Collections We evaluate three retrieval scenarios: MQIR (MiLQ) (Mixed→XX), MonoIR (XX→XX), and CLIR (XX→YY). Document collections include NeuCLIR22 (Lawrie et al., 2023a) and CLEF00-03 (Braschler, 2000, 2002a,b, 2003) (statistics in Appendix B.1). Following prior works (Huang et al., 2023; Yang et al., 2024), queries are concatenations of Title and Description, with MAP@100 and nDCG@20 serving as the primary metrics (detailed in Appendix B.2).

Retrieval Models To create retrieval models specialized for distinct language scenarios, we developed three ColBERT-based (Khattab and Zaharia, 2020) dense retrievers: Mono-Distill, Cross-Distill, and Mixed-Distill. Based on a multilingual pretrained language model, these models are trained via Knowledge Distillation (KD), adapting the Translate-Distill strategy (Yang et al., 2024), where English IR training data is translated into target languages. Thus, their specialization for each scenario arises solely from the training data used. Mono-Distill is trained for MonoIR (e.g., XX→XX or EN→EN) with monolingual query-document pairs (the original MSMARCO or a translated version). Cross-Distill is trained for CLIR (e.g., XX→EN or EN→XX) with cross-lingual query-document pairs derived from MSMARCO. Mixed-Distill is trained for MQIR (e.g., Mixed→EN or Mixed→XX) with artificially code-switched query-document pairs, generated via a bilingual lexicon without translation (a sketch of this augmentation is given after this section).

We also include the following baselines: mContriever (Izacard et al.) serves as a multilingual single-vector dense retriever pre-trained for broad language coverage. BM25 (Robertson et al., 2009) is a standard sparse lexical matching retriever. Translate-Test first translates queries into the document's language via Neural Machine Translation (NMT), then applies BM25 or Mono-Distill for retrieval. Detailed implementation specifics for all models are in Appendix B.3.
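The artificial code-switching used to build Mixed-Distill's training queries can be illustrated with a simple lexicon-substitution sketch; the lexicon, tokenisation, and substitution rate below are assumptions for illustration, not the authors' exact procedure.

```python
import random

def code_switch(query: str, lexicon: dict, swap_prob: float = 0.3) -> str:
    """Replace native words that have an English lexicon entry with
    probability `swap_prob`, keeping the native (matrix) structure."""
    out = []
    for tok in query.split():
        if tok.lower() in lexicon and random.random() < swap_prob:
            out.append(lexicon[tok.lower()])   # embed the English term
        else:
            out.append(tok)                    # keep the native word
    return " ".join(out)

# lexicon = {"eigentumsrechte": "property rights"}   # hypothetical entry
# code_switch("Intellektuelle Eigentumsrechte in der EU", lexicon)
```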
Figure 2: Performance of retrieval models across CLIR, MQIR (MiLQ), and MonoIR scenarios. Results are averaged by language group: low-resource (SW, SO; MAP@100) [left], high-resource (FI, DE, FR; MAP@100) [middle], and diverse document language (ZH, FA, RU; nDCG@20) [right]. Models include BM25, specialized multi-vector dense retrievers (Mono-, Mixed-, Cross-Distill), and mContriever. See Appendix B.4 for per-language details.

4 Results and Analysis

Main Results In response to (RQ2), MiLQ (MQIR in Figure 2) shows that multilingual IR models like Mono-Distill and Cross-Distill achieve moderate performance in MQIR, performing between their MonoIR and CLIR performance. This pattern, also observed with the lexical-based BM25, is attributable to MQIR's intermediate level of lexical cues compared to the MonoIR and CLIR settings.

Further observations underscore specialization's limitations. For instance, Mono-Distill (MonoIR-optimized) outperformed Cross-Distill (CLIR-optimized) in MonoIR settings, and vice versa. Additionally, mContriever consistently trails the specialized models. Notably, Mixed-Distill, trained with artificially code-switched text, shows well-balanced performance, often outperforming Cross-Distill in MonoIR and Mono-Distill in CLIR/MQIR. This highlights the potential benefits of using mixed-language queries in training for a robust bilingual IR system, a core challenge MiLQ addresses: developing a single robust IR model for bilingual users freely querying in native, English, or mixed language.
To better harness this potential of code-switched training data, explored in prior studies (Litschko et al., 2023; Liu et al., 2025), future work could explore advanced methods, like multilingual LLMs, beyond simple lexicon augmentation.

Regarding (RQ3), intentionally using mixed-language queries offers context-dependent benefits. While native queries are optimal for retrieving native content (MonoIR, XX→XX), mixed-language queries (MQIR, Mixed→EN) prove superior to native ones (CLIR, XX→EN) when bilinguals search English content, thus offering a clear strategic advantage. Notably, in low-resource MQIR for English document retrieval (Figure 2, left), BM25 outperforms neural models like mContriever and Mono-Distill. Consequently, for low-resource languages where neural models struggle with native queries, mixed-language queries with BM25 present a more effective IR system.

Table 3: Performance of BM25 and Mono-Distill before and after applying NMT. The metric is MAP@100 (%). The MonoIR value is shared between each base retriever and its NMT variant.

Method            | MonoIR | Low-Resource CLIR | Low-Resource MQIR | High-Resource CLIR | High-Resource MQIR
BM25              | 48.71  | 12.35             | 38.35             | 8.56               | 34.92
NMT→BM25          | –      | 41.07             | 48.10             | 46.01              | 47.08
Mono-Distill      | 57.93  | 9.53              | 36.34             | 41.32              | 52.26
NMT→Mono-Distill  | –      | 50.14             | 56.92             | 56.25              | 56.78

Effectiveness of Translate-Test Translate-Test, applying NMT at test time, is widely used in CLIR (Nair et al., 2022). We evaluated its effectiveness for English document retrieval (XX or Mixed→EN), projecting native and mixed-language queries into English. Table 3 shows that introducing NMT for both query types consistently improved performance, bringing them closer to the MonoIR scenario. Notably, NMT on mixed-language queries (Mixed→EN) surpassed NMT on native queries (XX→EN). This suggests that the English terms in mixed queries aid translation, making NMT on these intentionally mixed queries (relevant to RQ3) more effective. However, current research on Code-Switching Translation (Huzaifah et al., 2024) has been limited to specific language pairs, underscoring the need for tailored NMT models to better support MQIR.

Token-Level Analysis for MQIR The mechanism of a multi-vector retriever (e.g., ColBERT) involves identifying the most similar document tokens for each query token. While prior research (Wang et al., 2023; Liu et al., 2024) has explored this in MonoIR, its behavior in other language contexts remains unexplored. This token-level analysis offers a rationale for a key aspect of (RQ3): understanding why mixed-language queries can outperform native queries for English document retrieval.
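For clarity, ColBERT-style late interaction works as follows: each query token is matched to its maximum-similarity document token, and the per-token maxima are summed into the relevance score. A minimal NumPy sketch (embeddings are assumed to be L2-normalised, so dot products are cosine similarities):

```python
import numpy as np

def maxsim_score(Q: np.ndarray, D: np.ndarray):
    """Q: (num_query_tokens, dim), D: (num_doc_tokens, dim)."""
    sim = Q @ D.T                # token-level similarity matrix
    best = sim.argmax(axis=1)    # MaxSim partner of each query token
    score = sim.max(axis=1).sum()  # relevance score used for ranking
    return score, best           # `best` indexes the matched document tokens
```

The `best` indices correspond to the MaxSim pairings marked by × in Figure 3.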
Figure 3: Token-level similarity matrices from Cross-R for German and mixed-language queries on the ground-truth passage. The y-axis shows the tokenized queries (mixed-language left, native right), and the x-axis represents the tokenized English passage. MaxSim tokens are marked by ×, and the code-switched parts are highlighted.

Our analysis compared MaxSim token-pair similarity (a query token and its maximal-similarity document token) in mixed-language versus native queries. Figure 3 (left) shows that mixed-language queries, by including English terms (e.g., "Intellectual Property Rights" from the German "Intellektuelle Eigentumsrechte"), allow these English tokens to form MaxSim pairings (×) with accurate, higher similarity scores. Conversely, native queries (right) rely on cross-lingual interpretation of native tokens (e.g., German "Intellekt," "Eigen") to map English concepts. While MaxSim pairings are also identified (×), this mapping yields weaker similarity for such crucial English concepts. Thus, intentionally mixing English terms improves MaxSim matching through higher similarity scores for English terms, a key rationale (RQ3) for MQIR's enhanced English retrieval.

5 Conclusion

This study addressed the prevalent yet understudied practice of mixed-language querying by bilinguals by introducing MiLQ, the first public user-crafted MQIR benchmark, confirmed for realism and high user preference. Experiments on MiLQ revealed the inconsistent performance of current IR models on diverse queries, underscoring the need for robust models and the potential of code-switched training data. Finally, we find that intentional English mixing boosts English document retrieval for bilinguals.

6 Limitations

While MiLQ is a valuable first public MQIR benchmark, it shares limitations common to the broader multilingual IR field. A key challenge is the test set scale; unlike large monolingual English benchmarks (e.g., MS-MARCO (Bajaj et al., 2018) and NQ (Kwiatkowski et al., 2019), with thousands of queries), CLIR benchmarks typically comprise only tens to hundreds of queries (Asai et al., 2021; Lawrie et al., 2023b,a; Soboroff, 2023; Adeyemi et al., 2024). This is because creating numerous high-quality multilingual test sets is highly resource-intensive. Larger MQIR benchmarks would be beneficial, allowing for more robust methodological comparisons and fostering advancements in the field.

MiLQ currently focuses on English-native language pairs, excluding non-English/non-English combinations; future inclusion of these diverse pairings is desirable. Furthermore, while realistic, MiLQ's user-crafted queries may not capture all code-switching patterns, as these are shaped by individual cultural and linguistic experiences. Broader participant involvement could enrich future datasets with more diverse, authentic patterns. Budgetary constraints also limited MiLQ's initial language and domain scope, suggesting future expansions for wider utility.

These limitations and the need for larger test collections highlight promising future directions. Beyond creating larger MQIR benchmarks, key research avenues include expanding linguistic diversity (with non-English/non-English pairs), investigating broader code-switching patterns via more diverse annotators, and leveraging advanced techniques like multilingual LLMs to enhance MQIR.

Ethical Considerations

In our research, we use NeuCLIR22 (Lawrie et al., 2023a), which is licensed under CC BY-NC 4.0. The GPT-4o model (Hurst et al., 2024) is licensed under OpenAI (Apache-2.0). All models were used strictly for research purposes, and no artifacts were utilized beyond the scope of the study.

References

Mofetoluwa Adeyemi, Akintunde Oladipo, Xinyu Zhang, David Alfonso-Hermelo, Mehdi Rezagholizadeh, Boxing Chen, Abdul-Hakeem Omotayo, Idris Abdulmumin, Naome A Etori, Toyib Babatunde Musa, et al. 2024. CIRAL: A test collection for CLIR evaluations in African languages. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 293–302.

Bogumił W Andrzejewski. 1978. The development of a national orthography in Somalia and the modernization of the Somali language. Horn of Africa.

BW Andrzejewski. 1979. Language reform in Somalia and the modernization of the Somali vocabulary. Northeast African Studies, pages 59–71.
Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021. XOR QA: Cross-lingual open-retrieval question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 547–564, Online. Association for Computational Linguistics.

Peter Auer. 1999. From codeswitching via language mixing to fused lects: Toward a dynamic typology of bilingual speech. International Journal of Bilingualism, 3(4):309–332.

Peter Auer. 2013. Code-switching in conversation: Language, interaction and identity. Routledge.

Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A human generated machine reading comprehension dataset. Preprint, arXiv:1611.09268.

Anshul Bawa, Pranav Khadpe, Pratik Joshi, Kalika Bali, and Monojit Choudhury. 2020. Do multilingual users prefer chat-bots that code-mix? Let's nudge and find out! Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1):1–23.

Michael Bendersky and Oren Kurland. 2008. Utilizing passage-based language models for document retrieval. In Advances in Information Retrieval: 30th European Conference on IR Research, ECIR 2008, Glasgow, UK, March 30–April 3, 2008. Proceedings 30, pages 162–174. Springer.

Hamed Bonab, James Allan, and Ramesh Sitaraman. 2019. Simulating CLIR translation resource scarcity using high-resource languages. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pages 129–136.

Martin Braschler. 2000. CLEF 2000 – overview of results. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 89–101. Springer.

Martin Braschler. 2002a. CLEF 2001 – overview of results. In Evaluation of Cross-Language Information Retrieval Systems, pages 9–26, Berlin, Heidelberg. Springer Berlin Heidelberg.

Martin Braschler. 2002b. CLEF 2002 – overview of results. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 9–27. Springer.

Martin Braschler. 2003. CLEF 2003 – overview of results. In Workshop of the Cross-Language Evaluation Forum for European Languages, pages 44–63. Springer.

Yunjae J Choi, Minha Lee, and Sangsu Lee. 2023. Toward a multilingual conversational agent: Challenges and expectations of code-mixing multilingual users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–17.

Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 985–988.

Amitava Das and Björn Gambäck. 2014. Identifying languages at the word level in code-mixed Indian social media text. In Proceedings of the 11th International Conference on Natural Language Processing, pages 378–387.

Hengyi Fu. 2017. Query reformulation patterns of mixed language queries in different search intents. In Proceedings of the 2017 Conference on Conference Human Information Interaction and Retrieval, pages 249–252.

Hengyi Fu. 2019. Mixed language queries in online searches: A study of intra-sentential code-switching from a qualitative perspective. Aslib Journal of Information Management, 71(1):72–89.

Pascale Fung, Xiaohu Liu, and Chi-Shun Cheung. 1999. Mixed language query disambiguation. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 333–340.

Penelope Gardner-Chloros. 2009. Code-switching. Cambridge University Press.

Parth Gupta, Kalika Bali, Rafael E Banchs, Monojit Choudhury, and Paolo Rosso. 2014. Query expansion for mixed-script information retrieval. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 677–686.

Zhiqi Huang, Puxuan Yu, and James Allan. 2023. Improving cross-lingual information retrieval on low-resource languages via optimal transport distillation. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pages 1048–1056.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Muhammad Huzaifah, Weihua Zheng, Nattapol Chanpaisit, and Kui Wu. 2024. Evaluating code-switching translation with large language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6381–6394.

Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.

Georgi Kapchits. 2019. On the Somali temporal lexicon. Bildhaan: An International Journal of Somali Studies, 19(1):7.

Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39–48.

Garry Kuwanto, Chaitanya Agarwal, Genta Indra Winata, and Derry Tanti Wijaya. 2024. Linguistics theory meets LLM: Code-switched text generation via equivalence constrained large language models. arXiv preprint arXiv:2410.22660.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466.

Dawn Lawrie, Sean MacAvaney, James Mayfield, Paul McNamee, Douglas W Oard, Luca Soldaini, and Eugene Yang. 2023a. Overview of the TREC 2022 NeuCLIR track. arXiv preprint arXiv:2304.12367.

Dawn Lawrie, James Mayfield, Douglas W Oard, Eugene Yang, Suraj Nair, and Petra Galuščáková. 2023b. HC3: A suite of test collections for CLIR evaluation over informal text. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2880–2889.

Robert Litschko, Ekaterina Artemova, and Barbara Plank. 2023. Boosting zero-shot cross-lingual retrieval by training on artificially code-switched data. arXiv preprint arXiv:2305.05295.

Robert Litschko, Oliver Kraus, Verena Blaschke, and Barbara Plank. 2025. Cross-dialect information retrieval: Information access in low-resource and high-variance languages. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10158–10171.

Andrew Liu, Edward Xu, Crystina Zhang, and Jimmy Lin. 2025.
The impact of incidental multilingual text on cross-lingual transfer in monolingual retrieval. InEuropean Conference on Information Retrieval , pages 165–173. Springer. Qi Liu, Gang Guo, Jiaxin Mao, Zhicheng Dou, | https://arxiv.org/abs/2505.16631v1 |
Carol Myers-Scotton. 1997. Duelling Languages: Grammatical Structure in Codeswitching. Oxford University Press.
Suraj Nair, Eugene Yang, Dawn Lawrie, Kevin Duh, Paul McNamee, Kenton Murray, James Mayfield, and Douglas W Oard. 2022. Transfer learning approaches for building cross-language dense retrieval models. In European Conference on Information Retrieval, pages 382–396. Springer.
Shuyo Nakatani. 2010. Language detection library for Java.
OpenAI. 2023. ChatGPT. https://chat.openai.com.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389.
Royal Sequiera, Monojit Choudhury, Parth Gupta, Paolo Rosso, Shubham Kumar, Somnath Banerjee, Sudip Kumar Naskar, Sivaji Bandyopadhyay, Gokul Chittaranjan, Amitava Das, et al. 2015. Overview of FIRE-2015 shared task on mixed script information retrieval. In FIRE Workshops, volume 1587, pages 19–25.
Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, and Alan W Black. 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784.
Ian Soboroff. 2023. The BETTER cross-language datasets. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3047–3053.
Ellen M. Voorhees. 2005. TREC: Experiment and evaluation in information retrieval.
Xiao Wang, Craig Macdonald, Nicola Tonellotto, and Iadh Ounis. 2023. Reproducibility, replicability, and insights into dense multi-representation retrieval models: From ColBERT to Col*. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2552–2561.
Genta Winata, Alham Fikri Aji, Zheng Xin Yong, and Thamar Solorio. 2023. The decades progress on code-switching research in NLP: A systematic survey on trends and challenges. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2936–2978.
Eugene Yang, Dawn Lawrie, James Mayfield, Douglas W Oard, and Scott Miller. 2024. Translate-Distill: Learning cross-language dense retrieval by translation and distillation. In European Conference on Information Retrieval, pages 50–65. Springer.
Zheng Xin Yong, Ruochen Zhang, Jessica Forde, Skyler Wang, Arjun Subramonian, Holy Lovenia, Samuel Cahyawijaya, Genta Winata, Lintang Sutawika, Jan Christian Blaise Cruz, et al. 2023. Prompting multilingual large language models to generate code-mixed texts: The case of South East Asian languages. In Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching, pages 43–63.

A Data Annotation

A.1 Details of the Employment and Annotation

We recruited bilingual speakers through Upwork (https://www.upwork.com) who were fluent in both English and one of the following languages: Swahili (SW), Somali (SO), Finnish (FI), German (DE), French (FR), Chinese (ZH), Persian (FA), or Russian (RU). These annotators were selected based on their proficiency in both languages and their extensive experience in translation between English and their respective languages. We provided the annotators with clear guidelines, as shown in Figure 4. The payment was based on the number of queries: for SW, SO, FI, DE, and FR, each language totaled 302 queries (Title + Description) for $40. For ZH, FA, and RU, we created 94, 90, and 88 queries, respectively, at a cost of $20 per language.

Figure 4: Guideline for German-English mixed-language search query annotators.

A.2 Examples of Title and Description queries in MiLQ

This appendix illustrates Title and Description mixed-language queries (MiLQ) from our dataset, derived from native and English sources. The figures highlight code-switched segments and indicate their Code-Mixing Index (CMI), calculated by Equation (1).

Figure 5: Examples of Title queries from the MiLQ dataset, showing native, mixed-language, and English versions for SW, SO, FI, DE, and FR (CLEF00-03) and ZH, FA, and RU (NeuCLIR22). Code-switched segments are highlighted, and CMI values are shown in parentheses. (*Note: Although "Catastrophe" is also a French word, it was identified as English by the language model in this instance.)

Figure 6: Examples of Description queries from the MiLQ dataset, corresponding to the same query IDs as the Title examples shown in Figure 5. Code-switched segments are highlighted, and CMI values are indicated in parentheses.

A.3 Code-Mixing Index (CMI)

The formula for the Code-Mixing Index (CMI) (Das and Gambäck, 2014) is as follows:

$$\mathrm{CMI} = \begin{cases} 100 \times \left(1 - \dfrac{\max(w_i)}{n - u}\right) & \text{if } n > u \\ 0 & \text{if } n = u, \end{cases} \tag{1}$$

where $w_i$ is the word count in language $i$, $\max(w_i)$ is the word count in the primary language, $n$ is the total word count, and $u$ is the number of language-independent tokens (e.g., numbers, hashtags). In our analysis, we treat the primary language as the native language.

We used GPT-4o (Hurst et al., 2024) instead of existing tools for more precise language identification. While existing tools such as language-detection (Nakatani, 2010) and fastText (Joulin et al., 2016) have been widely used for language identification, we observed certain inconsistencies in their accuracy. Therefore, we leveraged LLMs for more accurate data analysis. First, we tokenize the text at the word level using NLTK (https://www.nltk.org/). For Chinese text, we apply Jieba (https://github.com/fxsjy/jieba), a specialized tokenizer optimized for Chinese word segmentation. After tokenization, we utilize GPT-4o to classify each token's language using the prompt template shown in Figure 7.

    Identify the language of each word in the given list using language codes (e.g., ISO 639-1, ISO 639-2, ISO 639-3). If a word is language-independent (e.g., punctuation, numbers, or symbols), assign it the code 'unknown'. Return a one-to-one mapping of each word with its corresponding language code, followed by a final list of language codes in order.
    Example:
    Words: ['I', 'möchte', 'ein', 'new', 'laptop', 'kaufen', ',', 'but', "it's", 'too', 'teuer', '.']
    One-to-One Matching:
    'I' → 'en'
    'möchte' → 'de'
    'ein' → 'de'
    'new' → 'en'
    'laptop' → 'en'
    'kaufen' → 'de'
    ',' → 'unknown'
    'but' → 'en'
    "it's" → 'en'
    'too' → 'en'
    'teuer' → 'de'
    '.' → 'unknown'
    Final List of Language Codes: ['en', 'de', 'de', 'en', 'en', 'de', 'unknown', 'en', 'en', 'en', 'de', 'unknown']
    Input:
    Words: {token_list}
    One-to-One Matching:

Figure 7: Prompt template for language identification.
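To make the computation concrete, the following is a minimal Python sketch of Equation (1), assuming the per-token language codes produced by the GPT-4o labeling step of Figure 7. The function name and its arguments are illustrative, not code from the paper.

```python
from collections import Counter

def code_mixing_index(token_langs, primary_lang=None):
    """Code-Mixing Index of Das and Gambäck (2014), Equation (1).

    `token_langs` holds one language code per token; language-independent
    tokens (punctuation, numbers, symbols) carry the code 'unknown'.
    Following the paper, `primary_lang` can be fixed to the native
    language; otherwise the majority language is used.
    """
    n = len(token_langs)                                     # total word count
    u = sum(1 for lang in token_langs if lang == "unknown")  # language-independent tokens
    if n == u:                                               # only unknown tokens remain
        return 0.0
    counts = Counter(lang for lang in token_langs if lang != "unknown")
    w = counts[primary_lang] if primary_lang else max(counts.values())
    return 100.0 * (1.0 - w / (n - u))

# The German-English example from Figure 7: 6 'en', 4 'de', 2 'unknown' tokens.
labels = ["en", "de", "de", "en", "en", "de", "unknown",
          "en", "en", "en", "de", "unknown"]
print(code_mixing_index(labels))                     # 40.0 (majority language 'en')
print(code_mixing_index(labels, primary_lang="de"))  # 60.0 (native language as primary)
```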
A.4 GPT-Evaluation Rubric

For GPT-based evaluation, we adopted the Accuracy and Fluency rubrics from Kuwanto et al. (2024), using their publicly available prompts and evaluation code framework (https://github.com/gkuwanto/ezswitch). While their assessments utilized GPT-4o-mini, our study employed the more powerful GPT-4o. The model was instructed to evaluate generated code-switched sentences against the original monolingual sentences on a 1 (lowest) to 3 (highest) scale for each criterion.

Accuracy This criterion measures how well the generated sentence preserves the meaning and information of the original sentence, and whether the code-switched terms are used correctly and appropriately.
Score 1 (Low): Significant deviation from the original meaning; key information missing, altered, or redundantly repeated. Code-switched terms incorrect or inappropriate. Introduces new information.
Score 2 (Moderate): Minor deviation from the original meaning; most key information present but possibly with slight errors. Most code-switched terms appropriate, with minor mistakes.
Score 3 (High): Fully preserves the original meaning; all key information present and correct. Code-switched terms accurate and appropriately used.

Fluency This criterion measures how natural and easy to understand the generated sentence is, considering grammar, syntax, and the smooth integration of code-switching.
Score 1 (Low): Sentence is difficult to understand or awkward; poor grammar/syntax in either language. Code-switching disrupts sentence flow.
Score 2 (Moderate): Sentence is understandable but may have awkward or unnatural phrasing; acceptable grammar/syntax. Code-switching somewhat smooth but not perfectly integrated.
Score 3 (High): Sentence is natural and easy to understand; good grammar/syntax in both languages. Code-switching is smooth and seamless, enhancing flow.
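For illustration, a rubric-based judgment call of this kind might look like the sketch below. The actual prompts come from the ezswitch repository cited above; the instruction text and the `score_pair` helper here are simplified stand-ins, using the OpenAI Python client.

```python
from openai import OpenAI

client = OpenAI()

# Simplified rubric; the published prompts in the ezswitch repo are more detailed.
RUBRIC = (
    "Rate the code-switched sentence against the original on two criteria, "
    "each from 1 (lowest) to 3 (highest):\n"
    "Accuracy: preservation of meaning and correct use of code-switched terms.\n"
    "Fluency: grammaticality and smooth integration of both languages.\n"
    "Answer as: Accuracy: <1-3>, Fluency: <1-3>."
)

def score_pair(original: str, code_switched: str) -> str:
    """Ask GPT-4o for Accuracy/Fluency scores of one sentence pair."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user",
             "content": f"Original: {original}\nCode-switched: {code_switched}"},
        ],
    )
    return resp.choices[0].message.content
```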
A.5 Human-Evaluation Guidelines and Rubrics

For the human evaluation of mixed-language queries (MiLQ), we again recruited bilingual speakers via Upwork. Eligibility required proficiency in English and one target language at least at the B2 CEFR level, plus prior translation or linguistic experience, ensuring high-quality judgments. Annotators received detailed instructions (see Figure 8) and evaluated MiLQ quality using three criteria: Accuracy, Fluency, and Realism, rated on a 1-3 scale. The payment scheme for this evaluation reflected task complexity and language availability: SO and SW annotators were compensated at $20 per annotator; FI, FR, and DE annotators at $30; and FA, ZH, and RU annotators at $15 each.

Accuracy and Fluency Rubrics The Accuracy and Fluency rubrics mirrored those used in the GPT evaluation (see Appendix A.4). Accuracy measures how well a MiLQ preserves the original query's meaning and appropriately integrates code-switched terms. Fluency assesses the naturalness and clarity of language mixing, ensuring smooth integration of both languages.

Realism This criterion, specific to human evaluation, assesses the likelihood that a bilingual speaker would naturally produce or use the given MiLQ in a real online search context.
Score 1 (Low): Query feels unnatural or forced; unlikely to be used in real search scenarios.
Score 2 (Moderate): Query could be used in real searches, but has noticeable awkwardness or unnatural elements.
Score 3 (High): Query feels natural and comfortable; would likely be used in real search situations.

Figure 8: An example of the detailed annotation guidelines provided to bilingual evaluators, in this case for Somali-English mixed-language search queries. Similar guidelines were adapted for other language pairs.

A.6 Insights from Annotator Interviews on Mixed-Language Query Usage

To gain deeper insights into why and when bilingual users employ mixed-language queries (MiLQ) in real-world online searches, we conducted semi-structured interviews with all annotators. A common question posed was: "In what situations are mixed-language queries commonly used in real-world online search contexts, and for what reasons?" Table 4 summarizes the key themes derived from their responses.

| Key Theme | Description: Why Users Mix Languages in Queries | Relevant Languages Mentioned |
|---|---|---|
| Lexical Gaps | Native language lacks suitable or clear terms for specific concepts (esp. modern/technical), or English equivalents offer greater precision, familiarity, or avoid awkward/ambiguous translations. | Swahili, Somali, French, Finnish, German, Chinese, Persian, Russian |
| Broader Information Access | English terms are used to retrieve a broader range or larger volume of online results, especially when native-language queries are perceived as too restrictive or yield insufficient/biased information. | Swahili, Somali, Finnish, German, Chinese, Persian |
| Querying Efficiency | English terms are preferred for faster query formulation, due to shorter terms, keyboard convenience, or greater cognitive accessibility (i.e., English terms come to mind more readily or are more familiar). | Swahili, Somali, French, Finnish, German, Chinese, Russian |
| Grammatical / Orthographic Simplification | English terms are selected to bypass complex native grammatical constructions (e.g., inflections, case agreement for foreign words) or challenges posed by keyboard layouts and non-Latin scripts. | Russian |
| Language Modernization | Reliance on established English terms arises from insufficient institutional efforts to standardize native terminology for contemporary concepts, especially in digital and tech domains. | Somali |

Table 4: Summary of Key Motivations for Mixed-Language Query Usage (Condensed to 5 Points) from Annotator Interviews.

In essence, these interviews highlight that bilinguals employ MiLQ for diverse, practical reasons. Key drivers include bridging lexical gaps or seeking terminological precision when native terms are inadequate, especially for modern or technical concepts. Users also mix languages to expand information access, retrieving broader or more diverse results than native-only queries might yield, or to overcome perceived biases. Querying efficiency and fluency are other significant factors, with English often offering faster input or more readily accessible terms. Furthermore, mixed-language queries can serve to simplify grammatical or orthographic complexities inherent in some native languages, or to address deficiencies in language modernization where native terminology for contemporary concepts is lacking.

It is important to note that the specific motivations and patterns of mixed-language query usage are often highly speaker- and context-dependent, influenced by individual linguistic backgrounds, cognitive habits, the nature of the information need, and even momentary contextual factors. Understanding these varied drivers is crucial for developing IR systems that can effectively cater to the nuanced and dynamic search behaviors of bilingual users worldwide.

A.7 Part-of-Speech Distribution of Code-Switched Words in Queries

| POS | Native Query | Mixed-Language Query (MiLQ) |
|---|---|---|
| NOUN | 136 (39.5%) | 2380 (48.1%) |
| PROPN | 130 (37.8%) | 720 (14.6%) |
| ADJ | 43 (12.5%) | 696 (14.1%) |
| ADP | 5 (1.5%) | 397 (8.0%) |
| VERB | 7 (2.0%) | 310 (6.3%) |
| PRON | | 202 (4.1%) |
| PCW | 14 (4.1%) | 71 (1.4%) |
| OTHER | 9 (2.6%) | 170 (3.4%) |

Figure 9: POS distribution of English code-switched words in queries from NeuCLIR22 and CLEF00-03 (left, "Native Query") and the MiLQ dataset (right). PCW refers to punctuation-combined words.

The distribution of English words in both native and mixed-language queries predominantly shows that nouns and proper nouns are the most common parts of speech. However, in our MiLQ dataset, nouns outnumber proper nouns, which contrasts with the distribution observed in native queries. Moreover, our dataset exhibits code-switching not only in nouns and proper nouns but also in a broader range of parts of speech, including adjectives, prepositions, verbs, and pronouns, showing a more diverse pattern of code-switching compared to existing datasets.

B Experiment Details

B.1 Benchmark Statistics

| | ZH | FA | RU | SW | SO | FI | DE | FR |
|---|---|---|---|---|---|---|---|---|
| # Queries | 47 | 45 | 44 | 151 | 151 | 151 | 151 | 151 |
| # Documents | 3.2M | 2.2M | 4.6M | 113K | 113K | 113K | 113K | 113K |
| # Passages | 19.8M | 14.0M | 25.1M | 1.02M | 1.02M | 1.02M | 1.02M | 1.02M |

Table 5: NeuCLIR22 (ZH, FA, RU) and CLEF00-03 (SW, SO, FI, DE, FR) benchmark statistics.

Following previous research (Huang et al., 2023), we use 151 queries from the CLEF C001–C200 topics, excluding those with no relevant judgments. English documents are sourced from the Los Angeles Times corpus, which includes 113k news articles. For high-resource languages such as Finnish, German, and French, queries are directly provided by the CLEF campaign. In contrast, for the low-resource languages, Bonab et al. (2019) provided Somali and Swahili translations of the English queries.

B.2 Evaluation Metrics

We evaluate retrieval performance using two standard information retrieval metrics:

MAP@100 (Mean Average Precision at 100): Evaluates ranked lists by averaging precision scores after each relevant binary-judged document is retrieved, up to 100 results. Higher scores indicate better overall retrieval.

nDCG@20 (normalized Discounted Cumulative Gain at 20): Assesses ranked lists by measuring cumulative gain from graded-relevance documents within the top 20, discounted by rank and normalized by the ideal gain. Higher scores mean better top-ranking of highly relevant items.
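For reference, both metrics can be sketched in a few lines of Python. This follows the usual trec_eval conventions (linear gain with a log2 discount for nDCG, and AP normalized by the total number of relevant documents); toolkits implement these with minor variations, so treat this as a sketch rather than the exact evaluation code used here.

```python
import math

def ndcg_at_20(ranked_ids, graded_rels):
    """nDCG@20: DCG over the top 20 results divided by the ideal DCG.
    `graded_rels` maps doc id -> graded relevance (unjudged docs count as 0)."""
    gains = [graded_rels.get(d, 0) for d in ranked_ids[:20]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    ideal = sorted(graded_rels.values(), reverse=True)[:20]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def ap_at_100(ranked_ids, relevant_ids):
    """Average precision over the top 100 results with binary judgments;
    MAP@100 is this value averaged over all queries."""
    hits, precision_sum = 0, 0.0
    for rank, d in enumerate(ranked_ids[:100], start=1):
        if d in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0
```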
B.3 Implementation Details

Model Configuration Our retrieval experiments are based on the ColBERT architecture (Khattab and Zaharia, 2020), a multi-vector approach to dense retrieval. We utilized the publicly available PLAID-X implementation (https://github.com/hltcoe/ColBERT-X) for all model training and inference. Consistent with standard ColBERT practices, most training artifacts and hyperparameters were adopted directly. Our primary modification involved setting the maximum document passage length to 180 tokens. Following established methods (Bendersky and Kurland, 2008; Dai and Callan, 2019), documents longer than this threshold were segmented into 180-token passages. During evaluation, the score for each document was determined using the maximum passage score (MaxP) strategy.
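A minimal sketch of the segmentation and MaxP scoring just described is shown below. It assumes non-overlapping 180-token windows (the text does not state whether passages overlap) and uses a generic `score_fn` as a stand-in for the ColBERT late-interaction scorer.

```python
def segment(doc_tokens, max_len=180):
    """Split a tokenized document into consecutive, non-overlapping
    passages of at most `max_len` tokens."""
    return [doc_tokens[i:i + max_len] for i in range(0, len(doc_tokens), max_len)]

def maxp_score(query, doc_tokens, score_fn, max_len=180):
    """MaxP: a document's score is the maximum of its passage scores."""
    return max(score_fn(query, passage) for passage in segment(doc_tokens, max_len))
```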
Model Backbones and Computational Resources We fine-tuned distinct ColBERT models for each source benchmark dataset, selecting multilingual pre-trained language model (mPLM) backbones based on practices in prior relevant research (Yang et al., 2024; Huang et al., 2023).

• For NeuCLIR22 (ZH, FA, RU): ColBERT was initialized from XLM-RoBERTa Large (https://huggingface.co/FacebookAI/xlm-roberta-large), which contains approximately 561 million parameters. Fine-tuning for these languages took about 48 hours.

• For CLEF00-03 (SW, SO, FI, DE, FR): mBERT-base-uncased (https://huggingface.co/google-bert/bert-base-multilingual-uncased) was employed as the ColBERT encoder. This model has approximately 179 million parameters. Fine-tuning for these languages took about 24 hours.

All models were trained on a system equipped with four NVIDIA A100-80GB GPUs. During training, the model is trained with 6 passages per query.

Hyperparameters A common set of optimization hyperparameters was used for fine-tuning all models. We employed the AdamW optimizer with a learning rate of 5e-6. All models underwent training for 200,000 steps. The total effective batch size was 64, achieved by using a batch size of 16 per GPU across the four GPUs.

B.4 Performance in Individual Languages

| Query Lang | BM25 | Mono-Distil | Mixed-Distil | Cross-Distil | mContriever | NMT→BM25 | NMT→Mono-Distil |
|---|---|---|---|---|---|---|---|
| SW | 11.56 | 15.93 | 26.64 | 35.56 | 24.52 | 43.70 | 51.00 |
| SW&EN | 35.82 | 38.21 | 48.04 | 47.77 | 32.27 | 47.76 | 56.15 |
| EN | 48.71 | 57.93 | 56.53 | 50.77 | 34.88 | 48.71 | 57.93 |
| SO | 13.14 | 3.12 | 11.07 | 24.91 | 5.58 | 38.43 | 49.28 |
| SO&EN | 40.88 | 34.46 | 43.69 | 49.77 | 25.18 | 48.44 | 57.69 |
| EN | 48.71 | 57.93 | 56.72 | 49.94 | 34.88 | 48.71 | 57.93 |
| FI | 7.02 | 29.65 | 40.50 | 41.99 | 27.23 | 45.06 | 55.14 |
| FI&EN | 40.59 | 45.87 | 49.94 | 49.07 | 32.23 | 45.50 | 56.20 |
| EN | 48.71 | 57.93 | 55.73 | 51.53 | 34.88 | 48.71 | 57.93 |
| DE | 11.70 | 44.84 | 47.16 | 49.69 | 30.15 | 45.70 | 56.60 |
| DE&EN | 42.33 | 56.89 | 55.43 | 54.99 | 32.06 | 47.58 | 57.51 |
| EN | 48.71 | 57.93 | 56.70 | 52.12 | 34.88 | 48.71 | 57.93 |
| FR | 6.95 | 49.46 | 51.96 | 54.95 | 32.23 | 47.27 | 57.02 |
| FR&EN | 21.84 | 54.02 | 55.66 | 55.59 | 33.53 | 48.17 | 56.64 |
| EN | 48.71 | 57.93 | 57.35 | 53.05 | 34.88 | 48.71 | 57.93 |
| CLIR | 10.07 | 28.60 | 35.41 | 41.42 | 23.94 | 44.03 | 53.81 |
| MQIR | 36.29 | 45.89 | 50.55 | 51.44 | 31.05 | 47.49 | 56.84 |
| MonoIR | 48.71 | 57.93 | 56.60 | 51.48 | 34.88 | 48.71 | 57.93 |

Table 6: Performance comparison of different retrieval models across multiple language settings for retrieving English documents. This table presents the performance of individual query languages in this scenario. Additionally, XX&EN represents queries mixing the native language and English. The metric used is MAP@100 (%). The best score(s) for each individual language query type (row) are indicated in bold. If there is a unique best score, the second best score(s) are underlined.

| Query Lang | BM25 | Mono-Distill | Mixed-Distill | Cross-Distill | mContriever |
|---|---|---|---|---|---|
| ZH | 25.72 | 46.82 | 49.59 | 48.61 | 32.90 |
| ZH & EN | 3.67 | 41.13 | 47.46 | 45.48 | 19.54 |
| EN | 5.74 | 38.52 | 48.73 | 48.91 | 21.22 |
| FA | 34.29 | 48.97 | 46.06 | 47.26 | 15.26 |
| FA & EN | 26.02 | 48.69 | 45.36 | 45.39 | 12.97 |
| EN | 0.07 | 46.29 | 47.28 | 47.99 | 11.69 |
| RU | 36.56 | 47.97 | 48.95 | 49.46 | 36.86 |
| RU & EN | 6.14 | 44.99 | 49.71 | 48.02 | 32.74 |
| EN | 1.11 | 44.66 | 51.42 | 51.30 | 29.27 |
| MonoIR | | | | | |
SSR-Zero: Simple Self-Rewarding Reinforcement Learning for Machine Translation

Wenjie Yang, Mao Zheng, Mingyang Song, Zheng Li
Tencent Hunyuan
leonzxyang@tencent.com

Abstract

Large language models (LLMs) have recently demonstrated remarkable capabilities in machine translation (MT). However, most advanced MT-specific LLMs heavily rely on external supervision signals during training, such as human-annotated reference data or trained reward models (RMs), which are often expensive to obtain and challenging to scale. To overcome this limitation, we propose a Simple Self-Rewarding (SSR) Reinforcement Learning (RL) framework for MT that is reference-free, fully online, and relies solely on self-judging rewards. Training with SSR using 13K monolingual examples and Qwen2.5-7B as the backbone, our model SSR-Zero-7B outperforms existing MT-specific LLMs, e.g., TowerInstruct-13B and GemmaX2-28-9B, as well as larger general LLMs like Qwen2.5-32B-Instruct, in English↔Chinese translation tasks from the WMT23, WMT24, and Flores200 benchmarks. Furthermore, by augmenting SSR with external supervision from COMET, our strongest model, SSR-X-Zero-7B, achieves state-of-the-art performance in English↔Chinese translation, surpassing all existing open-source models under 72B parameters and even outperforming closed-source models, e.g., GPT-4o and Gemini 1.5 Pro. Our analysis highlights the effectiveness of the self-rewarding mechanism compared to the external LLM-as-a-judge approach in MT and demonstrates its complementary benefits when combined with trained RMs. Our findings provide valuable insight into the potential of self-improving RL methods. We have publicly released our code, data, and models (https://github.com/Kelaxon/SSR-Zero).

1 Introduction

Large language models (LLMs) have recently achieved notable advances in machine translation (MT) (Aryabumi et al., 2024; Rei et al., 2024b; Cui et al., 2025), benefiting greatly from their ability to scale to extensive training data and effectively leverage pre-trained knowledge. For example, MT-specific LLMs such as Tower and X-ALMA achieve state-of-the-art (SOTA) translation performance across various languages by employing continual pre-training (CPT) on billions of tokens from parallel and monolingual data, followed by fine-tuning on high-quality human-annotated data (Alves et al., 2024; Cui et al., 2025). However, relying on extensive, high-quality training datasets is not sustainable as training scales, as they are expensive and difficult to obtain.

Another recent trend explores improving LLMs through inference-time reasoning, exemplified by models such as OpenAI o1 (Jaech et al., 2024) and DeepSeek R1 (Guo et al., 2025). These models generate a long chain-of-thought (CoT) before giving final answers and perform especially well in logic, coding, and mathematics (Guo et al., 2025; Xie et al., 2025; Song et al., 2025), suggesting potential for MT tasks (Chen et al., 2025). Typically, reasoning models adopt an R1-like training paradigm, which optimizes models with Reinforcement Learning (RL) algorithms, e.g., GRPO (Shao et al., 2024) and DAPO (Yu et al., 2025), using rewards derived from the difference between model output and ground-truth data. Recent work extends these reasoning methods to MT by either designing explicit reasoning patterns (Wang et al., 2024, 2025) or allowing models to autonomously learn reasoning steps (Feng et al., 2025).
Yet, current approaches still rely heavily on external supervision, either from costly human annotations or from pre-trained reward models distilled from expensive labeled data, posing ongoing scalability issues.

To address this challenge, we propose a Simple Self-Rewarding (SSR) RL framework for MT, eliminating the need for any external supervision. SSR leverages a self-judging mechanism in which the LLM itself evaluates its translation outputs and derives the reward signals used in the GRPO algorithm. Specifically, we train an uninstructed Qwen2.5-7B model via SSR using 13K monolingual examples (6.5K English and 6.5K Chinese), resulting in SSR-Zero-7B. This model achieves significant improvements in MT quality, with gains of 18.11% for Chinese-to-English and 14.74% for English-to-Chinese translation. Extensive experiments on the WMT23, WMT24, and Flores200 benchmarks demonstrate that SSR-Zero-7B surpasses existing MT-specialized LLMs such as TowerInstruct-13B and GemmaX2-28-9B, as well as general-purpose LLMs such as Qwen2.5-32B-Instruct. By further augmenting SSR with external reward signals from COMET, our strongest model SSR-X-Zero-7B achieves SOTA results for English↔Chinese translation among open-source LLMs under 72B parameters, even outperforming closed-source models such as GPT-4o and Gemini-1.5 Pro. Lastly, we conducted comparative analyses to further explore the effectiveness of the self-rewarding mechanism versus external reward methods. These include trained MT-evaluation reward models (COMET and COMETKIWI) as well as frozen LLM-as-a-judge models (Qwen2.5-7B and Qwen2.5-7B-Instruct). Additionally, we examine how incorporating reference data in the reward methods affects the trained models' translation quality.

In summary, our key contributions are: 1) We develop SSR, a fully online, self-assessing RL framework for MT, eliminating reliance on external reward models or reference translations. 2) Our experiments demonstrate the effectiveness of SSR: our model, SSR-Zero-7B, outperforms many existing advanced open-source MT-specific LLMs and larger general LLMs. 3) We illustrate that SSR-generated rewards effectively complement external rewards, resulting in our model SSR-X-Zero-7B achieving SOTA performance in English↔Chinese translation. 4) We provide a detailed analysis comparing SSR with existing external reward methods, offering insights into effective reward selection for MT systems. By open-sourcing our code, data, and models, our work opens promising new directions toward self-improving MT models without costly supervision.

2 Related Work

2.1 Machine Translation with LLMs

Recent advances in LLMs have substantially improved MT across various language pairs (Costa-Jussà et al., 2022; Lu et al., 2024; Workshop et al., 2022). Many SOTA MT-focused LLMs (Rei et al., 2024b; Cui et al., 2025) employ CPT using extensive mixed parallel and monolingual data (over 10 billion tokens) to achieve outstanding MT performance. Moreover, Rei et al. (2024a) demonstrated that expanding the variety of training tasks, such as translation evaluation, MQM-based error-span detection, and named-entity recognition, can further improve MT capabilities. Furthermore, Cui et al. (2025) introduced an optimized sequential data mixing strategy, prioritizing parallel data over monolingual data during CPT. Their model achieves highly competitive results comparable to Google Translate and GPT-4-turbo.
Despite producing impressive translation quality, these existing methods rely heavily on vast amounts of high-quality annotated or curated data. Acquiring and scaling such resources has become increasingly expensive and challenging, creating significant bottlenecks for the sustainable development of MT models.

2.2 MT via Reinforcement Learning

Early MT research employed reinforcement learning (RL) to tackle exposure bias, an issue inherent in supervised fine-tuning (Bengio et al., 2015), as RL optimizes models based on their own predictions rather than relying solely on ground-truth inputs. Existing work used RL algorithms such as REINFORCE (Ranzato et al., 2015), Actor-Critic (Bahdanau et al., 2016), and policy gradient methods (Yu et al., 2017), leveraging rule-based metrics (e.g., BLEU, ROUGE) (Ranzato et al., 2015) or trained reward models (Wu et al., 2017) for training. In the era of LLMs, DeepSeek-R1/R1-Zero showed that simple RL methods, such as GRPO combined with verifiable rewards, could significantly enhance reasoning capabilities (Guo et al., 2025). This R1/R1-Zero training paradigm has also been applied to translation tasks recently. For instance, He et al. (2025) fine-tuned their model with manually crafted chain-of-thought data and trained it using COMET-based rewards and REINFORCE++. Feng et al. (2025) utilized BLEU, COMETKiwi, and their combination as reward signals, achieving SOTA performance with MT-R1-Zero-Sem. Wang et al. (2025) developed DeepTrans, applying a large LLM-based judge to evaluate both reasoning steps and translations during RL training, improving literary translation.

Figure 1: Overview of the SSR framework. SSR is an R1-Zero-like RL training method for machine translation, which uses the same model as both actor and judge. It does not require external reward models or human-annotated reference data. Prompts shown here are simplified for clarity.

Nevertheless, current RL-based methods for MT still depend heavily on external supervision signals, which often require additional training or are challenging to acquire, especially in low-resource situations.

2.3 Self-Judging in RL

Recent work has investigated self-rewarding mechanisms, where LLMs generate feedback signals to train themselves (Chen et al., 2024; Wu et al., 2024; Zhang et al., 2025b). This approach holds promise in reducing dependency on human annotations or frozen reward models distilled from human judgments. For instance, Chen et al. (2024) fine-tuned a Llama 2-70B model by first using seed instruction data and subsequently iterating self-instruction sampling, self-judging, and DPO training. Their results demonstrated improvements in both instruction-following and evaluation capabilities. Similar self-improving methods such as self-play and self-judging have enhanced math reasoning (Zhang et al., 2025a; Zhao et al., 2025), visual modality alignment (Zhou et al., 2024), and cross-lingual transfer (Chen et al., 2024; Geng et al., 2024; Yang et al., 2024b). However, these self-judging approaches remain largely underexplored in MT tasks. One exception is Zou et al. (2025), who proposed a self-play framework employing Monte Carlo Tree Search to derive preferences based on cross-lingual semantic consistency from the model's own outputs. They then used this data for preference learning. Yet, even with the same base model (Qwen-2.5-7B), and unlike our approach, their method did not outperform MT-specific LLMs such as TowerInstruct.

Compared to existing MT training methods, our approach eliminates the requirement for external supervision, operates fully online, and achieves strong performance even when trained exclusively on monolingual data. Our results demonstrate that powerful pre-trained models inherently possess sufficient translation and MT-evaluation capabilities. This finding suggests a promising direction toward developing self-improving MT systems that can be effectively trained without relying on human feedback.

3 Methodology

In this section, we first outline the SSR methodology (§3.1), followed by an introduction of the reward design within the RL framework (§3.2). Finally, we introduce the RL algorithm employed in our work (§3.3).

3.1 Simple Self-Rewarding (SSR)

SSR is a fully online, R1-Zero-like RL approach with a novel self-evaluation mechanism that simplifies reward signal acquisition. This mechanism leverages a pre-trained LLM that alternates between acting as an actor and as a judge. As illustrated in Figure 1, at each training step the pre-trained model first plays the role of an actor that accepts a batch of translation prompts (①). For each prompt, the model generates a group of N candidate translations (②). These candidate translations are then inserted into separate LLM-as-a-judge prompts (③). Next, the model switches to a judge role, evaluating all prompts to estimate translation quality and generate judgments (④). Each judgment includes a score from 0 to 100, where 0 indicates a poor translation and 100 indicates a perfect translation. We extract reward scores from the judgments using regular expressions (⑤) and then use them in the RL algorithm (i.e., GRPO) to update the actor model's parameters (⑥). In total, one translation prompt generates N candidate translations and N reward scores. We iterate Steps ① through ⑥ multiple times until the model's performance converges (⑦).
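The rollout-and-reward loop (Steps ② through ⑤) can be sketched roughly as follows. `model.generate`, `actor_prompt`, and `judge_prompt` are hypothetical stand-ins for the sampling interface and the two prompt templates shown below; the reward combination (1 + score on a correct format, 0 otherwise) anticipates the definition in Section 3.2.

```python
import re

ANSWER_RE = re.compile(r"<answer>(.*?)</answer>", re.DOTALL)
SCORE_RE = re.compile(r"\d+(?:\.\d+)?")

def ssr_rewards(model, src_text, src_lang, tgt_lang, group_size=16):
    """Rewards for one translation prompt (a sketch of Steps ②-⑤)."""
    rewards = []
    for _ in range(group_size):
        # ② the actor samples a candidate translation (temperature 1.0, Sec. 4.1)
        output = model.generate(actor_prompt(src_text, tgt_lang), temperature=1.0)
        match = ANSWER_RE.search(output)
        if match is None:                # format reward is 0 -> overall reward is 0
            rewards.append(0.0)
            continue
        translation = match.group(1).strip()
        # ③/④ the same model judges its own translation, greedily (temperature 0)
        judgment = model.generate(
            judge_prompt(src_text, translation, src_lang, tgt_lang),
            temperature=0.0,
        )
        # ⑤ extract the 0-100 score from the judge's <answer> tags
        ans = ANSWER_RE.search(judgment)
        num = SCORE_RE.search(ans.group(1)) if ans else None
        score = float(num.group()) if num else 0.0
        rewards.append(1.0 + score)      # r_all = 1 + r_self when the format holds
    return rewards                       # ⑥ normalized into advantages by GRPO
```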
Below are the prompts for generating translations (i.e., the actor prompt) and evaluations (the judge prompt) used in SSR training. The actor prompt builds on DeepSeek-R1-Zero's system prompt (Guo et al., 2025), requiring the model to answer within a specific format (i.e., <answer></answer>) and to think before responding.

Actor Prompt: Generating Translations
    A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within <think> </think> and the answer is enclosed within <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
    User: Translate the following text to {tgt_lang}: {src_text}
    Assistant:

Judge Prompt: Self-Evaluating
    A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within <think> </think> and the answer is enclosed within <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>.
    User: Score the following translation from {src_lang} to {tgt_lang} on a continuous scale from 0 to 100, where a score of zero means "no meaning preserved" and a score of one hundred means "perfect meaning and grammar". Additionally, give a score of zero if the translation 1) contains irrelevant content, such as interpretations of the translation, 2) does not match the target language, or 3) contains multiple translations.
    {src_lang} source: {src_text}
    {tgt_lang} translation: {translated_text}
    Assistant:

The judge prompt is modified from GEMBA-DA (Kocmi and Federmann, 2023), a widely used LLM-as-a-judge template for direct assessment of translation, which achieved SOTA performance in translation quality assessment using GPT-4. Compared to GEMBA-DA, our judge prompt includes a "think-before-answer" system instruction. This addition explicitly encourages the model to take advantage of the reasoning capabilities acquired during RL training when evaluating translations. Additionally, we instruct the judge to give a zero score to unwanted candidate translations containing irrelevant content or language misalignment. During training, only the content within the actor's <answer></answer> tags is extracted and incorporated into the judge's instructions.

3.2 Reward Modeling

Our RL training utilizes two types of rewards: a self reward and a format reward.

Self Reward This reward estimates the quality of the model's translation using the training model itself, denoted by:

$$r_{\text{self}} = M_{\text{self}}(src, trans)$$

where $M_{\text{self}}$ is the model being trained. Using the judge prompt, the model takes both the source text and the model translation (without reference translations) and generates a judgment containing a score on a 100-point scale. We extract this score from the <answer></answer> tags in the judge's responses using regular expressions.

Format Reward This reward checks whether the model generation follows the format defined in the actor prompt:

$$r_{\text{format}} = \begin{cases} 1, & \text{if the format is correct} \\ 0, & \text{if the format is incorrect} \end{cases}$$

Overall Reward In training, we combine the two types of rewards to train our SSR-Zero model:

$$r_{\text{all}} = \begin{cases} 1 + r_{\text{self}}, & \text{if } r_{\text{format}} \neq 0 \\ 0, & \text{if } r_{\text{format}} = 0 \end{cases}$$

In addition, we investigate integrating external reward signals to further enhance model performance. Our strongest model, SSR-X-Zero (SSR with eXternal rewards), incorporates rewards computed by COMET, an automatic MT evaluation metric (Rei et al., 2022) that scores translation quality using source sentences, machine-generated translations, and reference translations:

$$r'_{\text{all}} = \begin{cases} 1 + r_{\text{self}} + r_{\text{COMET}}, & \text{if } r_{\text{format}} \neq 0 \\ 0, & \text{if } r_{\text{format}} = 0 \end{cases}$$

$$r_{\text{COMET}} = M_{\text{COMET}}(src, trans, ref)$$
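As an illustration, the external reward $r_{\text{COMET}}$ could be computed with the Unbabel COMET package roughly as below. The checkpoint name matches the wmt22-comet-da model referenced in Section 5, but the exact signature and return type of `predict` vary across library versions, so this is a sketch rather than the authors' code.

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

comet = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

def comet_reward(src: str, trans: str, ref: str) -> float:
    """r_COMET = M_COMET(src, trans, ref); segment-level score, roughly in [0, 1]."""
    out = comet.predict([{"src": src, "mt": trans, "ref": ref}],
                        batch_size=1, gpus=0)
    return out.scores[0]
```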
3.3 RL Algorithm

We follow the work of Shao et al. (2024) and Guo et al. (2025) by adopting the Group Relative Policy Optimization (GRPO) algorithm for training, as it demonstrates stability and strong performance. Specifically, for each given translation prompt $p$, the policy model $\pi_{\theta_{\text{old}}}$ first samples a group of $G$ candidate translations $\{o_i\}_{i=1}^{G}$. Then, using the same policy model, we perform the SSR procedure described earlier to obtain rewards $\{r^i_{\text{all}}\}_{i=1}^{G}$ for all candidate translations. Next, we compute the advantage for the $i$-th candidate translation by normalizing the group-level rewards:

$$A_i = \frac{r^i_{\text{all}} - \text{mean}\left(\{r^i_{\text{all}}\}_{i=1}^{G}\right)}{\text{std}\left(\{r^i_{\text{all}}\}_{i=1}^{G}\right)}$$

Using these advantages, GRPO optimizes the policy by maximizing the following objective:

$$\mathcal{J}_{\text{GRPO}}(\theta) = \mathbb{E}_{p \sim P(Q),\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\text{old}}}(O \mid p)} \left[ \frac{1}{G} \sum_{i=1}^{G} \min\!\left( \frac{\pi_\theta(o_i \mid p)}{\pi_{\theta_{\text{old}}}(o_i \mid p)} A_i,\ \text{clip}\!\left( \frac{\pi_\theta(o_i \mid p)}{\pi_{\theta_{\text{old}}}(o_i \mid p)},\ 1-\varepsilon,\ 1+\varepsilon \right) A_i \right) - \beta\, D_{\text{KL}}\left(\pi_\theta \,\|\, \pi_{\text{ref}}\right) \right]$$

where $\varepsilon$ and $\beta$ are hyperparameters, $\pi_{\text{ref}}$ is the reference model, and $D_{\text{KL}}(\pi_\theta \| \pi_{\text{ref}})$ is the KL divergence between $\pi_\theta$ and $\pi_{\text{ref}}$.
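A compact, sequence-level sketch of the advantage normalization and clipped objective is given below. Real GRPO implementations (e.g., in verl) operate on token-level log-probabilities; the KL term is omitted here because the paper sets its coefficient β to zero (Section 4.1).

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """A_i = (r_i - mean(r)) / std(r), computed within one group of G samples."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def grpo_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate for one group; minimizing this loss maximizes J_GRPO."""
    ratio = torch.exp(logp_new - logp_old)               # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```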
4 Experiments

4.1 Experimental Setup

Dataset In this paper, we focus on bidirectional translation between English and Chinese, with potential expansion to other language pairs in future work. We use the training dataset released by Feng et al. (2025), originally collected from the WMT 2017 through WMT 2020 English-Chinese sentence pairs. Following their preprocessing, sentences shorter than 30 characters were filtered out. Unlike the original bilingual setup, we use these data monolingually, splitting sentence pairs into separate English and Chinese examples to serve as monolingual source sentences for training. The resulting dataset comprises 13,130 monolingual examples (6,565 in English and 6,565 in Chinese).

For testing, we evaluate translation performance on the English-to-Chinese (EN-ZH) and Chinese-to-English (ZH-EN) benchmarks from WMT23 (https://www2.statmt.org/wmt23/translation-task.html), WMT24 (https://www2.statmt.org/wmt24/translation-task.html), and FLORES-200 (Costa-Jussà et al., 2022).

Metrics Following the settings in Rei et al. (2024b), we adopt two widely used automatic MT-evaluation metrics: the reference-based XCOMET-XXL metric (Guerreiro et al., 2024) and the reference-free COMETKIWI-XXL metric (Rei et al., 2023), both in their largest available model size.

Baselines We compare our models with the following baseline model categories:

Closed-source models, including GPT-4o-20241120 (Hurst et al., 2024), Claude-3.5-Sonnet-20240620 (Anthropic, 2024), and Gemini-1.5-Pro.

Open-source general-purpose LLMs, including the Qwen3 series (Yang et al., 2025) (Qwen3-32B, Qwen3-8B), the Qwen2.5 series (Yang et al., 2024a) (Qwen2.5-72B-Instruct, Qwen2.5-32B-Instruct, Qwen2.5-7B-Instruct, Qwen2.5-7B), Qwen's reasoning model QwQ-32B (Team, 2025), and the Gemma2 series (Team et al., 2024) (Gemma2-27B-it and Gemma2-9B-it).

Open-source MT-specific LLMs, including the Tower series (Alves et al., 2024) (TowerInstruct-7B-v0.2 and TowerInstruct-13B-v0.1), GemmaX2-28-9B-v0.1 (Cui et al., 2025), and DeepTrans-7B (Wang et al., 2025).

Implementation Details We use Qwen2.5-7B as the backbone model and adopt the GRPO algorithm implemented in the verl framework (https://github.com/volcengine/verl). All experiments share the same training settings: a batch size of 128, a constant learning rate of 5e-7, a rollout number of 16, a sampling temperature of 1.0 for generation, and a temperature of zero when judging. We set the maximum generation length to 1024 tokens during training. Both the KL and entropy coefficients of GRPO are set to zero, as we observed better performance with this configuration. All models are trained for four epochs using eight GPUs, each providing 148 TFLOPs of computational power when optimizing models with BF16 precision. For training SSR-X-Zero-7B, we add an additional GPU to serve the COMET model. We save checkpoints every 20 steps during training and report the best-performing one according to the aggregated average scores of XCOMET-XXL and COMETKIWI-XXL on the test sets. Training SSR-Zero-7B takes about 17 hours, while SSR-X-Zero-7B training takes 42 hours in total.

| Models | KIWI | XCM | KIWI | XCM | KIWI | XCM | Avg. | KIWI | XCM | KIWI | XCM | KIWI | XCM | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Closed-Source LLMs | | | | | | | | | | | | | | |
| Claude-3.5-Sonnet | 81.61 | 93.06 | 81.06 | 90.54 | 89.41 | 97.68 | 88.89 | 80.15 | 92.00 | 80.00 | 86.31 | 89.47 | 94.32 | 87.04 |
| GPT-4o | 80.92 | 92.15 | 79.90 | 89.06 | 88.94 | 96.50 | 87.91 | 76.71 | 88.56 | 77.42 | 83.95 | 88.30 | 93.30 | 84.71 |
| Gemini-1.5-Pro | 80.71 | 92.44 | 79.02 | 88.90 | 88.15 | 97.32 | 87.76 | 79.80 | 91.95 | 79.54 | 87.11 | 89.30 | 94.54 | 87.04 |
| Open-Source General-Purpose LLMs | | | | | | | | | | | | | | |
| Qwen3-32B (thinking) | 79.74 | 90.79 | 79.20 | 88.47 | 87.68 | 95.75 | 86.94 | 76.94 | 89.75 | 76.96 | 84.10 | 87.45 | 92.18 | 84.56 |
| Qwen3-32B | 80.28 | 91.95 | 79.95 | 89.53 | 88.88 | 97.18 | 87.96 | 79.27 | 91.28 | 79.51 | 86.63 | 89.69 | 94.07 | 86.74 |
| Qwen3-8B (thinking) | 78.30 | 89.03 | 77.99 | 86.94 | 85.82 | 93.89 | 85.33 | 74.94 | 88.22 | 75.39 | 82.25 | 86.08 | 91.02 | 82.98 |
| Qwen3-8B | 79.87 | 91.42 | 79.58 | 89.02 | 88.61 | 96.55 | 87.51 | 78.59 | 90.90 | 78.71 | 85.31 | 88.90 | 93.30 | 85.95 |
| Qwen2.5-72B-Instruct | 80.62 | 92.14 | 80.46 | 90.06 | 88.90 | 97.28 | 88.24 | 78.18 | 91.34 | 78.18 | 85.13 | 88.04 | 93.20 | 85.68 |
| Qwen2.5-32B-Instruct | 77.73 | 89.28 | 78.77 | 88.69 | 87.13 | 95.50 | 86.18 | 77.73 | 90.23 | 78.77 | 83.48 | 87.13 | 91.99 | 84.89 |
| Qwen2.5-7B-Instruct | 77.56 | 89.40 | 76.71 | 87.12 | 86.28 | 94.06 | 85.19 | 73.81 | 88.11 | 72.98 | 80.93 | 85.18 | 89.90 | 81.82 |
| QwQ-32B | 74.61 | 85.12 | 75.08 | 84.34 | 80.88 | 89.21 | 81.54 | 77.33 | 89.10 | 78.13 | 85.03 | 86.51 | 90.93 | 84.51 |
| Gemma2-27B-it | 80.32 | 91.96 | 79.42 | 89.14 | 88.64 | 96.72 | 87.70 | 76.95 | 90.50 | 77.38 | 84.17 | 87.79 | 92.51 | 84.88 |
| Gemma2-9B-it | 79.86 | 91.21 | 79.25 | 88.41 | 88.32 | 96.25 | 87.22 | 75.22 | 89.66 | 74.15 | 81.65 | 85.95 | 90.90 | 82.92 |
| MT-Specific LLMs | | | | | | | | | | | | | | |
| TowerInstruct-7B-v0.2 | 77.78 | 89.13 | 76.96 | 85.98 | 86.95 | 94.88 | 85.28 | 73.53 | 87.46 | 70.87 | 77.53 | 84.39 | 88.57 | 80.39 |
| TowerInstruct-13B-v0.1 | 78.53 | 89.90 | 77.57 | 87.12 | 87.30 | 95.80 | 86.04 | 75.56 | 89.28 | 73.81 | 80.81 | 86.22 | 90.69 | 82.73 |
| DeepTrans-7B | / | / | / | / | / | / | / | 80.01 | 89.00 | 78.89 | 83.85 | 89.23 | 92.85 | 85.64 |
| GemmaX2-28-9B-v0.1 | 79.40 | 90.63 | 78.71 | 88.60 | 87.85 | 96.33 | 86.92 | 77.10 | 90.68 | 75.88 | 83.33 | 87.58 | 92.83 | 84.57 |
| Ours | | | | | | | | | | | | | | |
| Qwen2.5-7B | 62.62 | 75.69 | 69.04 | 77.33 | 73.62 | 85.54 | 73.97 | 68.25 | 81.63 | 64.28 | 69.48 | 82.00 | 86.07 | 75.29 |
| SSR-Zero-7B | 79.29 | 92.04 | 79.04 | 89.19 | 87.97 | 96.70 | 87.37 | 79.69 | 91.18 | 79.34 | 85.34 | 89.25 | 93.52 | 86.39 |
| SSR-X-Zero-7B | 80.62 | 91.92 | 80.56 | 89.42 | 88.84 | 96.62 | 88.00 | 81.11 | 91.56 | 79.67 | 86.75 | 90.08 | 93.98 | 87.19 |

Table 1: Translation quality measured by COMETKIWI-XXL (KIWI) and XCOMET-XXL (XCM) in English-Chinese directions (EN↔ZH). The first seven score columns report ZH→EN and the last seven EN→ZH, each as WMT23, WMT24, and Flores200 followed by the average. In the original table, bold and underlined indicates the best-performing model, bold only the second-best, and underlined only the third-best; "(thinking)" denotes reasoning models or models operating in thinking mode.

4.2 Main Results

As shown in Table 1, our SSR-Zero-7B model demonstrates strong translation performance compared to existing open-source models. Specifically, it achieves an average score of 87.37 in the ZH→EN direction, outperforming all MT-specific baselines, including GemmaX2-28-9B-v0.1 (86.92), TowerInstruct-13B-v0.1 (86.04), and TowerInstruct-7B-v0.2 (85.28). Note that DeepTrans-7B only supports EN→ZH translation and erroneously produces Chinese output for ZH→EN translation tasks. SSR-Zero-7B also surpasses several larger general-purpose LLMs such as Gemma2-9B-it (87.22), QwQ-32B (81.54), Qwen2.5-32B-Instruct (86.18), and Qwen3-32B in thinking mode (86.94). However, it trails behind models including Qwen3-32B in non-thinking mode (87.96), Qwen3-8B (87.51), Gemma2-27B-it (87.70), and Qwen2.5-72B-Instruct (88.24), with the latter achieving the highest score in ZH→EN.

In the EN→ZH direction, SSR-Zero-7B achieves a score of 86.39, outperforming all open-source baselines, including Qwen2.5-72B-Instruct (85.93). It only slightly lags behind Qwen3-32B, which achieves 86.74.

Figure 2: Changes in average response length (a) and training rewards (b) of SSR/SSR-X-Zero-7B during GRPO training.
Compared to closed-source models, SSR-Zero-7B scores slightly lower in ZH→EN translation (87.37) than GPT-4o (87.91), Gemini-1.5-Pro (87.76), and Claude-3.5-Sonnet (88.89). However, in the EN→ZH direction, SSR-Zero-7B surpasses GPT-4o, achieving 86.39 compared to GPT-4o's 84.71.

Compared with the backbone model (Qwen2.5-7B), SSR-Zero-7B significantly improves translation performance: from 73.97 to 87.37 (+18.11%) in ZH→EN, and from 75.29 to 86.39 (+14.74%) in EN→ZH. These results clearly demonstrate the effectiveness of leveraging the model's self-generated rewards to enhance MT performance.

Furthermore, augmenting SSR with external reward models yields our strongest model, SSR-X-Zero-7B, which obtains average scores of 88.00 in ZH→EN and 87.19 in EN→ZH. It surpasses nearly all open-source baselines in both ZH↔EN directions, achieving new SOTA performance among open-source models under 72B parameters. It only slightly trails Qwen2.5-72B-Instruct (88.24) in the ZH→EN direction.

Figure 3: Changes in translation quality during training, measured by the average scores of COMETKIWI-XXL and XCOMET-XXL on the EN→ZH (a) and ZH→EN (b) benchmarks.

4.3 Training Dynamics of SSR

We also report how the response length and test-set performance evolve during SSR/SSR-X-Zero-7B training. As shown in Figure 2, we did not observe the increase in output length typical of R1-like training in mathematics (Guo et al., 2025), nor the curve seen in Feng et al. (2025), which first decreases and then increases. As training progressed, the model quickly reduced the output length from about 200 to 60-70 tokens and did not generate meaningful CoTs. A typical CoT before translation was "<think> I need to translate this sentence from {src_lang} to {tgt_lang}. </think>".

Despite this, we observed an increasing trend in test-set performance as training progressed, as shown in Figure 3. We also noticed that the EN→ZH performance of SSR-Zero-7B saturates after approximately 3 epochs (around 300 steps) and decreases afterward, while its ZH→EN performance converges earlier, at roughly 200 steps. In contrast, SSR-X-Zero-7B demonstrates better stability and continuous improvement during training. Upon inspection, we found that SSR-Zero-7B began enclosing translated outputs in extraneous quotation marks (i.e., <answer>"translated text"</answer>) after 300 steps, which our regular expression could not filter out during evaluation. This formatting issue led the automated metrics XCOMET-XXL and COMETKIWI-XXL to produce lower evaluation scores. This issue was not observed during SSR-X-Zero-7B's training. We leave further exploration of maintaining consistent output formatting in SSR training for future work.

5 Comparative Analysis

Although SSR and its combination with external reward models (RMs) effectively enhance MT performance, two research questions (RQs) remain unclear: 1) How does self-rewarding compare with widely used external RMs? 2) How does the inclusion of reference data in RMs affect the final translation performance? To clarify these points, we conducted a detailed analysis, presented below.

5.1 RQ1: SSR vs. External Reward Models

Specifically, we compare our method with two categories of external frozen RMs: 1) trained MT-evaluation RMs, including COMET (https://huggingface.co/Unbabel/wmt22-comet-da) and COMETKIWI (https://huggingface.co/Unbabel/wmt22-cometkiwi-da), and 2) LLM-based judge RMs, including Qwen2.5-7B and Qwen2.5-7B-Instruct, using the same judge prompts employed by SSR.

| Models | KIWI | XCM | KIWI | XCM | KIWI | XCM | Avg. | KIWI | XCM | KIWI | XCM | KIWI | XCM | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-7B | 62.62 | 75.69 | 69.04 | 77.33 | 73.62 | 85.54 | 73.97 | 68.25 | 81.63 | 64.28 | 69.48 | 82.00 | 86.07 | 75.29 |
| w/ External trained MT-evaluation RM: | | | | | | | | | | | | | | |
| - COMET | 80.71 | 92.44 | 79.02 | 88.90 | 88.15 | 97.32 | 87.76 | 79.80 | 91.95 | 79.54 | 87.11 | 89.30 | 94.54 | 87.04 |
| - COMETKIWI | 79.89 | 91.80 | 81.04 | 89.04 | 89.12 | 96.48 | 87.90 | 81.40 | 90.82 | 80.06 | 84.81 | 90.11 | 93.30 | 86.75 |
| w/ External LLM-as-a-judge RM (Referenceless): | | | | | | | | | | | | | | |
| - Qwen2.5-7B | 78.61 | 91.30 | 78.54 | 87.80 | 87.96 | 96.30 | 86.75 | 76.31 | 89.81 | 75.98 | 82.21 | 87.28 | 92.19 | 83.96 |
| - Qwen2.5-7B-Instruct | 79.10 | 91.58 | 79.28 | 88.56 | 87.98 | 96.19 | 87.12 | 77.03 | 89.73 | 76.60 | 82.16 | 87.87 | 92.07 | 84.24 |
| w/ External LLM-as-a-judge RM (with Reference): | | | | | | | | | | | | | | |
| - Qwen2.5-7B | 79.30 | 91.11 | 79.33 | 88.57 | 88.27 | 96.54 | 87.19 | 77.90 | 90.00 | 77.69 | 83.43 | 88.38 | 92.63 | 85.01 |
| - Qwen2.5-7B-Instruct | 79.10 | 91.58 | 79.28 | 88.56 | 87.98 | 96.19 | 87.12 | 77.03 | 89.73 | 76.60 | 82.16 | 87.87 | 92.07 | 84.24 |
| Ours | | | | | | | | | | | | | | |
| SSR-Zero-7B | 79.29 | 92.04 | 79.04 | 89.19 | 87.97 | 96.70 | 87.37 | 79.69 | 91.18 | 79.34 | 85.34 | 89.25 | 93.52 | 86.39 |
| - Ablation: w/ ref | 79.67 | 92.22 | 79.75 | 89.45 | 88.58 | 96.69 | 87.73 | 77.91 | 90.62 | 77.63 | 84.15 | 88.25 | 92.96 | 85.25 |
| SSR-X-Zero-7B | 80.62 | 91.92 | 80.56 | 89.42 | 88.84 | 96.62 | 88.00 | 81.11 | 91.56 | 79.67 | 86.75 | 90.08 | 93.98 | 87.19 |

Table 2: Translation quality of models trained via RL with different rewarding methods, measured by COMETKIWI-XXL (KIWI) and XCOMET-XXL (XCM) in English-Chinese directions (EN↔ZH). The first seven score columns report ZH→EN and the last seven EN→ZH, each as WMT23, WMT24, and Flores200 followed by the average. In the original table, bold and underlined indicates the best-performing model, bold only the second-best, and underlined only the third-best.

Results The evaluation results are summarized in Table 2. As expected, models trained with specialized MT-evaluation RMs (i.e., COMET or COMETKIWI) outperform SSR-Zero-7B, which relies solely on intrinsic judgments from the training model, in average EN→ZH translation scores. Additionally, these specialized RMs also outperform all methods using external LLM-as-a-judge approaches based on the 7B-sized Qwen2.5 model. This indicates that dedicated RMs trained on large annotated datasets possess stronger MT-evaluation capabilities than general-purpose LLMs such as Qwen2.5-7B(-Instruct). Nevertheless, the SSR mechanism provides complementary benefits: SSR-X-Zero-7B, which integrates self-rewarding with COMET supervision, still achieves the highest scores in both translation directions.

Furthermore, SSR-Zero-7B substantially outperforms models with the same backbone trained using external LLM judges of the same size. This indicates that, during SSR training, improvements in translation capability may simultaneously enhance the model's judgment ability.

5.2 RQ2: Reference vs. Referenceless Rewarding

We further examine the influence of reference translations on reward signals and their subsequent impact on MT performance. Specifically, we introduce a variant of SSR-Zero that includes a reference translation in the judge prompt. The reference translation is obtained using the original target sentence from the training dataset. We use the same setting for the LLM-as-a-judge baselines.

Results As shown in Table 2, the trained reference-based RM (COMET) and the referenceless RM (COMETKIWI) yield similar results. For LLM-based external judges, explicitly providing reference translations typically leads to slightly higher performance compared to the referenceless setting.
5.2 RQ2: Reference vs. Referenceless Rewarding

We further examine the influence of reference translations on reward signals and their subsequent impact on MT performance. Specifically, we introduce a variant of SSR-Zero that includes a reference translation in the judge prompt. The reference translation is the original target sentence from the training dataset. We use the same setting for the LLM-as-a-judge baselines.

Results As shown in Table 2, the trained reference-based RM (COMET) and referenceless RM (COMETKIWI) yield similar results. For LLM-based external judges, explicitly providing reference translations typically leads to slightly higher performance than the referenceless setting. In self-reward training, the use of reference translations marginally improves performance in ZH→EN translation (from 87.37 to 87.73, +0.4%) but | https://arxiv.org/abs/2505.16637v2 |
lowers the results for EN→ZH translation (from 86.39 to 85.25, -1.3%). In general, introducing reference translations into different reward methods does not consistently improve the model's performance, except when using external LLMs as judges. In particular, adding external references does not provide significant performance gains for SSR.

6 Conclusion

In this work, we propose SSR, a simple yet effective reinforcement learning approach for machine translation. SSR relies on neither external reward models (RMs) nor reference data; instead, it leverages the actor model itself as a judge to generate rewards and optimizes its performance using online GRPO training. Initialized from the uninstructed Qwen2.5-7B backbone, our SSR-Zero-7B model outperforms many open-source MT-specific LLMs, such as TowerInstruct-13B, and larger general LLMs, such as Qwen2.5-32B-Instruct, across different English↔Chinese translation benchmarks. Our analysis shows that SSR is more effective than using same-size external LLM-as-a-judge models. Although SSR alone slightly underperforms dedicated RMs (i.e., COMET and COMETKIWI) trained on extensive annotated MT-evaluation data, combining SSR with these RMs yields additional improvements. Our best-performing model, SSR-X-Zero-7B, incorporates SSR with COMET and achieves state-of-the-art results on English↔Chinese translation benchmarks. These findings provide in-depth insight into reward selection for MT via RL and highlight that strong pre-trained LLMs inherently possess reliable MT evaluation capabilities, which can be leveraged to enhance their own translation performance. Our work demonstrates the potential of developing self-improving RL methods that reduce the dependency on external supervision from humans or trained RMs. | https://arxiv.org/abs/2505.16637v2 |
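The conclusion's recipe (actor-as-judge rewards fed into online GRPO) centers on a group-relative advantage computation. A minimal sketch of that step, following the GRPO formulation of Shao et al. (2024), is shown below; the reward values and group size are illustrative, not taken from the paper's runs.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: center each sampled translation's reward on its
    sampling group's mean and scale by the group's standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]

# Self-judged rewards for eight sampled translations of one source sentence.
rewards = [0.62, 0.80, 0.55, 0.91, 0.78, 0.40, 0.88, 0.73]
advantages = group_relative_advantages(rewards)
print([round(a, 2) for a in advantages])
```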
Limitations

While our work demonstrates the effectiveness of self-reward training for MT, the generalizability of this technique across different languages, model architectures, and model sizes remains unexplored. Specifically, our experiments are limited to the English-Chinese language pair, so it remains unknown whether SSR-based training generalizes effectively to lower-resource languages beyond English and Chinese. Furthermore, previous research has indicated that R1-Zero-like training shows varying levels of performance across different model families (Gandhi et al., 2025). It is thus unclear whether SSR can consistently incentivize strong MT capabilities from weaker pre-trained models or from models with sizes other than 7B parameters. Moreover, our current focus on zero-shot prompting leaves room for exploring the impact of alternative prompting methods, such as Chain-of-Thought (CoT) and few-shot prompting, for both SSR and external LLM-as-a-judge reward models. However, recent work by Qian et al. (2024) suggests that neither CoT nor 5-shot prompting outperforms zero-shot prompting in MT evaluation using 7B models with similar evaluation prompts. Finally, recent research (Liu et al., 2025) indicates that LLM-as-a-judge frameworks can benefit from test-time scaling techniques such as voting. We leave an exploration of these techniques in the context of SSR-based training for future work.

References

Duarte M Alves, José Pombal, Nuno M Guerreiro, Pedro H Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, and 1 others. 2024. Tower: An open multilingual large language model for translation-related tasks. arXiv preprint arXiv:2402.17733.
Anthropic. 2024. [link]. | https://arxiv.org/abs/2505.16637v2 |
Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan, and 1 others. 2024. Aya 23: Open weight releases to further multilingual progress. arXiv preprint arXiv:2405.15032.
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in Neural Information Processing Systems, 28.
Andong Chen, Yuchen Song, Wenxin Zhu, Kehai Chen, Muyun Yang, Tiejun Zhao, and 1 others. 2025. Evaluating o1-like LLMs: Unlocking reasoning for translation through comprehensive analysis. arXiv preprint arXiv:2502.11544.
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. 2024. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335.
Marta R Costa-Jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, and 1 others. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.
Menglong Cui, Pengzhi Gao, Wei Liu, Jian Luan, and Bin Wang. 2025. Multilingual machine translation with open large language models at practical scale: An empirical study. arXiv preprint arXiv:2502.02481.
Zhaopeng Feng, Shaosheng Cao, Jiahan Ren, Jiayuan Su, Ruizhe Chen, Yan Zhang, Zhe Xu, Yao Hu, Jian Wu, and Zuozhu Liu. 2025. MT-R1-Zero: Advancing LLM-based machine translation via R1-Zero-like reinforcement learning. arXiv preprint arXiv:2504.10160.
Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D Goodman. 2025. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307.
Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, and 1 others. 2024. Why not transform chat large language models to non-English? arXiv preprint arXiv:2405.13923.
Nuno M Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, and André FT Martins. 2024. xCOMET: Transparent machine translation evaluation through fine-grained error detection. Transactions of the Association for Computational Linguistics, 12:979–995.
Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.
Minggui He, Yilun Liu, Shimin Tao, Yuanchang Luo, Hongyong Zeng, Chang Su, Li Zhang, Hongxia Ma, Daimeng Wei, Weibin Meng, and 1 others. 2025. R1-T1: Fully incentivizing translation capability in LLMs via reasoning learning. arXiv preprint arXiv:2502.19735.
Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.
Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, and 1 others. 2024. OpenAI o1 system card. arXiv preprint arXiv:2412.16720. | https://arxiv.org/abs/2505.16637v2 |
Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art | https://arxiv.org/abs/2505.16637v2 |
evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193–203, Tampere, Finland. European Association for Machine Translation.
Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, and Yu Wu. 2025. Inference-time scaling for generalist reward modeling. arXiv preprint arXiv:2504.02495.
Yinquan Lu, Wenhao Zhu, Lei Li, Yu Qiao, and Fei Yuan. 2024. LLaMAX: Scaling linguistic horizons of LLM by enhancing translation capabilities beyond 100 languages. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 10748–10772, Miami, Florida, USA. Association for Computational Linguistics.
Shenbin Qian, Archchana Sindhujan, Minnie Kabra, Diptesh Kanojia, Constantin Orăsan, Tharindu Ranasinghe, and Frédéric Blain. 2024. What do large language models need for machine translation evaluation? arXiv preprint arXiv:2410.03278.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Ricardo Rei, Nuno M. Guerreiro, José Pombal, Daan van Stigt, Marcos Treviso, Luisa Coheur, José G. C. de Souza, and André Martins. 2023. Scaling up CometKiwi: Unbabel-IST 2023 submission for the quality estimation shared task. In Proceedings of the Eighth Conference on Machine Translation, pages 841–848, Singapore. Association for Computational Linguistics.
Ricardo Rei, José Pombal, Nuno M. Guerreiro, João Alves, Pedro Henrique Martins, Patrick Fernandes, Helena Wu, Tania Vaz, Duarte Alves, Amin Farajian, Sweta Agrawal, Antonio Farinhas, José G. C. De Souza, and André Martins. 2024a. Tower v2: Unbabel-IST 2024 submission for the general MT shared task. In Proceedings of the Ninth Conference on Machine Translation, pages 185–204, Miami, Florida, USA. Association for Computational Linguistics.
Ricardo Rei, José Pombal, Nuno M Guerreiro, João Alves, Pedro Henrique Martins, Patrick Fernandes, Helena Wu, Tania Vaz, Duarte Alves, Amin Farajian, and 1 others. 2024b. Tower v2: Unbabel-IST 2024 submission for the general MT shared task. In Proceedings of the Ninth Conference on Machine Translation, pages 185–204.
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, and 1 others. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.
Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan, and Feng Zhang. 2025. FastCuRL: Curriculum reinforcement learning with progressive context extension for efficient training of R1-like reasoning models. arXiv preprint arXiv:2503.17287.
Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, and 1 others. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118. | https://arxiv.org/abs/2505.16637v2 |
Qwen Team. 2025. QwQ-32B: Embracing the power of reinforcement learning.
Jiaan Wang, Fandong Meng, Yunlong Liang, and Jie Zhou. 2024. DRT: Deep reasoning translation via long chain-of-thought. arXiv preprint arXiv:2412.17498.
Jiaan Wang, Fandong Meng, and Jie Zhou. 2025. Deep reasoning translation via reinforcement learning. arXiv preprint arXiv:2504.10187.
BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, and 1 others. 2022. BLOOM: A 176B-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
Lijun Wu, Li Zhao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2017. Sequence prediction with unlabeled data by reward function learning. In IJCAI, pages 3098–3104.
Tianhao Wu, Weizhe Yuan, Olga Golovneva, Jing Xu, Yuandong Tian, Jiantao Jiao, Jason Weston, and Sainbayar Sukhbaatar. 2024. Meta-rewarding language models: Self-improving alignment with LLM-as-a-meta-judge. arXiv preprint arXiv:2407.19594.
Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, and Chong Luo. 2025. Logic-RL: Unleashing LLM reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768.
An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and 1 others. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388.
An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, and 1 others. 2024a. Qwen2.5 technical report. arXiv e-prints, pages arXiv–2412.
Wen Yang, Junhong Wu, Chen Wang, Chengqing Zong, and Jiajun Zhang. 2024b. Language imbalance driven rewarding for multilingual self-improving. arXiv preprint arXiv:2410.08964.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, and 1 others. 2025. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476.
Qingyang Zhang, Haitao Wu, Changqing Zhang, Peilin Zhao, and Yatao Bian. 2025a. Right question is already half the answer: Fully unsupervised LLM reasoning incentivization. arXiv preprint arXiv:2504.05812.
Shimao Zhang, Xiao Liu, Xin Zhang, Junxiao Liu, Zheheng Luo, Shujian Huang, and Yeyun Gong. 2025b. Process-based self-rewarding language models. arXiv preprint arXiv:2503.03746.
Andrew Zhao, Yiran Wu, Yang Yue, Tong Wu, Quentin Xu, Matthieu Lin, Shenzhi Wang, Qingyun Wu, Zilong Zheng, and Gao Huang. 2025. Absolute Zero: Reinforced self-play reasoning with zero data. arXiv preprint arXiv:2505.03335.
Yiyang Zhou, Zhiyuan Fan, Dongjie Cheng, Sihan Yang, Zhaorun Chen, Chenhang Cui, Xiyao Wang, Yun Li, Linjun Zhang, and Huaxiu Yao. 2024. Calibrated self-rewarding vision language models. arXiv preprint arXiv:2405.14622.
Wei Zou, Sen Yang, Yu Bao, Shujian Huang, Jiajun Chen, and Shanbo Cheng. 2025. Trans-Zero: Self-play incentivizes large language models for multilingual translation without parallel data. arXiv preprint arXiv:2504.14669. | https://arxiv.org/abs/2505.16637v2 |
Collaboration among Multiple Large Language Models for Medical Question Answering

1st Kexin Shang, College of Computing & Informatics, Drexel University, Philadelphia, USA, ks4252@drexel.edu
2nd Chia-Hsuan Chang, College of Computing & Informatics, Drexel University, Philadelphia, USA, shane.chang.tw@gmail.com
3rd Christopher C. Yang, College of Computing & Informatics, Drexel University, Philadelphia, USA, chris.yang@drexel.edu

Abstract—Empowered by vast internal knowledge reservoirs, the new generation of large language models (LLMs) demonstrates untapped potential to tackle medical tasks. However, insufficient effort has been made toward eliciting a synergistic effect from the expertise and backgrounds of multiple LLMs. In this study, we propose a multi-LLM collaboration framework tailored to a medical multiple-choice question dataset. Through post-hoc analysis of three pre-trained LLM participants, our framework is shown to boost every LLM's reasoning ability as well as alleviate their divergence across questions. We also measure an LLM's confidence when it confronts adversary opinions from other LLMs and observe a concurrence between an LLM's confidence and its prediction accuracy.

Index Terms—Large Language Model, Medical Question Answering, AI collaboration, Multi-agent, Prompting.

I. INTRODUCTION

The rapid evolution of generative AI, and specifically of large language models (LLMs), has spurred interest in the adoption of these technologies in various fields. In the medical domain, several applications of LLMs have been studied extensively, including medical question answering (QA), clinical decision support, information extraction from clinical notes, and more. In addition to large general-purpose LLMs such as OpenAI's GPT, Google's Gemini, Meta's Llama, and Anthropic's Claude, multiple domain-specific LLMs have emerged, fine-tuned for specific tasks. For clinical tasks, these include Med42-70B [1], ClinicalCamel-70B [2], Meditron-70B [3], and MedAlpaca-13B [4], among others. While some of these LLMs may overlap in their fine-tuning datasets, they adopt distinct data preprocessing and training methods and vary in both size and architecture. Different studies have applied these various LLMs to the task of medical QA and compared their performance, but few have adopted multi-LLM approaches for this task. Additionally, some studies have proposed mechanisms to enhance or improve the reasoning of LLMs for medical QA, but these improvements are typically based on generations of the same LLM. Given the critical role of clinical reasoning in medical tasks, and the variable performance and reasoning of different LLMs, there is potential to explore whether multi-LLM collaboration for medical QA can reduce errors and improve performance.

To explore a potential collaboration approach for enhancing the performance of multiple LLMs on the medical QA dataset USMLE, we design and evaluate a framework in which LLMs work collaboratively by sharing their reasonings on a medical question. Our results validate that this collaboration process efficiently mitigates the disagreement among LLMs and leads to improved individual and overall performance on the task. Besides, when confronted with conflicting answers from the other LLMs, we consider an LLM's tendency to "concede" (change its answer) or to "insist" (not change its answer) as a measure of its inherent "confidence", observing a positive pattern between an LLM's reasoning capability and its confidence degree. | https://arxiv.org/abs/2505.16648v1 |
Lastly, we observe that all LLMs show a gap in consistency between the questions they answered correctly and those they answered incorrectly. | https://arxiv.org/abs/2505.16648v1 |
II. RELATED WORK

A. Medical QA with LLMs

Multiple studies have evaluated and reported on the use of LLMs in medical QA using different LLMs and datasets. These include medical textbook questions on ChatGPT [5], medical licensing exam questions on ChatGPT, GPT-3, and GPT-4 [6]–[9], and real-world patient questions on ChatGPT and on Google Bard [10], [11]. These studies have primarily focused on evaluating the performance of specific LLMs, with some additionally analyzing the reasoning provided. Lucas et al. [12] proposed a mechanism to improve the reasoning and consistency of LLMs in medical QA using an ensemble reasoning approach. However, this method, while promising, involves LLMs reviewing their own reasonings, which can cause LLM hallucinations and is susceptible to confirmation bias [13], [14]. Additionally, it is computationally demanding, requiring multiple calls to the API. Multi-LLM collaboration on medical QA has not been extensively explored as a mechanism for improved reasoning and performance.

B. LLM collaboration

Collaboration is commonly understood as the act of working together to complete a given task or achieve a common goal. In the context of LLMs, the concept of collaboration can take on different forms. Feng et al. [15] propose two multi-LLM collaboration-based pathways, "cooperate" and "compete", to identify and mitigate knowledge gaps in LLMs. They enable "abstention", where an LLM, after interacting with other LLMs, should abstain from answering a question it would answer incorrectly. In the cooperation setting, a "judge" LLM compares feedback from other LLMs to determine whether a particular LLM should abstain from answering. In the compete setting, an LLM is challenged by others with alternative answers, and if on average it is "swayed" by these conflicting answers, then it should abstain. Through this abstention mechanism, their multi-LLM collaboration has the potential to mitigate hallucinations.

Zhang et al. [16] take a psycho-social view of LLM collaboration. They conceptualize LLM agents as having the traits of being either "easy-going" or "overconfident" and craft prompts to instill these traits. They then define two "thinking patterns" in multi-round LLM conversations, "debate" and "reflection", and evaluate different collaborative strategies by simulating a "machine society" comprising different LLM agents and permuting the two thinking patterns.

Fang et al. [17] propose "CounterFactual Multi-Agent Debate (CFMAD)", a framework where a "critic" LLM is crafted to always hold counterfactual viewpoints against the "assistant" LLM. These two LLMs debate every possible choice option of a multiple-choice question, and their debate transcript is subsequently examined by a third-party "judge" LLM to decide the final answer. Their experimental results suggest that CFMAD helps reduce or eliminate hallucinations in LLMs.

In the domain of medical QA, Yang et al. [18] investigate an ensemble approach, LLM-Synergy, comparing two different ensembling methods and finding improvements in accuracy. While their method does not involve the constituent LLMs exchanging information, they leverage the diversity and capabilities of different LLMs for medical QA. However, their approach does not incorporate the LLMs' reasoning.

The highlighted studies propose and evaluate various forms of multi-LLM collaboration, using differing mechanisms. However, we note two gaps. | https://arxiv.org/abs/2505.16648v1 |
While some of the collaboration mechanisms involve debate or exchanging opinions on a question, they may not use the chain-of-thought [19], [20] approach to obtain the LLM's reasoning for the generated answer, which has been demonstrated to improve model performance. Additionally, there is a lack of studies specifically looking at medical QA, where clinical reasoning is a crucial part of any decision making and question answering.

III. METHODS AND MATERIALS

A. Data Source and Preparation

We use a medical question dataset from the sample exams of the United States Medical Licensing Examination (USMLE). This exam contains three steps that assess medical professionals' foundational knowledge across medical science, clinical medicine, biomedical science, and other domain skills [21]. Each step has a different emphasis:
• Step 1 assesses important concepts of the sciences basic to the practice of medicine.
• Step 2 assesses an examinee's ability to apply medical knowledge, skills, and understanding of clinical science essential for the provision of patient care under supervision.
• Step 3 assesses the medical knowledge and understanding of biomedical and clinical science essential for the unsupervised practice of medicine.

Unlike the actual USMLE exam, the sample exams comprise only multiple-choice questions, providing a comprehensive dataset that serves our purpose of studying multi-LLM collaboration in medical QA. Since all LLMs used in this study are text-only models, we follow the data processing proposed in previous work [12] to filter out questions with complex structure, such as those containing tables, charts, text-based patient records, or pictures. After streamlining our dataset, the distribution of questions is shown in Table I.

TABLE I: Sample Size of USMLE Sub-dataset
USMLE Sample Questions    Step 1    Step 2    Step 3
Number of questions       87        100       118

Algorithm 1 Zero-shot Chain-of-Thought with Self-Consistency (ZS-CoT-SC)
Require: a question q, an LLM φ, a repeat number n, a summarizer ψ
1: for i ∈ {1, …, n} do                           ▷ Apply self-consistency
2:   r_i = φ(T_reasoning(q))
3:   y_i = φ(T_answer(q, r_i))
4: end for
5: ŷ_{q,φ} = MajorityVote({y_i | 1 ≤ i ≤ n})       ▷ Pick the most consistent answer
6: r̂_{q,φ} = {r_i | 1 ≤ i ≤ n, y_i = ŷ_{q,φ}}      ▷ Collect reasonings associated with the majority vote
7: ŝ_{q,φ} = ψ(T_summary(r̂_{q,φ}))                ▷ Summarize the list of reasonings
8: return R_{q,φ} = (ŷ_{q,φ}, ŝ_{q,φ})

Algorithm 2 Collaboration Procedure
Require: a set of disagreed questions Q_dis, a set of LLMs Φ, a summarizer ψ, initial predictions {R_{q,φ} | q ∈ Q_dis, φ ∈ Φ}
1: while P_con ≤ 80% do
2:   for q ∈ Q_dis do
3:     Trans_q = {R_{q,φ} | φ ∈ Φ}
4:     for φ ∈ Φ do
5:       R_{q,φ} = ZS-CoT-SC(q + Trans_q, φ, ψ)    ▷ Re-apply Algorithm 1 with a different prompt T_reasoning_review
6:     end for
7:   end for
8:   Update Q_dis and P_con based on the updated predictions
9: end while | https://arxiv.org/abs/2505.16648v1 |
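Algorithm 1 maps directly onto a short routine. The following minimal Python sketch assumes llm and summarizer are callables from a prompt string to a completion string, and inlines simplified stand-ins for T_reasoning, T_answer, and T_summary; all names are illustrative, not the authors' code.

```python
from collections import Counter

def zs_cot_sc(question, llm, summarizer, n=10):
    """Zero-shot CoT with self-consistency (Algorithm 1)."""
    answers, reasonings = [], []
    for _ in range(n):
        reasoning = llm(f"{question}\nLet us think step by step. First,")  # T_reasoning
        letter = llm(f"{question}\n{reasoning}\nTherefore, the answer (letter) is:")  # T_answer
        answers.append(letter.strip()[:1].upper())
        reasonings.append(reasoning)
    majority, _count = Counter(answers).most_common(1)[0]  # majority vote (line 5)
    supporting = [r for r, y in zip(reasonings, answers) if y == majority]  # line 6
    summary = summarizer(  # T_summary (line 7)
        "Summarize the following opinions into one paragraph:\n" + "\n---\n".join(supporting))
    return majority, summary  # consolidated context R_{q,phi} (line 8)
```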
B. Iterative Collaboration Framework (ICF)

We propose a multi-LLM Iterative Collaboration Framework (ICF) to facilitate the exchange of information about multiple-choice questions among LLMs. As shown in Fig. 1, ICF consists of two parts: Zero-shot Chain-of-Thought with Self-Consistency (ZS-CoT-SC) and a Collaboration Loop.

1) Zero-shot Chain-of-Thought with Self-Consistency (ZS-CoT-SC): In the first step of ICF, we apply the self-consistency decoding strategy and zero-shot chain-of-thought prompting [20] (ZS-CoT) to LLM inference. We define this process as "Zero-shot Chain-of-Thought with Self-Consistency (ZS-CoT-SC)" (Fig. 1a); its pseudocode is shown in Algorithm 1. Denoting by Q a collection of questions, for each question q ∈ Q we use a base LLM φ to generate n = 10 responses (lines 1–3). During this process, ZS-CoT [20] is used to first elicit a step-wise reasoning with prompt template T_reasoning and then a letter choice with prompt template T_answer. Next, the most consistent letter choice out of the n generations is selected as the majority vote ŷ_{q,φ} for q given φ (line 5). After identifying ŷ_{q,φ}, all of its associated reasonings are extracted and concatenated (line 6). To reduce the information redundancy of the concatenated reasonings, they are summarized into one paragraph, ŝ_{q,φ}, by an external summarizer LLM ψ with prompt template T_summary (line 7). At the end of the ZS-CoT-SC step, each question q is paired with a consolidated context R_{q,φ} containing its majority vote ŷ_{q,φ} and the summary of its reasonings ŝ_{q,φ} (line 8). The prompt templates T_reasoning, T_answer, and T_summary can be found in Figs. 2 and 3.

Having collected the results of Q across all LLM participants in the ZS-CoT-SC step, we determine whether the LLMs have consensus or disagreement on q by aligning their majority votes: if all LLMs propose the same majority vote, they are said to reach consensus on q; if at least one LLM proposes a different majority vote than the others, they are said to have disagreement on q. Following this process, Q is further divided into two subsets after ZS-CoT-SC:
• Q_con: questions on which all LLMs have consensus
• Q_dis: questions on which the LLMs disagree

On top of this classification scheme, we define the consensus rate P_con across all LLMs as:

$$P_{\mathrm{con}} = \frac{|Q_{\mathrm{con}}|}{|Q|} \times 100\% \quad (1)$$

P_con is used as a termination criterion to decide whether team members diverge enough on the dataset Q to necessitate an opinion exchange in the subsequent collaboration loop (Fig. 1b). We require P_con ≥ 80% at this point to control the iteration of the collaboration loop appropriately: if the LLMs fail to reach at least an 80% consensus rate at the end of ZS-CoT-SC, a collaboration loop kicks off.

2) Collaboration Loop: In the next procedure, the collaboration loop (Algorithm 2), the LLMs resolve their divergence on all disagreed questions by exchanging the initial predictions generated in the preceding ZS-CoT-SC step. For each disagreed question q ∈ Q_dis, we aggregate the predictions across all LLMs and their associated summarized reasonings into an integrated transcript Trans_q (line 3). This co-produced transcript lets the entire team share expansive reasoning pathways from one another. Then ZS-CoT-SC is reused to have each base model review each disagreed question and the corresponding transcript. Unlike the two sequential prompts T_reasoning and T_answer used in the initial ZS-CoT-SC, we use a single prompt template T_reasoning_review (Fig. 2) for ZS-CoT-SC to guide the LLMs to critically examine the reasonings from all team members and re-decide their answers (line 5). As an LLM's new answer might or might not change, we update Q_dis and re-calculate the new consensus rate P_con (line 8). This collaboration loop continues to circulate Q_dis among the LLMs until P_con meets the termination criterion. | https://arxiv.org/abs/2505.16648v1 |
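Correspondingly, the collaboration loop of Algorithm 2 can be sketched as follows, reusing the zs_cot_sc sketch above. The transcript format and the dictionary bookkeeping are our assumptions; the paper reports convergence within two iterations, so no explicit round cap is added here.

```python
def collaborate(questions, llms, summarizer, threshold=0.80):
    """Collaboration loop (Algorithm 2): circulate disagreed questions, with
    each model's (answer, summary) pair shared through a joint transcript."""
    preds = {q: {name: zs_cot_sc(q, llm, summarizer) for name, llm in llms.items()}
             for q in questions}
    while True:
        disagreed = [q for q in questions
                     if len({ans for ans, _ in preds[q].values()}) > 1]
        p_con = 1.0 - len(disagreed) / len(questions)  # Equation (1), as a fraction
        if p_con >= threshold:
            return preds
        for q in disagreed:
            transcript = "\n".join(
                f"Expert {name} chose {ans}. Reasoning: {summary}"
                for name, (ans, summary) in preds[q].items())
            for name, llm in llms.items():
                # T_reasoning_review: re-decide after reviewing all experts' opinions
                preds[q][name] = zs_cot_sc(f"{q}\n{transcript}", llm, summarizer)
```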
C. LLM Participants

It is our purpose to construct an LLM team with diversity, hoping every model can leverage its expertise and fill the knowledge gaps of its teammates across various medical fields. Therefore, three open-source LLMs from distinct backgrounds are selected to assemble the multi-LLM system: med42-70B-v1 [1], ClinicalCamel-70B [2], and Mixtral 8x7B [22]. For brevity, we refer to them by their short names, Med42, ClinCamel, and Mixtral, in the rest of this paper.

The first model, Med42, has 70B parameters and is specifically refined from the Llama-2-70B [23] architecture for medical tasks. It is fine-tuned on 411,064 medical samples and 295,649 general samples, ranging from clinical articles, dialogue, and medical professional tests to clinical reports and more. Another medically specialized LLM, ClinCamel, has the same parameter size of 70B and uses the same base architecture, Llama-2-70B, with the difference that it is fine-tuned on an extensive collection of clinical dialogue and uses the QLoRA technique [24] during fine-tuning. Mixtral is a general-purpose mixture-of-experts (MoE) model with 46.7B parameters. In addition to demonstrating strong performance on multilingual understanding tasks, Mixtral has a 10240-token context window, which makes it an ideal LLM for retrieving and summarizing information. Thus, we create two separate instances of Mixtral for different purposes: one is a team member in ICF, and the other is an external helper agent ψ that summarizes opinions from the other two teammates (line 7, Algorithm 1). All three models are reported to either match or surpass the performance of OpenAI's GPT-3.5 LLM on most benchmarks [1], [2], [22].

As LLMs can be very sensitive to prompt formatting [25], we tailor to each model the most suitable prompts for zero-shot and collaboration inference by testing a variety of prompting formats on 30 randomly selected sample questions. The prompts achieving the highest accuracy are selected as the templates for this experiment. We set the temperature to 1 for all LLMs and retain all other parameters at their default values. We quantize the three LLMs to int8 precision to save computational resources and conduct our experiment on two NVIDIA A40 GPUs for Mixtral inference and four NVIDIA A40 GPUs for Med42 and ClinCamel inference.

Fig. 1: The ICF framework, consisting of two parts: (a) ZS-CoT-SC and (b) the Collaboration Loop. | https://arxiv.org/abs/2505.16648v1 |
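For reproducibility, int8 loading of this kind is commonly done with Hugging Face Transformers and bitsandbytes; a minimal sketch follows. The Hub identifier is an assumption (substitute the exact checkpoints used in the paper), and ClinCamel and Mixtral load the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "m42-health/med42-70b"  # assumed Hub id for Med42
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # int8, as in the paper
    device_map="auto",               # shard across the available A40 GPUs
    torch_dtype=torch.float16,
)
prompt = "Let us think step by step. First,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, do_sample=True, temperature=1.0, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```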
D. Metrics

1) Accuracy: We use prediction accuracy as a metric of an LLM's reasoning capacity because it is the major metric for the medical QA task [17], [26], [27]. Accuracy is calculated as the number of correctly predicted multiple-choice questions divided by the total number of questions in our dataset.

2) Confidence: Each time a base LLM reviews conflicting reasonings and makes a new decision in the collaboration loop, it may "concede" (change its answer) or "insist" (keep its answer). We consider this tendency of an LLM its intrinsic "confidence". We define Q^insist_φ ⊆ Q_dis as the disagreed questions on which LLM φ insists on its original answer, and we notice that an LLM's choice is significantly affected by a group-level factor: whether there is teammate support. Our preliminary experiment shows that a base LLM "insists" less frequently when all other LLMs are against it but more frequently if at least one LLM supports it. In other words, an LLM possesses a different "confidence" level depending on the teammate support level. To address this gap, we first classify Q_dis by the level of teammate support:
• S+: disagreed questions where at least one teammate holds the same answer as the base model (with support)
• S−: disagreed questions where no teammate proposes the same answer as the base model (without support)

Then we intersect Q^insist_φ with S+ and S− respectively, creating a vector p^insist_φ:

$$p^{\mathrm{insist}}_{\phi} = \left( \frac{|S^{+} \cap Q^{\mathrm{insist}}_{\phi}|}{|S^{+}|},\; \frac{|S^{-} \cap Q^{\mathrm{insist}}_{\phi}|}{|S^{-}|} \right) \quad (2)$$

Ranging from 0 to 1, a larger component of p^insist_φ indicates that the LLM insists more consistently in the corresponding situation, S+ or S−. Given that there are only two levels of teammate support, we average the components of p^insist_φ to measure the overall confidence of φ:

$$\mathrm{Confidence} = \frac{\sum p^{\mathrm{insist}}_{\phi}}{|p^{\mathrm{insist}}_{\phi}|} \quad (3)$$

With Equation 3 we have built a standardized confidence metric for a model φ: it attains the maximum confidence of 1 if the model insists every time in every scenario, and the minimum confidence of 0 if it always concedes in case of any conflict.

ZS-CoT-SC Prompt
T_reasoning
<|System|>: You are an expert medical professional who helps to reason about multiple choice questions.
<|Question|>: {question} {choices}
<|Assistant|>: Let us think step by step. First, [step-wise reasoning from the base LLM] ...
T_answer
Therefore, among choices {first letter} through {last letter}, the answer (letter) is: A

Collaboration Loop Prompt
T_reasoning_review
<|System|>: You are a medical professional who helps resolve disagreements among other experts regarding a medical question by critically reviewing their reasoning.
<|Prompter|>: {question} {choices} {answers and reasonings integrated from all LLMs}
After reviewing the experts' choices and reasonings, which do you agree with and why? Please first output the answer (letter) and then your reasoning:
<|Assistant|>:

Fig. 2: Prompt templates of Med42 in the ZS-CoT-SC and collaboration loop of ICF. (Upper) In ZS-CoT-SC, the two sequential prompts T_reasoning and T_answer jointly formulate the basic ZS-CoT template, where the bracketed part is the base LLM's response. Each ZS-CoT template is then applied 10 times per question via self-consistency. (Lower) The collaboration loop applies T_reasoning_review for every recursive ZS-CoT-SC, presenting all LLMs' reasoning pathways and asking for a re-decision at the same time.

Summarization Prompt
T_summary
<s>[INST] You are an expert medical professional who helps to summarize opinions from a panel of experts regarding a medical multiple choice question. {majority answer and aggregated 10 reasonings from one base LLM} Please read all the responses above. Then extractively summarize their opinions into one paragraph. [/INST] Summary:

Fig. 3: Prompt template of the summarizer LLM ψ. In this experiment, we deploy Mixtral to summarize the repetitive reasoning pathways produced by self-consistency sampling. The bracketed part is an integrated context of the majority vote and the n = 10 generated reasonings from one LLM participant in ICF. | https://arxiv.org/abs/2505.16648v1 |
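Returning to the confidence metric: Equations (2) and (3) reduce to a few set operations. A small sketch with toy inputs follows (all values illustrative):

```python
def confidence(q_insist, s_plus, s_minus):
    """Equations (2)-(3): insist rates with and without teammate support,
    averaged into one confidence score in [0, 1]."""
    p_insist = (len(s_plus & q_insist) / len(s_plus),
                len(s_minus & q_insist) / len(s_minus))
    return sum(p_insist) / len(p_insist)

# Ten disagreed questions: six with teammate support, four without.
s_plus, s_minus = set(range(6)), set(range(6, 10))
q_insist = {0, 1, 2, 6}  # questions where the model kept its original answer
print(confidence(q_insist, s_plus, s_minus))  # (3/6 + 1/4) / 2 = 0.375
```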
3) Consistency: Proposed by Wang et al. [28], self-consistency is a decoding strategy that induces an LLM to generate a diverse set of reasoning paths and determine the most consistent answer. Through experiments, we observed that the consistency level is a unique characteristic varying across different LLMs. We use an arithmetic-average metric to measure an LLM's consistency on a multiple-choice task. To calculate the consistency, we denote by Count(ŷ_{q,φ}) = |{y_i | 1 ≤ i ≤ n, y_i = ŷ_{q,φ}}| the frequency of the majority vote ŷ_{q,φ} of a question q by the LLM φ, where n is the self-consistency parameter controlling the number of repeats. For example, Count(ŷ_{q,φ}) = 6 means that six of the n votes support the majority vote for the question q predicted by LLM φ. With this, we define the consistency level of an LLM as the arithmetic average of Count(ŷ_{q,φ}) across the dataset Q, normalized by the repeat number n:

$$\mathrm{consistency\ of\ LLM\ } \phi = \frac{\sum_{q \in Q} \frac{\mathrm{Count}(\hat{y}_{q,\phi})}{n}}{|Q|} \quad (4)$$

An LLM has the maximum consistency level of 1 if it consistently gives the same letter choice for each multiple-choice question, no matter how many times it is repeated. | https://arxiv.org/abs/2505.16648v1 |
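Equation (4) is equally compact: given the n sampled letter choices per question, the consistency score is the mean majority-vote fraction. A small sketch with illustrative votes:

```python
from collections import Counter

def consistency(votes_per_question, n=10):
    """Equation (4): average majority-vote count across questions, divided by n."""
    majority_counts = [Counter(v).most_common(1)[0][1] for v in votes_per_question]
    return sum(c / n for c in majority_counts) / len(majority_counts)

votes = [list("AAAAAABBCC"), list("DDDDDDDDDD")]  # two questions, ten samples each
print(consistency(votes))  # (6/10 + 10/10) / 2 = 0.8
```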
IV. RESULTS AND ANALYSIS

A. Consensus Convergence

In the initial ZS-CoT-SC round, the three LLMs reach consensus on only 50.82% of questions on average from Q, which consists of 305 sample questions, but this consensus rate surges to 82.62% after two collaboration loop iterations (Table II), which confirms our hypothesis that collaboration mitigates the disagreements among LLMs.

TABLE II: Consensus Convergence (Consensus Rate P_con)
USMLE (|Q|)      ZS-CoT-SC    Collaboration
Step 1 (87)      56.32        86.21 (↑29.89)
Step 2 (100)     41.00        74.00 (↑33.00)
Step 3 (118)     55.08        87.29 (↑32.21)
Average          50.82        82.62 (↑31.80)

B. Accuracy of Med42, ClinCamel, and Mixtral

Taking the accuracy of ZS-CoT-SC as the baseline, we observe that every LLM gains a solid improvement after going through two collaboration loops (Table III). This improvement resonates with the conclusions of prior studies regarding LLMs' surprising team-working ability [26]. It is worth noting that our simple framework does not rely on back-and-forth communication among LLMs yet still proves to boost their individual performance.

TABLE III: Accuracy (%) of Med42, ClinCamel, and Mixtral
(a) Med42
USMLE      ZS-CoT-SC    Collaboration
Step 1     81.61        85.06 (↑3.45)
Step 2     64.00        70.00 (↑6.00)
Step 3     70.34        76.27 (↑5.93)
Average    71.48        76.72 (↑5.24)
(b) ClinCamel
USMLE      ZS-CoT-SC    Collaboration
Step 1     65.52        80.46 (↑14.94)
Step 2     64.00        67.00 (↑3.00)
Step 3     75.42        78.81 (↑3.39)
Average    68.85        75.41 (↑6.56)
(c) Mixtral
USMLE      ZS-CoT-SC    Collaboration
Step 1     72.41        79.31 (↑6.90)
Step 2     64.00        74.00 (↑10.00)
Step 3     77.12        78.81 (↑1.69)
Average    71.47        77.38 (↑5.91)

C. Confidence

In Table IV, we summarize the confidence of Med42, ClinCamel, and Mixtral. For each base model, its p^insist_φ is determined by checking how many original answers from ZS-CoT-SC remain unchanged after two collaboration loops, in disregard of disagreement from other teammates. Our results suggest Med42 is the most confident model with 0.51 confidence, followed by Mixtral (0.49). ClinCamel is the most soft-minded, with only 0.23 confidence.

TABLE IV: Confidence Level of Med42, ClinCamel, and Mixtral (ZS-CoT-SC vs. Collaboration)
P_insist (%)        Med42    ClinCamel    Mixtral
W/O support (S−)    45.33    12.99        42.24
W/ support (S+)     68.10    52.11        69.90
Confidence          0.57     0.33         0.56

D. Consistency

In Table V, we measure the LLMs' average majority-vote count as a metric of their consistency, defined in Equation 4. Due to the significant gap we observed, we separate the measurement into two situations, namely when the question was answered correctly and when it was answered incorrectly, and use the difference (∆) to indicate the variation of consistency between the two situations. Results from both ZS-CoT-SC and collaboration are presented.

TABLE V: Consistency of Med42, ClinCamel, and Mixtral (Average Majority Vote Count)
(a) ZS-CoT-SC
                  Med42    ClinCamel    Mixtral
When Correct      6.01     5.02         7.46
When Incorrect    1.95     1.59         1.65
Difference ∆      4.06     3.43         5.81
(b) Collaboration
                  Med42    ClinCamel    Mixtral
When Correct      8.63     4.63         9.22
When Incorrect    0.34     1.79         0.42
Difference ∆      8.29     2.84         8.80

First, all LLMs, without exception, tend to maintain a high self-consistency level when their majority vote hits the ground truth but become unstable when the majority vote misses. Second, LLMs with greater capability and confidence, such as Med42 and Mixtral, also show greater ∆, while ClinCamel, which slightly underperforms the other two models in both accuracy and confidence, also shows less separation of consistency between correct and incorrect questions. Third, compared to ZS-CoT-SC, the ∆ surges for Med42 and Mixtral after two collaboration loops but slightly declines for ClinCamel. In other words, stronger and more confident LLMs become even more affirmative with help from teammates, while the opposite holds for weaker and easy-going LLMs: they may fail to improve their consistency under such a collaborative work mode, since ClinCamel, for example, seems to lose its perception of which questions it can answer correctly.

V. DISCUSSION

This paper explores whether collaboration mitigates LLMs' divergence and improves their reasoning performance on medical multiple-choice questions through a tailored workflow, ICF. The results confirm both of our hypotheses and highlight some overlooked facts, which are discussed below.

A. Confidence Plays a Role in LLM Collaboration

Prior to our study, little research had cast light on an LLM's confidence in inter-LLM collaboration, and the importance of confidence in such an LLM team-working setting has been understudied. A relatively pioneering study in this field [29] uses the same LLM as the backbone to create three agents, prompted to be either "overconfident" or "easy-going", to build multiple LLM "societies" and compare their accuracy. However, this prompting-based confidence customization lacks quantitative design and is not comparable between different backbone LLMs.

Feng et al. [27], in their work on AbstainQA, briefly allude to the idea of LLM "confidence" in multi-LLM collaboration, stating that if an LLM, on average, changes its generations when presented with conflicting information, then it demonstrates low confidence. They suggest that LLMs, by design, should abstain from generating outputs with low confidence. However, they do not evaluate the role or effect of LLM confidence in multi-LLM collaboration.

In our study, we do observe a pattern suggesting the interplay of an LLM's confidence within a team. First of all, during ZS-CoT-SC, more confident LLMs yield higher accuracy when working alone, but this trend vanishes after two rounds of collaboration. However, in terms of improvement or progress in teamwork, less confident LLMs benefit more from collaboration, showing a larger accuracy increase between before and after collaboration. | https://arxiv.org/abs/2505.16648v1 |
Taking both Table III and Table IV into account, ClinCamel gained a substantial accuracy improvement of 6.56% with only 0.23 confidence, while Mixtral and Med42 gained lower accuracy improvements of 5.91% and 5.24% respectively, corresponding to their higher confidence of 0.49 and 0.51. We use Spearman's rank correlation [30] to compute a rank-based correlation coefficient r_s between accuracy improvement and confidence and obtain r_s = −0.5. While this moderate negative association is consistent with our observation, due to the small sample size (three LLMs) in our experiment the test does not reach statistical significance (p-value = 0.67), and we therefore prudently report this result as an optional reference only.

We discuss this "weaker LLMs make greater progress" phenomenon from the perspective of an LLM's innate assessment ability: a weaker LLM, although not good at reasoning on its own, may still possess an acute sense when judging stronger models' reasonings and successfully align itself with the correct answer in most cases. Resonating with our finding, a study by Khan et al. [31] shows weaker models (non-experts) achieving 32% higher accuracy when overseeing debates between stronger models (experts) than when predicting alone. They envision this kind of collaboration as a potential solution to aligning models in the absence of ground truth.

B. ICF Efficiency

It is noteworthy that our framework is computationally light compared to related studies. As Zhang et al. [16] discuss, conversation-based LLM collaboration has to deal with a trade-off between conversation length and computational cost: while an LLM's conformity generally grows as the number of conversation rounds increases, it demands correspondingly more computation. Other studies indicate that conversation length has a more complicated effect on LLMs' joint performance: Fang et al. [17] report that accuracy decreases as the number of debate rounds increases, and Du et al. [26] also point out that as debates get longer, LLMs struggle to fully process the input information.

Unlike the mentioned LLM collaboration frameworks, which require multiple rounds of reciprocal communication between LLMs, the "collaboration loop" in ICF (Fig. 1b) has a unidirectional circular structure that lets information flow through each base model once per round. This design greatly streamlines the pipeline and reduces the redundant encoding cost of LLM inference. In addition, to prevent conversation transcripts from becoming too bulky to process, we set up the termination criterion P_con to exit the collaboration loop as soon as the LLMs are no longer divergent. Lastly, the external summarizer LLM helps condense the repetitive information in the lengthy concatenated paragraphs generated by self-consistency, compacting the input length in advance while preserving essential information. With fewer tokens and a simpler workflow for jointly predicting each medical QA question, ICF fulfills its mission well and efficiently, improving the member LLMs' accuracy while offering a straightforward way to measure their confidence.

C. Consistency Adds to LLM's Interpretability

Referring to Table V, in our post-hoc analysis the LLMs exhibit an evident gap in consistency between questions they predicted correctly and incorrectly. This interesting trait offers a new perspective for interpreting LLM behavior: inconsistent answers might be signals of hallucination. When an LLM lacks factual knowledge about a task, it tends to answer a question with false content. | https://arxiv.org/abs/2505.16648v1 |
Since the generation of false content is relatively random, it is easily detected through self-consistency sampling as low consistency. Under this assumption, we reckon it is practical to consider an LLM's consistency a potential indicator of its correctness, which could be utilized in unsupervised learning.

VI. CONCLUSION

In summary, our study ensembles three LLMs in a crafted framework (ICF) for medical QA tasks, where the LLMs recursively exchange reasonings on disagreed questions until reaching a consensus threshold. We discover that collaboration boosts every LLM's accuracy as well as rapidly converging their disagreement. We propose an arithmetic metric for LLM confidence and find that LLMs with higher confidence tend to yield higher accuracy, but low-confidence LLMs gain more improvement in accuracy than overconfident LLMs. Lastly, LLMs' self-consistency is highly imbalanced between the questions they predict successfully and those they do not.

VII. ACKNOWLEDGMENTS

This work was supported in part by the National Science Foundation under Grants IIS-1741306 and IIS-2235548, and by the Department of Defense under Grant DoD W91XWH-05-1-023. This material is based upon work supported by (while serving at) the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. In addition, we would like to express our sincere gratitude to Mary M. Lucas at Drexel University College of Computing and Informatics for her assistance in reviewing, verifying, and providing insightful feedback on this work.

REFERENCES

[1] C. Christophe, P. K. Kanithi, P. Munjal, T. Raha, N. Hayat, R. Rajan, A. Al-Mahrooqi, A. Gupta, M. U. Salman, G. Gosal, B. Kanakiya, C. Chen, N. Vassilieva, B. B. Amor, M. A. Pimentel, and S. Khan, "Med42 – evaluating fine-tuning strategies for medical llms: Full-parameter vs. parameter-efficient approaches," 2024. [Online]. Available: https://arxiv.org/abs/2404.14779
[2] A. Toma, P. R. Lawler, J. Ba, R. G. Krishnan, B. B. Rubin, and B. Wang, "Clinical camel: An open expert-level medical language model with dialogue-based knowledge encoding," 2023. [Online]. Available: https://arxiv.org/abs/2305.12031
[3] Z. Chen, A. Hernández-Cano, A. Romanou, A. Bonnet, K. Matoba, F. Salvi, M. Pagliardini, S. Fan, A. Köpf, A. Mohtashami, A. Sallinen, A. Sakhaeirad, V. Swamy, I. Krawczuk, D. Bayazit, A. Marmet, S. Montariol, M.-A. Hartley, M. Jaggi, and A. Bosselut, "Meditron-70b: Scaling medical pretraining for large language models," 2023.
[4] T. Han, L. C. Adams, J.-M. Papaioannou, P. Grundmann, T. Oberhauser, A. Löser, D. Truhn, and K. K. Bressem, "Medalpaca – an open-source collection of medical conversational ai models and training data," 2023. [Online]. Available: https://arxiv.org/abs/2304.08247
[5] Y. Kumah-Crystal, S. Mankowitz, P. Embi, and C. U. Lehmann, "ChatGPT and the clinical informatics board examination: the end of unproctored maintenance of certification?" Journal of the American Medical Informatics Association, vol. 30, no. 9, pp. 1558–1560, Jun. 2023, eprint: https://academic.oup.com/jamia/article-pdf/30/9/1558/51141429/ocad104.pdf. [Online]. Available: https://doi.org/10.1093/jamia/ocad104
[6] T. H. Kung, M. Cheatham, A. Medenilla, C. Sillos, L. De Leon, C. Elepaño, M. Madriaga, R. Aggabao, G. Diaz-Candido, J. Maningo, and V. | https://arxiv.org/abs/2505.16648v1 |
Tseng, "Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models," PLOS Digital Health, vol. 2, no. 2, p. e0000198, 2023. | https://arxiv.org/abs/2505.16648v1 |