(Table continued; the first row is truncated in the source.)

Embedding  Approach           Model              Accuracy  Precision  Recall  F1      ROC AUC
(BERT)     Machine Learning   (row truncated)       …         …       0.8282  0.8182  0.9059
                              XGBoost            0.9153    0.8676     0.8963  0.8807  0.9740
           Deep Learning      RNN                0.8290    0.8474     0.6273  0.7181  0.8742
                              BiLSTM             0.8384    0.8798     0.6263  0.7297  0.8843
                              BiLSTM-Attention   0.8422    0.8887     0.6306  0.7364  0.8760
RoBERTa    Machine Learning   RF                 0.7849    0.7985     0.8046  0.7972  0.8775
                              SVM                0.7864    0.7939     0.8151  0.8004  0.8665
                              Logistic           0.8238    0.8163     0.8038  0.8081  0.9164
                              Ridge              0.8337    0.8207     0.8246  0.8216  0.9255
                              XGBoost            0.9190    0.8675     0.9059  0.8857  0.9755
           Deep Learning      RNN                0.8403    0.8643     0.6539  0.7413  0.8926
                              BiLSTM             0.8309    0.8381     0.6357  0.7196  0.8833
                              BiLSTM-Attention   0.8102    0.8190     0.5772  0.6753  0.8221

The evaluation results revealed that, in general, machine learning models outperformed deep learning models in terms of predictive performance. Specifically, the accuracy of machine learning models ranged from 0.7849 to 0.9636, with ROC AUC values spanning from 0.8557 to 0.9923. In contrast, deep learning models demonstrated relatively weaker performance, with accuracy values between 0.7463 and 0.8422, and ROC AUC values ranging from 0.7455 to 0.8926. Furthermore, machine learning models utilizing the LIWC feature set consistently outperformed those based on word embeddings. For example, the SVM model, incorporating LIWC features, achieved the highest accuracy (0.9636) and ROC AUC (0.9923). By contrast, the XGBoost model, which incorporates word embeddings derived from BERT or RoBERTa, exhibited optimal performance within its respective class of models, achieving accuracies of 0.9153 and 0.9190, alongside ROC AUC values of 0.9740 and 0.9755, respectively. Overall, the SVM model with the LIWC feature set emerged as the top performer, demonstrating outstanding classification accuracy and discriminative power. These findings underscored the significant advantage of combining LIWC features with machine learning algorithms for predictive tasks, particularly in evaluating MI performance.
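The comparison above relies heavily on ROC AUC as the headline metric. ROC AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one, and can be computed directly from that definition. A minimal, self-contained sketch (illustrative only, not the study's implementation):

```python
def roc_auc(y_true, scores):
    """ROC AUC via pairwise comparisons (Mann-Whitney U formulation):
    the fraction of positive/negative pairs in which the positive example
    receives the higher score; ties count as half a win."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that ranks every high-UPQ response above every low-UPQ one
# scores 1.0; chance-level ranking scores about 0.5.
```

In practice, the models compared here would supply `scores` as predicted probabilities of high UPQ; library implementations such as scikit-learn's `roc_auc_score` perform an equivalent rank-based computation.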
2) Comparison Analysis of UPQ Between GPT-4 and Human Therapists with Clients

The predictive model, an SVM classifier based on LIWC features, was utilized to assess the UPQ of responses generated by GPT-4. Given that the UPQ of human therapists was represented as a binary variable, the McNemar test was employed for statistical analysis, alongside visualizations of the UPQ distribution for both groups. The odds ratio (OR) was additionally utilized as an effect size measure to quantify the differences in UPQ between the two groups. As indicated in Fig. 2 and Table III, human therapists achieved a significantly higher UPQ (52.69% vs. 38.45%, χ² = 25.886, p < 0.001). The OR value suggested that the odds of high UPQ in the human therapist group were 47% greater than those in the GPT-4 group. These results address RQ1.

Fig. 2. This figure compares the UPQ scores between human therapists and GPT-4, presenting the frequency percentages of low and high UPQ values.

TABLE III
COMPARISON OF HIGH UPQ PERFORMANCE BETWEEN HUMAN THERAPISTS AND GPT-4 USING MCNEMAR TEST, INCLUDING CHI-SQUARE (χ²), P-VALUE (P), AND ODDS RATIO (OR)

Project                    N    Human High UPQ  GPT-4 High UPQ  χ²      P       OR
Human therapist vs GPT-4   632  333 (52.69%)    243 (38.45%)    25.886  <0.001  1.47

B. Evaluating GPT-4’s Linguistic Behavior in Motivational Interview via Intrinsic Metrics by Explanatory Modeling

To gain deeper insights into and evaluate GPT-4’s MI performance, we
https://arxiv.org/abs/2505.17380v1
developed a computational evaluation framework. Initially, the global SHAP method was employed for sensitivity analysis and the local SHAP method for feature pruning, leading to the construction of a more refined predictive model. These results were then integrated with relevant theoretical paradigms to establish a comprehensive, client-centered computational evaluation framework tailored for generative AI. This framework enabled an in-depth quantitative assessment of responses generated by GPT-4 and human therapists.

1) Extraction of intrinsic metrics: Identifying psychological linguistic cues that influence UPQ through explainable ML

The initial feature set in this study consisted of 127 variables, which was subsequently reduced to 21 through feature selection with RFECV. In this section, the global and local SHAP methods were utilized for sensitivity analysis and feature selection to enhance the interpretability of the predictive model.

Fig. 3. This figure presents the composition ratio of top predictive features ranked by SHAP value importance (a) and ROC AUC performance against the number of top features (b) for feature selection evaluation.

First, SHAP values were computed to rank features based on their relative importance, as illustrated in Fig. 3(a). Following this, a forward stepwise feature selection approach was performed, where features were incrementally added to the model in descending order of their SHAP importance. Given the class imbalance in the dataset, the performance of the updated model was evaluated using the ROC AUC metric after each feature addition. As illustrated in Fig. 3(b), the model achieved a local optimum with 17 features, reaching an accuracy of 0.9703 and an ROC AUC of 0.9918, resulting in a more refined and interpretable model.
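The forward stepwise procedure described above — ranking features by SHAP importance, adding them one at a time, and re-scoring after each addition — can be sketched generically. Here `evaluate` stands in for refitting the classifier and computing held-out ROC AUC, and the SHAP ranking is assumed to be precomputed (an illustrative sketch, not the study's code):

```python
def forward_select(ranked_features, evaluate):
    """Add features one at a time in descending importance order,
    re-scoring the model after each addition, and return the subset
    (a prefix of the ranking) with the best observed score."""
    best_score, best_k = float("-inf"), 0
    for k in range(1, len(ranked_features) + 1):
        score = evaluate(ranked_features[:k])  # e.g., refit SVM, ROC AUC
        if score > best_score:
            best_score, best_k = score, k
    return ranked_features[:best_k], best_score

# Hypothetical scores that peak at three features, mimicking the study's
# local optimum at 17 of 21 candidates:
auc_by_size = {1: 0.90, 2: 0.94, 3: 0.97, 4: 0.96, 5: 0.95}
subset, auc = forward_select(list("abcde"), lambda f: auc_by_size[len(f)])
```

Because only prefixes of the importance ranking are evaluated, the procedure needs at most one model fit per feature, rather than one per subset.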
As demonstrated in Table IV, the accuracy of the pruned model (0.9703) slightly outperformed that of the original model (0.9636), accompanied by a modest increase in ROC AUC, rising from 0.9898 to 0.9918, further underscoring the enhanced performance and efficiency of the pruned model.

Table IV
COMPARISON OF MODEL PERFORMANCE BEFORE AND AFTER FEATURE PRUNING

Model               Accuracy  ROC AUC
Original SVM Model  0.9636    0.9898
Pruned SVM Model    0.9703    0.9918

As illustrated in Fig. 4, a local interpretability analysis utilizing SHAP values was conducted, subsequent to the global SHAP analysis, to provide a more comprehensive understanding of both the magnitude and direction of each feature’s influence on UPQ.

Fig. 4. This figure presents the SHAP-based local interpretability analysis of feature contributions, including feature importance and directional effects.

Table V
THE IMPACT OF INTRINSIC METRICS: IMPORTANCE RANKING AND INFLUENCE ON UPQ

Rank  Category of Intrinsic Metric        Intrinsic Metric     Relation to UPQ  Importance Score
1     Advice (without permission)         advice               Negative         5.540897
2     REC (Complex Reflection)            complex              Positive         1.930714
3     Analytical Thinking                 article              Negative         1.746483
4     Professionalism                     Apostrophes          Negative         1.710157
5     RES (Simple Reflection)             simple               Positive         1.700861
6     Analytical Thinking                 impersonal pronouns  Positive         1.685375
7     Direct                              need                 Negative         1.457068
8     Empathy                             Comma                Negative         1.14083
9     Empathy                             Perception           Positive         0.7562
10    Warn                                health               Negative         0.605685
11    Empathy                             work                 Positive         0.502621
12    Confrontation                       conflict             Negative         0.484878
13    Warn                                substances           Negative         0.434961
14    Raise Concern (without permission)  discrepancy          Negative         0.39475
15    Negotiation                         LSM                  Negative         0.35326
16    Structure                           time                 Negative         0.261476
17    Warn                                Physical             Negative         0.182461

As presented in Table V, these intrinsic metrics were aligned with the MI dimensions from the existing literature, facilitating an exploration of the relationships between the intrinsic and extrinsic metrics. Based on the importance scores, the features were ranked in descending order to establish a UPQ-centered computational evaluation framework for MI. This analysis delineates how intrinsic metrics shape MI quality. The “advice” feature in the “Advice (without permission)” category exhibited a strong negative correlation with UPQ, recording the highest importance score (5.540897). In contrast, the “Reflection” category revealed positive correlations with UPQ for both “Complex Reflection” (1.930714) and “Simple Reflection” (1.700861). In the “Analytical Thinking” dimension, “article” showed a negative association with UPQ (1.746483), whereas “impersonal pronouns” were positively linked to UPQ (1.685375). The “Apostrophes” feature under the “Professionalism” category was negatively associated with UPQ (1.710157). Within the “Direct” category, the “need” feature demonstrated a negative relationship with UPQ (1.457068). Regarding the “Empathy” category, both “Perception” (0.7562) and “work” (0.502621) were positively correlated with UPQ, while “Comma” (1.14083) exhibited a negative association. The “LSM” feature in the “Negotiation” category showed a negative correlation with UPQ (0.35326). In the “Warn” category, the features “health”, “substances”, and “Physical” were negatively associated with UPQ, with importance scores of 0.605685, 0.434961, and 0.182461, respectively.
Additional negatively correlated features included “conflict” within the “Confrontation” category (0.484878) and “discrepancy” in the “Raise Concern (without permission)” category (0.39475). The “time” feature under the “Structure” category also showed a negative association with UPQ (0.261476). In summary, features with higher importance scores and negative associations, such as “advice”, “articles”, and “Apostrophes”, exerted a significant influence on UPQ, while features positively correlated with UPQ, such as “Complex Reflection” and “Simple Reflection”, facilitated an enhancement in UPQ.

2) Evaluating GPT-4’s performance in Motivational Interview by intrinsic metrics

To examine the impact of intrinsic metrics on UPQ, paired-sample t-tests were employed to compare the deployment of MI strategies and linguistic cues between human therapists and GPT-4, thereby elucidating differences in their performance across these intrinsic metrics. The results are summarized in Table VI.

Table VI
COMPARATIVE ANALYSIS OF GPT-4 AND HUMAN THERAPISTS: PERFORMANCE ON INTRINSIC AND EXTRINSIC METRICS¹

Category  Category of Intrinsic Metric        Intrinsic Metric          Relation to UPQ  Cohen’s d (GPT-4 − Human Therapists)  Importance Score
MIIN      Advice (without permission)         advice                    Negative         -0.234***                             5.540897
          Direct                              need                      Negative          0.151***                             1.457068
          Warn                                Physical                  Negative         -0.195***                             0.182461
                                              health                    Negative         -0.169***                             0.605685
                                              substances                Negative         -0.125**                              0.434961
          Raise Concern (without permission)  discrepancy               Negative          0.632***                             0.39475
          Confrontation                       conflict                  Negative          0.068                                0.484878
MICO      Reflection                          RES (simple reflection)   Positive         -0.218***                             1.700861
                                              REC (complex reflection)  Positive          0.166***                             1.930714
Other     Empathy                             Perception                Positive          0.396***                             0.7562
                                              Comma                     Negative         -0.571***                             1.14083
                                              work                      Positive         -0.384***                             0.502621
          Professionalism                     Apostrophes               Negative          0.782***                             1.710157
          Negotiation                         LSM                       Negative          0.478***                             0.35326
          Structure                           time                      Negative          0.014                                0.261476
          Analytical Thinking                 impersonal pronouns       Positive         -0.060                                1.685375
                                              article                   Negative          0.540***                             1.746483

¹ Asterisks in Table VI denote levels of statistical significance: * for p < 0.05, ** for p < 0.01, and *** for p < 0.001.

First, as illustrated in Table VI, although GPT-4’s overall performance on intrinsic metrics marginally lagged behind that of human therapists, it exhibited a distinct advantage in MIIN behaviors. Specifically, GPT-4 outperformed human therapists in the “Advice (without permission)” dimension (Cohen’s d = –0.234, p < 0.001) and across all subcategories of “Warn”, including “Physical” (Cohen’s d = –0.195, p < 0.001), “Health” (Cohen’s d = –0.169, p < 0.001), and “Substances” (Cohen’s d = –0.125, p < 0.01). However, in the MI process, GPT-4’s increased expression frequency in the “Direct” dimension (Cohen’s d = 0.151, p < 0.001) and the “Raise Concern (without permission)” dimension (Cohen’s d = 0.632, p < 0.001) detrimentally influenced UPQ. Second, GPT-4’s performance on MICO behaviors aligns with that of human therapists, with a particular emphasis on the “Reflection” dimension in this study. Notably, GPT-4 employed “Complex Reflection” at a significantly higher frequency than human therapists (Cohen’s d = 0.166, p < 0.001), which led to a marked improvement in UPQ. In contrast, GPT-4 showed a significant reduction in the use of “Simple Reflection” (Cohen’s d = -0.218, p < 0.001), which negatively impacted UPQ. Third, GPT-4’s performance in other behaviors generally lagged behind that of human therapists. In the “Empathy” dimension, GPT-4 demonstrated an increase in the use of “Perception” expressions (Cohen’s d = 0.396, p < 0.001) and a decrease in the use of “Comma” (Cohen’s d = –0.571, p < 0.001). However, GPT-4 displayed a marked reduction in its engagement with “Work” discussions (Cohen’s d = –0.384, p < 0.001). In the “Professionalism” dimension, GPT-4 demonstrated a higher frequency of “Apostrophes” (Cohen’s d = 0.782, p < 0.001).
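The Cohen’s d values in Table VI come from paired-sample comparisons; for paired data, d is conventionally the mean of the per-pair differences divided by their standard deviation. A minimal stdlib sketch (illustrative, with made-up counts rather than the study’s data):

```python
from statistics import mean, stdev

def cohens_d_paired(x, y):
    """Cohen's d for paired samples: mean of the per-pair differences
    divided by their sample standard deviation. A positive d means the
    first group used the feature more often than the second."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / stdev(diffs)

# Hypothetical per-response feature counts for two matched groups:
gpt4 = [3, 5, 4, 6]
human = [2, 3, 1, 2]
d = cohens_d_paired(gpt4, human)
```

The accompanying p-values would come from the paired-sample t-test on the same differences (e.g., `scipy.stats.ttest_rel`).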
In the “Negotiation” dimension, GPT-4 showed an increase in LSM (Cohen’s d = 0.478, p < 0.001), and in the “Analytical Thinking” dimension, there was a marked increase in the use of “Articles” (Cohen’s d = 0.540, p < 0.001), all of which may have hindered the enhancement of UPQ. Furthermore, GPT-4 showed no significant difference in the use of “impersonal pronouns” compared to human therapists (Cohen’s d = –0.060, p > 0.05). At the same time, in the “Structure” dimension, GPT-4’s use of “Time” (Cohen’s d = 0.014, p > 0.05) did not differ significantly from that of human therapists. From the perspective of feature importance in predicting UPQ, human therapists generally outperformed GPT-4 on most key intrinsic metrics with substantial impacts on UPQ, such as “Simple Reflection”, “Apostrophes”, “articles”, and “need”. However, GPT-4 surpassed human therapists only in “Advice” and “Complex Reflection”. These results indicate that while GPT-4 exhibited notable strengths in specific intrinsic metrics, its overall performance remained inferior to that of human therapists. Consequently, to improve GPT-4’s performance on extrinsic metrics, it is pivotal to prioritize enhancements
in its linguistic cues and MI strategies, given that these foundational metrics constitute the cornerstone for meaningful advancements in both overall assessment outcomes and therapeutic effectiveness. These results address RQ2.

C. Validation of the Integrative Computational Evaluation Framework

To validate the pivotal role of the computational evaluation framework in improving GPT-4’s MI performance, this study employed zero-shot prompting strategies informed by prior findings to design the CoT prompt engineering method grounded in the framework. Responses generated by GPT-4 using this customized prompt were systematically compared to those of human therapists.

1) Developing zero-shot prompts: Utilizing insights from UPQ-centered explanatory modeling to enhance GPT-4’s responses

Table VII
PROMPT FRAMEWORK FOR ENHANCING GPT-4’S UPQ IN MOTIVATIONAL INTERVIEW
(Columns: dimension of MI; category of intrinsic metric; strategy promoting UPQ; intrinsic metrics to increase (+) or decrease (−); customized prompt.)

MIIN
- Advice (without permission). Strategy: Seek consent and encourage client-driven solutions to foster collaboration. Metrics: − advice. Prompt: Avoid advice without permission to boost autonomy, reduce resistance, and strengthen the alliance.
- Direct. Strategy: Reduce the use of directive language to enhance collaboration and autonomy, and decrease resistance. Metrics: − need. Prompt: Avoid phrases like ‘need’ or ‘must,’ and foster collaboration to support client-driven solutions.
- Warn. Strategy: Avoid warning behaviors that focus on risks, as they may increase resistance. Metrics: − physical, health, substance. Prompt: Foster collaboration and explore the client’s motivations, minimizing emphasis on health risks like smoking or drinking.
- Raise Concern (without permission). Strategy: Always seek permission before raising concerns to align with client readiness and respect their autonomy. Metrics: − discrepancy. Prompt: Avoid excessive focus on discrepancies and always seek permission before addressing them.
- Confrontation. Strategy: Foster a collaborative, non-judgmental environment to encourage open dialogue. Metrics: − conflict. Prompt: Avoid conflict-related language, as it damages the therapeutic alliance and increases resistance.

MICO
- Reflection. Strategy: Use reflective listening to validate feelings, deepen exploration, and strengthen collaboration. Metrics: + RES (Simple Reflection), + REC (Complex Reflection). Prompt: Combine simple reflections to rephrase client words and foster rapport with complex reflections to infer deeper meanings and evoke change talk.

Other
- Empathy. Strategy: Show high empathy to build trust, reduce resistance, and foster motivation for change. Metrics: + Perception, + work; − Comma. Prompt: Show empathy by reflecting emotions, perceptions, and work-related topics. Avoid long pauses (Comma).
- Professionalism. Strategy: Use formal, precise language to build trust and strengthen the alliance. Metrics: − Apostrophes. Prompt: Avoid casual elements like overused apostrophes that undermine credibility.
- Negotiation. Strategy: Negotiation aligns goals, resolves ambivalence, and fosters commitment to change. Metrics: − LSM. Prompt: Instead of overemphasizing LSM, focus more on the tasks within the negotiation strategy during the MI process.
- Structure. Strategy: Use structure and timelines sparingly to avoid disrupting engagement or reducing autonomy. Metrics: − time. Prompt: Minimize rigid transitions and time focus to support flow.
- Analytical Thinking. Strategy: Overuse of analytical language can make therapy feel impersonal, weakening empathy and therapeutic alliance. Metrics: + impersonal pronouns; − article. Prompt: Analytical thinking, marked by high article use and low pronoun use, may overly focus on logical clarity, reducing emotional connection.

As demonstrated in Table VII, this study aimed to enhance the response quality of GPT-4 in the MI task by designing a customized prompt
based on the computational evaluation framework. Leveraging a CoT methodology, the customized prompt was systematically integrated with the original RACE prompt to create a tailored zero-shot prompt. Comprehensive details regarding both the original and customized prompts are provided in the supplementary materials.

2) Comparative analysis of the UPQ between GPT-4-Prompted and Human Therapists in MI

To validate the effectiveness of the computational evaluation framework in enhancing GPT-4’s response performance on the extrinsic metric, a McNemar test was conducted to examine whether significant differences in UPQ existed across three groups of responses, with OR as the effect size. The results, presented in Fig. 5 and Table VIII, revealed a statistically significant improvement in UPQ for GPT-4-Prompted responses compared to standard GPT-4 responses at a significance level of 0.05 (α = 0.05) (46.84% vs. 38.45%, χ² = 0.292, p = 0.002). However, compared to human therapists, the UPQ of GPT-4 enhanced by customized zero-shot prompts remained marginally lower (46.84% vs. 52.69%, χ² = 4.966, p = 0.026). The OR values indicated that while GPT-4-Prompted responses were 21% more likely to achieve high UPQ compared to standard GPT-4 responses, they were 19% less likely to do so than human therapists.

Fig. 5. This figure compares the UPQ scores between human therapists, GPT-4, and GPT-4-Prompted, presenting the frequency percentages of low and high UPQ values.
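The McNemar comparisons reported here operate on paired binary outcomes and depend only on the discordant pair counts. A minimal sketch with the usual continuity correction (an illustration of the test itself, not the exact procedure or software used in this study; the counts below are hypothetical):

```python
from math import erfc, sqrt

def mcnemar(b, c):
    """McNemar test from discordant pair counts: b pairs where only the
    first condition is rated 'high', c pairs where only the second is.
    Returns the continuity-corrected chi-square statistic and its p-value
    (survival function of the chi-square distribution with 1 df)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = erfc(sqrt(stat / 2))  # P(chi2_1 > stat)
    return stat, p

# For matched pairs, b / c is the conditional odds-ratio estimate that is
# often reported alongside the test as an effect size.
stat, p = mcnemar(10, 20)  # hypothetical discordant counts
```

Library implementations (e.g., `statsmodels.stats.contingency_tables.mcnemar`) also offer an exact binomial variant, which is preferable when b + c is small.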
Table VIII
COMPARISON OF HIGH UPQ PERFORMANCE BETWEEN GPT-4-PROMPTED AND HUMAN THERAPISTS AS WELL AS GPT-4 USING MCNEMAR TEST, INCLUDING CHI-SQUARE (χ²), P-VALUE (P), AND ODDS RATIO (OR)

Project                             N    χ²     P      OR
GPT-4-Prompted vs Human Therapists  632  4.966  0.026  0.81
GPT-4-Prompted vs GPT-4             632  0.292  0.002  1.21

3) Comparison Analysis in the Intrinsic Metric between GPT-4-Prompted and Human Therapists

To assess whether customized prompts enhance GPT-4’s performance on intrinsic metrics, a paired-sample t-test was conducted to compare the use of intrinsic metrics between GPT-4-Prompted and standard GPT-4 responses, as well as between GPT-4-Prompted responses and those of human therapists in MI. Results are detailed in Table IX. First, a comparative analysis between GPT-4-Prompted and human therapists revealed that, although GPT-4-Prompted remained less proficient than human therapists in overall intrinsic metrics, it exhibited domain-specific advantages. Specifically, within the MIIN dimension, GPT-4-Prompted demonstrated a substantial reduction in the use of “Advice (without permission)” (-0.478***). Additionally, GPT-4-Prompted reduced the frequency of “Warn” associated with substance-related vocabulary (-0.138***). Next, in the other behaviors dimension, GPT-4-Prompted achieved a marked increase in the use of “Perception” in the Empathy dimension (0.559***). Second, compared with the baseline GPT-4, the customized prompting methodology demonstrated clear enhancements in intrinsic metrics. Specifically, in the MIIN dimension, a significant reduction in “Advice (without permission)” (-0.258***) usage positively influenced the UPQ. In the MICO dimension, the use of “Simple Reflection” (0.093*) increased. In the other behaviors, GPT-4-Prompted exhibited significant increases in “Perception” (0.146***) and “Work”
(0.136***) discussions, reflecting higher levels of empathy. Additionally, a marked reduction in “Apostrophes” (-0.732***) contributed to enhanced professionalism. Furthermore, the decrease in LSM (-0.145***) facilitated improvements in UPQ scores. At the same time, the reduction in the use of “Articles” (-0.147***) effectively lowered the prevalence of “Analytical Thinking”. Third, the comparative analysis highlighted that the customized prompting framework is pivotal in enhancing GPT-4’s performance on intrinsic metrics. However, GPT-4-Prompted exhibited notable limitations in specific dimensions, particularly “Direct”, “Warn”, “Raise Concern (without permission)”, and “Structure”, where it underperformed compared to both the baseline GPT-4 and human therapists. Addressing these deficits in intrinsic metrics is critical to improving GPT-4-Prompted’s performance on extrinsic metrics.

Table IX
COMPARATIVE ANALYSIS OF MI PERFORMANCE: EVALUATING GPT-4-PROMPTED, GPT-4, AND HUMAN THERAPISTS ON INTRINSIC AND EXTRINSIC METRICS¹
(Cohen’s d columns: first, GPT-4-Prompted − Human Therapists; second, GPT-4-Prompted − GPT-4.)

Category  Category of Intrinsic Metric        Intrinsic Metric          Relation to UPQ  d (vs Human)  d (vs GPT-4)  Importance Score
MIIN      Advice (without permission)         advice                    Negative         -0.478***     -0.258***     5.540897
          Direct                              need                      Negative          0.376***      0.257***     1.457068
          Warn                                Physical                  Negative          0.190***      0.190***     0.182461
                                              health                    Negative          0.000         0.216***     0.605685
                                              substances                Negative         -0.138***     -0.055        0.434961
          Raise Concern (without permission)  discrepancy               Negative          0.723***      0.094*       0.39475
          Confrontation                       conflict                  Negative          0.035        -0.043        0.484878
MICO      Reflection                          RES (simple reflection)   Positive         -0.133***      0.093*       1.700861
                                              REC (complex reflection)  Positive          0.039        -0.143***     1.930714
Other     Empathy                             Perception                Positive          0.559***      0.146***     0.7562
                                              Comma                     Negative          0.451***      1.734***     1.14083
                                              work                      Positive         -0.335***      0.136***     0.502621
          Professionalism                     Apostrophes               Negative          0.201***     -0.732***     1.710157
          Negotiation                         LSM                       Negative          0.367***     -0.145***     0.35326
          Structure                           time                      Negative          0.163***      0.158***     0.261476
          Analytical Thinking                 impersonal pronouns       Positive         -0.479***     -0.444***     1.685375
                                              article                   Negative          0.406***     -0.147***     1.746483

¹ Asterisks in Table IX denote levels of statistical significance: * for p < 0.05, ** for p < 0.01, and *** for p < 0.001.

Ⅴ. DISCUSSION

This study utilized a dataset containing nearly 600 real MI dialogues. First, based on an integrative modeling framework, machine learning, deep learning, and NLP techniques were employed to predict human responses, thereby developing a predictive model to assess the UPQ of the LLM within the context of MI. Next, grounded in the MI theory model proposed by Miller and Rollnick, explanatory modeling was conducted to identify key MI strategies and psychological linguistic cues employed by therapists that influence UPQ. Following this, building on the outcomes of the explanatory modeling, we proposed a computational evaluation framework that integrates both extrinsic and intrinsic metrics. Additionally, leveraging this computational evaluation framework, we tailored CoT prompts for the LLM and collected the responses enhanced by the customized prompts. Subsequently, a comparison of MI outcomes between the LLM, human therapists, and the original unprompted LLM was conducted to validate the effectiveness of the computational evaluation framework in enhancing GPT-4’s response quality within the context of MI.

A. Principal Findings

The experimental results indicated that the LLM demonstrated exceptional MI performance, with significant improvements observed when customized prompts derived from the comprehensive computational evaluation framework were applied. Overall,
the methodologies employed in this study effectively enhanced the LLM’s MI performance while ensuring transparency and interpretability of the optimization process. The comprehensive computational evaluation framework combined the extrinsic metric with intrinsic metrics, revealing the intricate relationship between therapist skills and interview outcomes. This approach provided new analytical tools and perspectives for a deeper understanding and optimization of MI practices.

1) GPT-4’s Capabilities in MIs Compared to Human Experts

LLMs hold significant potential to enhance access to mental health support through scalable interventions capable of reaching extensive populations [34], [22]. To illustrate this potential, developers and end-users have shared anecdotal evidence on social media and other platforms, suggesting that LLMs, such as ChatGPT, exhibit remarkable attributes akin to human therapists, including therapeutic alliance and active listening skills [21], [104]. Consequently, there has been growing consideration among both developers and users to deploy these models as alternatives to human therapists and established, evidence-based psychotherapeutic modalities. As a result, an increasing number of individuals are turning to LLM-based interventions to address their mental health concerns [19], [21], [104]. However, our research indicates that LLMs, such as GPT-4, perform significantly below human therapists in terms of the UPQ, particularly in facilitating high-quality MI interactions. Human therapists scored substantially higher on the UPQ metric (52.69% versus 38.45%, χ² = 25.886, p < 0.001), highlighting the persistent limitations of LLMs in managing complex psychological interactions and eliciting emotional resonance. These results suggest that LLMs still face significant challenges in delivering effective MI, especially when dealing with intricate emotional and interpersonal dynamics.
This underscores the need for further research to ensure the provision of high-quality care and to address the ethical and practical challenges associated with integrating LLMs into mental health services. These findings align with expert concerns regarding the lack of robust evidence supporting the efficacy of LLMs and the potential risks linked to their use in mental health support contexts [22], [23], [24]. Ensuring that mental health interventions maintain high standards of care remains paramount, and ongoing studies are essential to validate the role of LLMs in this sensitive and critical field.

2) The Role of the UPQ-Centered Evaluation Framework in Assessing GPT-4’s Performance in the Processes and Outcomes of MIs

This study categorizes the intrinsic metrics into three primary types, MIIN behaviors, MICO behaviors, and other behaviors [105], as introduced in the previous sections. MIIN behaviors refer to actions that conflict with the core principles of Motivational Interviewing (MI), thereby exerting a detrimental effect on UPQ. MICO behaviors, in contrast, align with MI principles [38] and help improve UPQ by fostering client engagement and collaboration. Other behaviors are those that do not fit neatly into either category but still influence the therapy process. This research indicates that certain LLM-driven behaviors function as a “double-edged sword” in psychotherapy applications. On the positive side, GPT-4’s emphasis on complex reflections, a critical MICO behavior, helps deepen the exploration of client issues. This aligns closely with high-quality MI sessions [106]. Moreover, the incorporation of perception-related terminology heightens empathy, as it allows therapists to accurately reflect the client’s lived experiences, thus enhancing self-reflection and motivation [107], [108], [109], [110]. On the negative side, however, GPT-4 may also over-utilize behaviors characteristic of low-quality therapy. These include excessive use of directive vocabulary (“have to”, “need”, “must”), which can be perceived as authoritarian (Direct) and raise client distress when combined with terms like “discrepancy.” Such MIIN behaviors undermine UPQ by increasing client anxiety. In addition, overusing apostrophes can lead to an overly informal tone [111], while excessive use of definite articles [112] and LSM [113] may shift the focus away from rapport-building or negotiation tasks. These actions deviate from recommended therapeutic practices [114], [115] and risk damaging the therapeutic alliance and quality of care. A moderate reduction in some MIIN behaviors can be advantageous. For instance, fewer “warning” expressions about health or substance use may lower client resistance, while minimizing frequent silent pauses can help sustain empathic engagement [116], [117]. These adjustments can elicit more self-motivated client statements [108]. However, overly curtailing beneficial MICO behaviors, particularly simple reflections, poses a threat to MI efficacy. Simple reflections help therapists grasp the client’s perspective and daily challenges [118], [119]. When GPT-4 reduces simple reflections excessively, it risks missing crucial nuances in the client’s work or personal life, thereby dampening empathic capacity and increasing relational resistance. Balancing the frequency of simple versus complex reflections is thus critical to optimize the therapeutic alliance and support client autonomy.
Our findings also reveal that general-purpose LLMs, including GPT-4, often overemphasize problem-solving while underemphasizing crucial MI behaviors such as open-ended questioning. Although problem-solving skills can enhance user satisfaction in general contexts, they may inadvertently replicate low-quality therapy traits. This imbalance possibly stems from Reinforcement Learning from Human Feedback (RLHF) processes, which primarily optimize for short-term or generic user metrics [120], [121]. Within mental health settings, however, successful MI relies on long-term engagement and sustained behavioral changes [122]. While GPT-4 may reduce comma usage or utilize more perception-related expressions, both beneficial for MI, excessive reliance on contractions (apostrophes), articles, and demand-related vocabulary like “need” could undermine client autonomy and rapport. To accommodate diverse human preferences and domain-specific requirements, a multi-stage RLHF alignment is necessary, considering not only immediate “usefulness” but also adherence to psychotherapeutic guidelines [120], [121]. Recent studies support the idea of multi-value alignment [123], [124], underscoring the importance of integrating human-centered approaches for healthcare scenarios.

3) The Facilitating Role of the Integrative Computational Evaluation Framework in Prompting LLM Performance in MIs

As LLMs become increasingly integrated into both novel and existing mental health interventions, spanning commercial sectors [12], [14] and academic environments [13], [15], [125], establishing transparent and reliable evaluation methodologies is critically important. The framework proposed in this study offers a
https://arxiv.org/abs/2505.17380v1
promising preliminary exploration toward achieving this objective. Based on the UPQ-centered evaluation, it is evident that GPT-4 can further optimize its Simple Reflections, empathic expressions, and professional communication style to align more closely with core MI principles (e.g., avoiding unsolicited advice and excessive emphasis on client needs). Strengthening these aspects will help maintain client autonomy while reducing relational distance, thereby fostering a more collaborative therapeutic alliance. In terms of MICO (Motivational Interviewing Consistent) behaviors, GPT-4 with customized prompts increased the use of Simple Reflection, one of the core OARS (Open-ended questions, Affirmations, Reflections, and Summaries) techniques, thereby enhancing its ability to understand and respond to clients' core issues. The model also maintained a high frequency of Complex Reflection, reflecting deeper exploration of the client's inner experiences. Additionally, it excelled in Empathy-related word usage (e.g., increased Perception vocabulary), signifying improved attunement to clients' emotional and cognitive states. Concerning professionalism and analytical thinking, GPT-4 with customized prompts reduced the use of apostrophes (thus less informal) and articles (lowering excessive analytical focus), both of which contributed to more supportive and natural clinician–client interactions. However, GPT-4 with customized prompts still exhibits shortcomings in certain MIIN behaviors, notably Direct commands, Warn, and Raise Concern (without permission), where its performance remains below that of human therapists. These behaviors can undermine client autonomy or increase resistance, thereby reducing UPQ. Furthermore, the Structure dimension shows room for optimization, suggesting that GPT-4 may benefit from more refined session organization and smoother transitions in line with MI guidelines.
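The apostrophe and article adjustments discussed above operate on countable surface features of each utterance. As a rough illustration, the sketch below computes per-100-word rates for these two markers; the function is our own simplification and only approximates the LIWC categories actually used in the study.

```python
import re

# Illustrative proxies for two surface features discussed in the text:
# apostrophe rate (informality) and article rate (analytical focus).
# These simple counts approximate, but do not replicate, LIWC scoring.

ARTICLES = {"a", "an", "the"}

def surface_feature_rates(utterance: str) -> dict:
    """Return per-100-word rates for apostrophes and articles."""
    tokens = re.findall(r"[A-Za-z']+", utterance.lower())
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    apostrophes = sum(tok.count("'") for tok in tokens)
    articles = sum(1 for tok in tokens if tok in ARTICLES)
    return {
        "apostrophes_per_100": 100.0 * apostrophes / n,
        "articles_per_100": 100.0 * articles / n,
    }

# Example: a contraction-heavy, article-heavy therapist turn
rates = surface_feature_rates("You're saying the plan isn't working the way you'd hoped.")
```

A monitoring script along these lines could flag generated responses whose apostrophe rate exceeds a chosen threshold and re-prompt the model for a more formal rewording.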
By targeting these gaps, future iterations of GPT-4 can strengthen its alignment with established MI principles and better replicate the nuanced approaches used by experienced human therapists. Overall, this evaluation framework, emphasizing human–AI collaborative design and customized zero-shot prompts, demonstrates that LLMs can significantly narrow the gap with human therapists, both on extrinsic (UPQ) and intrinsic (MIIN/MICO/other behaviors) metrics. By integrating modeling insights and prompt engineering with feedback from mental health experts [13], [126], developers can ensure that these systems adhere to core psychotherapy considerations of quality, safety, and ethics [127], [128]. This approach not only refines GPT-4's performance but also underscores the critical importance of long-term and domain-specific alignment in mental health applications, thereby maintaining clarity, precision, and client well-being in academic and clinical contexts.

B. Limitations and Future Research

Despite the progress made, several limitations remain, which highlight opportunities for future research. First, key intrinsic metrics critical to UPQ were systematically identified based on a UPQ-centered computational evaluation framework. This enabled targeted enhancements in the LLM's MI performance on both extrinsic and intrinsic metrics through customized zero-shot prompts. Despite these advancements, the LLM still exhibited notable deficiencies in reflective listening and empathetic expression, particularly in complex emotional contexts. These limitations hindered its ability to build emotional resonance, foster trust, and drive self-awareness and behavioral change, ultimately constraining the depth and effectiveness of MI interactions.
Comparative analysis further revealed that an excessive reliance on informal language diminished the perceived professionalism of the LLM, whereas an overemphasis on analytical thinking exacerbated emotional detachment from clients. To mitigate these issues, the LLM should limit the use of apostrophes to enhance its professionalism, while increasing the use of impersonal pronouns or reducing articles to mitigate analytical thinking, fostering stronger relational connections with clients. Additionally, although customized prompts have significantly improved the LLM's performance in MI, there are still gaps in its MIIN behavior and structural dimensions. In terms of MIIN behavior, the LLM tended to employ direct commands, warnings, and unsolicited expressions of concern, which contravened fundamental MI principles of collaboration and client autonomy. Future research should focus on advancing prompt engineering techniques and refining model fine-tuning to address these deficiencies and further optimize the LLM's ability to adhere to MI principles. Second, this study employed machine learning-based automated evaluation methods which, while effective in reducing evaluator biases and minimizing cultural influences, fell short of comprehensively capturing interviewers' subjective experiences. Although these methods offered a standardized and scalable evaluation framework, they lacked alignment with real-world interactions and often failed to accurately reflect clients' authentic feedback or the MI quality. To overcome these limitations, future research should consider combining automated and manual evaluation approaches, thereby enhancing the robustness of the assessment process and providing a more nuanced and accurate representation of MI effectiveness. Third, the dataset utilized in this study was derived from authentic MI dialogues; however, several limitations must be acknowledged.
First, sample selection bias may result in an overrepresentation of certain groups, thereby restricting the generalizability of the findings [129]. Second, social desirability bias could lead to distorted responses from participants, potentially compromising the authenticity of the data [1]. Lastly, cultural diversity issues remain a significant challenge, introducing variability in the quality of MI services [130]. Racial and ethnic minorities (REM), along with other cultural minority groups, are disproportionately likely to prematurely terminate therapy [131] and are less likely to access high-quality care [132]. In conclusion, despite customized prompts, the LLM remains less effective than human therapists in managing MIIN behaviors due to insufficient training in nuanced therapeutic communication [133]. Enhancing its proficiency will require integrating targeted training data and refining prompt engineering to better align with MI principles [59]. Furthermore, automated evaluation methods lack the capacity to capture real-world complexities, emphasizing the importance of integrating manual approaches to enhance the precision and robustness of MI assessment. Finally, dataset biases and cultural diversity limitations constrain the generalizability of MI findings, underscoring the need for more inclusive sampling and bias-mitigation strategies in future research.

C. Theoretical and Practical Implications

1) Theoretical Significance

In terms of theoretical significance, this study developed a computational evaluation framework to identify key differences in verbal behaviors between LLMs and human therapists during MI. This framework validated and extended Miller and Rollnick's MI theoretical model within human–AI contexts, laying the foundation for the development of
more comprehensive and high-fidelity LLM behavior evaluation frameworks in psychotherapy. Secondly, since the introduction of Eliza, a simulated psychotherapist [134], in the 1960s, the concept of therapy chatbots offering mental health support has consistently attracted the interest of clinicians, researchers, and the general public. The findings of this study demonstrate that LLMs perform comparably to human therapists in MI, underscoring their groundbreaking capability to simulate human intelligence and engage in meaningful interactions. These discoveries broaden the scope of AI-human interaction research and underscore the growing significance of machine behavior and machine psychology within the field.

2) Practical Significance

In terms of practical significance, first, the study demonstrated that LLMs approached the MI performance of human therapists, highlighting their potential for mental health applications. As global demand for mental health support continues to exceed the availability of professional therapists, LLMs like ChatGPT, offering therapeutic capabilities, efficiency, cost-effectiveness, and accessibility, emerge as promising tools to help bridge the gap between the supply and demand for mental health services. Second, the computational evaluation framework developed, applied, and validated in this study offers an effective means for rapidly and objectively assessing AI performance in MI. This framework can assist regulatory bodies and professional organizations in establishing relevant standards and guidelines to ensure the quality and safety of LLM applications. Furthermore, it demonstrates how human expertise has been explicitly integrated to guide LLM behavior through human–AI collaboration. As a transparent and publicly available tool, this framework plays a critical role in guiding and enhancing the capabilities of LLMs, offering new insights into their responsible and ethical deployment in sensitive domains such as psychotherapy.
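For readers who want a concrete picture of what such automated assessment looks like in code, the sketch below trains an SVM classifier, the model family that performed best for UPQ prediction in this study, on a synthetic feature matrix standing in for per-response LIWC vectors. The data, dimensionality, and labeling rule here are placeholders of our own, not the study's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for LIWC feature vectors: rows = therapist responses,
# columns = linguistic categories; y is a binary UPQ label (1 = high UPQ).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy labeling rule

# Scale features, then fit an RBF-kernel SVM with probability estimates,
# so new responses can be scored with a high-UPQ probability.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
clf.fit(X, y)

# Score one new (synthetic) response: probability of high UPQ.
new_response_features = rng.normal(size=(1, 8))
p_high_upq = clf.predict_proba(new_response_features)[0, 1]
```

In deployment, X would come from a LIWC (or comparable) feature extractor run over each generated response, and the predicted probability would serve as the extrinsic UPQ score.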
Furthermore, the implementation of customized prompts offers practical guidance for the real-world deployment of AI systems. These prompts can enhance the professionalism and effectiveness of LLMs, ensuring alignment with MI core principles while improving the quality of interactions with clients. Finally, this study pioneers the integration of prompt engineering within an integrative modeling paradigm, validating the potential of combining predictive modeling, explanatory modeling, and field experiments in a computational social science research framework. This comprehensive approach enables precise prediction, in-depth understanding, and strategic intervention in LLM language behaviors, thereby advancing the emerging field of human-LLM interaction behavior studies.

D. Ethical Risks and Potential Biases

Before applying LLMs in the field of MI, it is critical to address several ethical considerations, including data privacy, algorithmic bias, discrimination, transparency, interpretability, and the absence of a comprehensive ethical and legal framework [135]. These ethical risks and inherent biases are particularly pronounced when evaluating the comparative capabilities of human therapists and LLMs in MI. First, data privacy is a paramount concern, as it involves sensitive personal emotions, psychological states, and private client information. When leveraging LLMs for MI, it is crucial to ensure that real-world data collection and processing adhere to privacy regulations and are safeguarded throughout the process [136]. Improper data handling could lead to the leakage of sensitive information, thereby violating clients' privacy rights. Second, algorithmic
bias and discrimination are key ethical considerations. The training data used for LLMs may contain inherent biases, which could be amplified during the model's analysis and generation process [137], [138], [139]. In the context of MI, such biases could result in misunderstandings or unfair treatment of specific groups, potentially compromising the fairness and effectiveness of the counseling process. In contrast, while human therapists may also exhibit subjective biases, their professional training equips them to mitigate these biases through self-reflection and supervision. Additionally, transparency and interpretability are crucial for building trust among stakeholders [138]. Although this study has made some progress in improving transparency and interpretability, several key ethical challenges remain unresolved. For instance, the decision-making process of LLMs is inherently complex and opaque, making it challenging for both clients and regulators to understand the rationale behind the advice provided. In contrast, human therapists can clearly articulate their counseling methods and thought processes, fostering greater understanding and trust. Finally, the absence of a comprehensive ethical and legal framework poses a significant challenge. As generative AI technologies evolve at a rapid pace, existing ethical and legal norms may fail to keep up with their applications [135]. This misalignment could create regulatory gaps, hindering efforts to effectively protect clients' rights. In contrast, human therapists operate under well-defined professional ethical guidelines and legal regulations, which provide explicit standards for their conduct.

VI. CONCLUSION

Overall, this study, based on the integrated modeling methodology, combined predictive and explanatory modeling results with the MI theoretical model to develop a UPQ-centered evaluation framework.
This framework bridges extrinsic metrics and intrinsic metrics, uncovering key intrinsic factors that influence extrinsic outcomes. Leveraging the framework's insights, customized prompts were designed to enhance GPT-4's performance in motivational interviewing. A comparative analysis revealed that while GPT-4's overall performance on intrinsic and extrinsic metrics slightly lagged behind that of human therapists, its MICO behaviors were comparable. Notably, with the implementation of customized prompts, GPT-4 demonstrated significant improvements across all metrics, approaching the performance levels of human therapists. This study offers preliminary evidence of LLMs' capabilities in MI. The findings suggest a cautiously optimistic outlook for applying advanced LLMs in MI interventions, though persistent challenges remain, particularly in analyzing complex emotions and expressing empathy. The proposed framework emphasizes the integration of human expertise into LLMs to enhance their effectiveness and support the development of trustworthy AI-based MI services. These findings provide direction for further optimizing the application of LLMs in MI and offer important theoretical and practical insights for future research.

REFERENCES

[1] W. R. Miller and S. Rollnick, Motivational Interviewing: Helping People Change. Guilford Press, 2012.
[2] T. B. Brown et al., "Language Models are Few-Shot Learners," Jul. 22, 2020, arXiv: arXiv:2005.14165. doi: 10.48550/arXiv.2005.14165.
[3] OpenAI et al., "GPT-4 Technical Report," Mar. 04, 2024, arXiv: arXiv:2303.08774. doi: 10.48550/arXiv.2303.08774.
[4] H. Touvron et al., "Llama 2: Open Foundation and Fine-Tuned Chat Models," Jul. 19, 2023, arXiv
: arXiv:2307.09288. doi: 10.48550/arXiv.2307.09288.
[5] S. Bubeck et al., Sparks of Artificial General Intelligence: Early experiments with GPT-4. 2023. doi: 10.48550/arXiv.2303.12712.
[6] X. Sun et al., "Eliciting Motivational Interviewing Skill Codes in Psychotherapy with LLMs," in 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) - Main Conference Proceedings, pp. 5609–5621, 2024.
[7] A. Hosny and H. J. W. L. Aerts, "Artificial intelligence for global health," Science, vol. 366, no. 6468, pp. 955–956, Nov. 2019, doi: 10.1126/science.aay5189.
[8] World Health Organization, "Mental Health and COVID-19: Early evidence of the pandemic's impact: Scientific brief, 2 March 2022." Accessed: Dec. 25, 2024. [Online]. Available: https://www.who.int/publications/i/item/WHO-2019-nCoV-Sci_Brief-Mental_health-2022.1
[9] M. Olfson, "Building The Mental Health Workforce Capacity Needed To Treat Adults With Serious Mental Illnesses," Health Affairs, vol. 35, pp. 983–990, Jun. 2016, doi: 10.1377/hlthaff.2015.1619.
[10] A. Sickel, J. Seacat, and N. Nabors, "Mental health stigma update: A review of consequences," Advances in Mental Health, vol. 12, pp. 202–215, Dec. 2014, doi: 10.1080/18374905.2014.11081898.
[11] World Health Organization, "Global Health Estimates 2021: Deaths by Cause, Age, Sex, by Country and by Region, 2000-2021." Accessed: Dec. 06, 2024. [Online]. Available: https://www.who.int/data/gho/data/themes/mortality-and-global-health-estimates/ghe-leading-causes-of-death
[12] J. Hamilton, "Why generative AI (LLM) is ready for mental healthcare." Accessed: Dec. 25, 2024. [Online]. Available: https://www.linkedin.com/pulse/why-generative-ai-chatgpt-ready-mental-healthcare-jose-hamilton-md
[13] E. Stade et al.
, "Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation," npj Mental Health Research, vol. 3, Apr. 2024, doi: 10.1038/s44184-024-00056-z.
[14] Youper, "Youper: Artificial Intelligence For Mental Health Care." Accessed: Dec. 25, 2024. [Online]. Available: https://www.youper.ai/
[15] A. Sharma, A. Miner, D. Atkins, and T. Althoff, A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support. 2020. doi: 10.48550/arXiv.2009.08441.
[16] A. Sharma et al., Cognitive Reframing of Negative Thoughts through Human-Language Model Interaction. 2023. doi: 10.48550/arXiv.2305.02466.
[17] I. W. Lin, A. Sharma, C. M. Rytting, A. S. Miner, J. Suh, and T. Althoff, "IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction," Feb. 19, 2024, arXiv: arXiv:2402.12556. doi: 10.48550/arXiv.2402.12556.
[18] A. Sharma, K. Rushton, I. Lin, T. Nguyen, and T. Althoff, Facilitating Self-Guided Mental Health Interventions Through Human-Language Model Interaction: A Case Study of Cognitive Restructuring. 2024, p. 29. doi: 10.1145/3613904.3642761.
[19] N. Al-Sibai, "OpenAI Employee Says She's Never Tried Therapy But ChatGPT Is Pretty Much a Replacement For It." Accessed: Dec. 25, 2024. [Online]. Available: https://futurism.com/the-byte/openai-employee-chatgpt-therapy
[20] Cairo-TenThirteen, "Using ChatGPT as a therapist?," r/ChatGPTPro. Accessed: Dec. 25, 2024. [Online]. Available: www.reddit.com/r/ChatGPTPro/comments/126rtvb/using_chatgpt_as_a_therapist/
[21] Mike2800, "ChatGPT is better than my therapist, holy shit.," r/ChatGPT. Accessed: Dec. 25, 2024. [Online]. Available: www.reddit.com/r/ChatGPT/comments/zr5e17/chatgpt_is_better_than_my_therapist_holy_shit/
[22] M. D. Choudhury,
S. R. Pendse, and N. Kumar, "Benefits and Harms of Large Language Models in Digital Mental Health," Nov. 07, 2023, arXiv: arXiv:2311.14693. doi: 10.48550/arXiv.2311.14693.
[23] S. Tate, S. Fouladvand, J. H. Chen, and C.-Y. A. Chen, "The ChatGPT therapist will see you now: Navigating generative artificial intelligence's potential in addiction medicine research and patient care," Addiction, vol. 118, no. 12, pp. 2249–2251, Dec. 2023, doi: 10.1111/add.16341.
[24] D. Veen et al., "Adapted large language models can outperform medical experts in clinical text summarization," Nature Medicine, vol. 30, pp. 1–9, Feb. 2024, doi: 10.1038/s41591-024-02855-5.
[25] A. T. Beck, Cognitive therapy and the emotional disorders. Oxford, England: International Universities Press, 1976, p. 356.
[26] J. Wright et al., "Computer-Assisted Cognitive-Behavior Therapy for Depression: A Systematic Review and Meta-Analysis," The Journal of Clinical Psychiatry, vol. 80, Mar. 2019, doi: 10.4088/JCP.18r12188.
[27] A. Belz and E. Reiter, "Comparing Automatic and Human Evaluation of NLG Systems," in 11th Conference of the European Chapter of the Association for Computational Linguistics, D. McCarthy and S. Wintner, Eds., Trento, Italy: Association for Computational Linguistics, Apr. 2006, pp. 313–320. Accessed: Dec. 06, 2024. [Online]. Available: https://aclanthology.org/E06-1040
[28] J. Novikova, O. Dušek, A. Cercas Curry, and V. Rieser, "Why We Need New Evaluation Metrics for NLG," in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark: Association for Computational Linguistics, 2017, pp. 2241–2252. doi: 10.18653/v1/D17-1238.
[29] J. L. Cochran and N. H. Cochran, The Heart of Counseling. Routledge, 2015. doi: 10.4324/9781315884066.
[30] N. Flemotomos, V. Martinez, Z. Chen, T. Creed, D. Atkins, and S.
Narayanan, "Automated quality assessment of cognitive behavioral therapy sessions through highly contextualized language representations," PLOS ONE, vol. 16, p. e0258639, Oct. 2021, doi: 10.1371/journal.pone.0258639.
[31] T. M. Vallis, B. F. Shaw, and K. S. Dobson, "The Cognitive Therapy Scale: Psychometric properties," Journal of Consulting and Clinical Psychology, vol. 54, no. 3, pp. 381–385, Jun. 1986, doi: 10.1037/0022-006X.54.3.381.
[32] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics - ACL '02, Philadelphia, Pennsylvania: Association for Computational Linguistics, 2001, p. 311. doi: 10.3115/1073083.1073135.
[33] C. Callison-Burch, M. Osborne, and P. Koehn, "Re-evaluating the Role of Bleu in Machine Translation Research," in 11th Conference of the European Chapter of the Association for Computational Linguistics, D. McCarthy and S. Wintner, Eds., Trento, Italy: Association for Computational Linguistics, Apr. 2006, pp. 249–256. Accessed: Dec. 06, 2024. [Online]. Available: https://aclanthology.org/E06-1032
[34] F. Doshi-Velez and B. Kim, "Towards A Rigorous Science of Interpretable Machine Learning," Mar. 02, 2017, arXiv: arXiv:1702.08608. doi: 10.48550/arXiv.1702.08608.
[35] J. M. Hofman et al., "Integrating explanation and prediction in computational social science," Nature, vol. 595, no. 7866, pp. 181
–188, Jul. 2021, doi: 10.1038/s41586-021-03659-0.
[36] S. Rollnick and W. R. Miller, "What is Motivational Interviewing?," Behav. Cogn. Psychother., vol. 23, no. 4, pp. 325–334, Oct. 1995, doi: 10.1017/S135246580001643X.
[37] J. Hettema, J. Steele, and W. R. Miller, "Motivational Interviewing," Annu. Rev. Clin. Psychol., vol. 1, no. 1, pp. 91–111, Apr. 2005, doi: 10.1146/annurev.clinpsy.1.102803.143833.
[38] O. Akinrolie, S. Strachan, S. Webber, H. Chan, K. Messner, and R. Barclay, "Counsellors' verbal behaviours and skills that elicit participants' change or sustain talk in virtual motivational interviewing for physical activity among older adults," Applied Psychology: Health and Well-Being, vol. 16, May 2024, doi: 10.1111/aphw.12551.
[39] J. Gaume, N. Bertholet, M. Faouzi, G. Gmel, and J.-B. Daeppen, "Counselor motivational interviewing skills and young adult change talk articulation during brief motivational interventions," Journal of Substance Abuse Treatment, vol. 39, no. 3, pp. 272–281, Oct. 2010, doi: 10.1016/j.jsat.2010.06.010.
[40] G. A. Bennett, H. A. Roberts, T. E. Vaughan, J. A. Gibbins, and L. Rouse, "Evaluating a method of assessing competence in Motivational Interviewing: A study using simulated patients in the United Kingdom," Addictive Behaviors, vol. 32, no. 1, pp. 69–79, Jan. 2007, doi: 10.1016/j.addbeh.2006.03.022.
[41] T. B. Moyers, L. N. Rowell, J. K. Manuel, D. Ernst, and J. M. Houck, "The Motivational Interviewing Treatment Integrity Code (MITI 4): Rationale, Preliminary Reliability and Validity," Journal of Substance Abuse Treatment, vol. 65, pp. 36–42, Jun. 2016, doi: 10.1016/j.jsat.2016.01.001.
[42] L. Kramer Schmidt, K. Andersen, A. S. Nielsen, and T. B. Moyers, "Lessons learned from measuring fidelity with the Motivational Interviewing Treatment Integrity code (MITI 4)," Journal of Substance Abuse Treatment, vol. 97, pp. 59–67, Feb. 2019, doi: 10.1016/j.jsat.2018.11.004.
[43] J. R. Thyrian et al.
, "Adherence to the principles of Motivational Interviewing, clients' characteristics and behavior outcome in a smoking cessation and relapse prevention trial in women postpartum," Addictive Behaviors, vol. 32, no. 10, pp. 2297–2303, Oct. 2007, doi: 10.1016/j.addbeh.2007.01.024.
[44] S. Kohler and A. Hofmann, "Can Motivational Interviewing in Emergency Care Reduce Alcohol Consumption in Young People? A Systematic Review and Meta-analysis," Alcohol and Alcoholism, vol. 50, no. 2, pp. 107–117, Mar. 2015, doi: 10.1093/alcalc/agu098.
[45] M. B. Madson et al., "Measuring client perceptions of motivational interviewing: factor analysis of the Client Evaluation of Motivational Interviewing scale," Journal of Substance Abuse Treatment, vol. 44, no. 3, pp. 330–335, Mar. 2013, doi: 10.1016/j.jsat.2012.08.015.
[46] J. Houck, "Motivational Interviewing Skill Code (MISC) 2.1," vol. 2.
[47] W. R. Miller and S. Rollnick, "The effectiveness and ineffectiveness of complex behavioral interventions: Impact of treatment fidelity," Contemporary Clinical Trials, vol. 37, no. 2, pp. 234–241, Mar. 2014, doi: 10.1016/j.cct.2014.01.005.
[48] L. Wallace and F. Turner, "A Systematic Review of Psychometric Evaluation of Motivational Interviewing Integrity Measures," Journal of Teaching in the Addictions, vol. 8, no. 1–2, pp. 84–123, Nov. 2009, doi: 10.1080/15332700903396655.
[49] T. P. Nagarhalli, V. Vaze, and N. K. Rana, "Impact of Machine Learning in Natural Language Processing:
A Review," in 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Feb. 2021, pp. 1529–1534. doi: 10.1109/ICICV50876.2021.9388380.
[50] N. Nazareth and Y. V. Ramana Reddy, "Financial applications of machine learning: A literature review," Expert Systems with Applications, vol. 219, p. 119640, Jun. 2023, doi: 10.1016/j.eswa.2023.119640.
[51] J. Xiao, J. Wang, W. Bao, T. Deng, and S. Bi, "Application progress of natural language processing technology in financial research," Financial Engineering and Risk Management, vol. 7, no. 3, pp. 155–161, Jun. 2024, doi: 10.23977/ferm.2024.070320.
[52] D. Kucak, V. Juricic, and G. Dambic, "Machine Learning in Education - a Survey of Current Research Trends," in DAAAM Proceedings, 1st ed., vol. 1, B. Katalinic, Ed., DAAAM International Vienna, 2018, pp. 0406–0410. doi: 10.2507/29th.daaam.proceedings.059.
[53] P. J. Hershberger et al., "Advancing Motivational Interviewing Training with Artificial Intelligence: ReadMI," AMEP, vol. 12, pp. 613–618, Jun. 2021, doi: 10.2147/AMEP.S312373.
[54] Z. E. Imel et al., "Design feasibility of an automated, machine-learning based feedback system for motivational interviewing," Psychotherapy, vol. 56, no. 2, pp. 318–328, Jun. 2019, doi: 10.1037/pst0000221.
[55] A. Vaswani et al., "Attention is All you Need," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2017. Accessed: Dec. 10, 2024. [Online]. Available: https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
[56] C. Ziems, W. Held, O. Shaikh, J. Chen, Z. Zhang, and D. Yang, "Can Large Language Models Transform Computational Social Science?," Computational Linguistics, vol. 50, no. 1, pp. 237–291, Mar. 2024, doi: 10.1162/coli_a_00502.
[57] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language Models are Unsupervised Multitask Learners," 2019.
Accessed: Dec. 10, 2024. [Online]. Available: https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe
[58] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 2018. doi: 10.48550/arXiv.1810.04805.
[59] Y. Hua et al., "Large Language Models in Mental Health Care: a Scoping Review," Aug. 21, 2024, arXiv: arXiv:2401.02984. doi: 10.48550/arXiv.2401.02984.
[60] Z. Ma, Y. Mei, and Z. Su, "Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support," Jul. 28, 2023, arXiv: arXiv:2307.15810. doi: 10.48550/arXiv.2307.15810.
[61] G. Cameron et al., "Assessing the Usability of a Chatbot for Mental Health Care," 2019, pp. 121–132. doi: 10.1007/978-3-030-17705-8_11.
[62] Z. Guo, A. Lai, J. H. Thygesen, J. Farrington, T. Keen, and K. Li, "Large Language Model for Mental Health: A Systematic Review," Feb. 18, 2024. doi: 10.2196/preprints.57400.
[63] F. Farhat, ChatGPT as a Complementary Mental Health Resource: A Boon or a Bane. 2023. doi: 10.20944/preprints202307.1479.v1.
[64] D. Ganguli et al., "Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned," Nov. 22, 2022, arXiv: arXiv:2209.07858. doi: 10.48550/arXiv.2209.07858.
[65] I. Augenstein et al., "Factuality Challenges in the Era of Large Language Models," Oct. 10, 2023, arXiv: arXiv:2310.05189. doi:
10.48550/arXiv.2310.05189.
[66] B. Oakley, A. Knafo-Noam, and M. Mcgrath, "Pathological Altruism - An Introduction," Pathological Altruism, Dec. 2011, doi: 10.1093/acprof:oso/9780199738571.003.0014.
[67] Q. X and W. B, "AI's Role in Improving Social Connection and Oral Health for Older Adults: A Synergistic Approach," JDR Clinical and Translational Research, vol. 9, no. 3, Jul. 2024, doi: 10.1177/23800844231223097.
[68] J. Edu, C. Mulligan, F. Pierazzi, J. Polakis, G. Suarez-Tangil, and J. Such, "Exploring the security and privacy risks of chatbots in messaging services," in IMC 2022 - Proceedings of the 2022 ACM Internet Measurement Conference, pp. 581–588, Oct. 2022, doi: 10.1145/3517745.3561433.
[69] M. A. Pimentel, C. Christophe, T. Raha, P. Munjal, P. K. Kanithi, and S. Khan, "Beyond Metrics: A Critical Analysis of the Variability in Large Language Model Evaluation Frameworks," Jul. 29, 2024, arXiv: arXiv:2407.21072. doi: 10.48550/arXiv.2407.21072.
[70] Y. Chang et al., "A Survey on Evaluation of Large Language Models," ACM Trans. Intell. Syst. Technol., vol. 15, no. 3, pp. 39:1-39:45, 2024, doi: 10.1145/3641289.
[71] J. Zhou et al., "Instruction-Following Evaluation for Large Language Models," Nov. 14, 2023, arXiv: arXiv:2311.07911. doi: 10.48550/arXiv.2311.07911.
[72] K. Peng, R. E. Nisbett, and N. Y. C. Wong, "Validity problems comparing values across cultures and possible solutions," Psychological Methods, vol. 2, no. 4, pp. 329–344, 1997, doi: 10.1037/1082-989X.2.4.329.
[73] A. Clark and S. Lappin, "The Handbook of Computational Linguistics and Natural Language Processing," 2010, pp. 197–220. doi: 10.1002/9781444324044.ch8.
[74] L. Chen et al., "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators," Oct. 11, 2023, arXiv: arXiv:2310.07289. doi: 10.48550/arXiv.2310.07289.
[75] J. C. Eichstaedt et al.
, "The emotional and mental health impact of the murder of George Floyd on the US population," Proc. Natl. Acad. Sci. U.S.A., vol. 118, no. 39, p. e2109139118, Sep. 2021, doi: 10.1073/pnas.2109139118.
[76] S. Moore, D. N. Amatya, M. M. Chu, and A. D. Besterman, "Catatonia in autism and other neurodevelopmental disabilities: a state-of-the-art review," npj Mental Health Res, vol. 1, no. 1, p. 12, Sep. 2022, doi: 10.1038/s44184-022-00012-9.
[77] J. C. Eichstaedt et al., "Facebook language predicts depression in medical records," Proc. Natl. Acad. Sci. U.S.A., vol. 115, no. 44, pp. 11203–11208, Oct. 2018, doi: 10.1073/pnas.1802331115.
[78] R. M. Foxx, "The maintenance of behavioral change: The case for long-term follow-ups," American Psychologist, vol. 68, no. 8, pp. 728–736, 2013, doi: 10.1037/a0033713.
[79] T. A. Koleck, C. Dreisbach, P. E. Bourne, and S. Bakken, "Natural language processing of symptoms documented in free-text narratives of electronic health records: a systematic review," Journal of the American Medical Informatics Association, vol. 26, no. 4, pp. 364–379, Apr. 2019, doi: 10.1093/jamia/ocy173.
[80] N. J. Carson et al., "Identification of suicidal behavior among psychiatrically hospitalized adolescents using natural language processing and machine learning of electronic health records," PLoS ONE, vol. 14, no. 2, p. e0211116, Feb. 2019, doi:
https://arxiv.org/abs/2505.17380v1
arXiv:2505.17387v1 [cs.CL] 23 May 2025

WiNGPT-3.0 Technical Report

WiNGPT Team∗
Winning Health AI Research
Correspondence to wair@winning.com.cn

ABSTRACT

Current Large Language Models (LLMs) exhibit significant limitations, notably in structured, interpretable, and verifiable medical reasoning, alongside practical deployment challenges related to computational resources and data privacy. This report describes the development of WiNGPT-3.0, a 32-billion-parameter LLM engineered to enhance medical reasoning and to explore its potential for effective integration within healthcare IT infrastructures. The broader aim is to advance towards clinically applicable models. The approach involved a multi-stage training pipeline tailored for general, medical, and clinical reasoning. This pipeline incorporated supervised fine-tuning (SFT) and reinforcement learning (RL), leveraging curated Long Chain-of-Thought (CoT) datasets, auxiliary reward models, and an evidence-based diagnostic chain simulation. WiNGPT-3.0 demonstrated strong performance: specific model variants achieved scores of 66.6 on MedCalc and 87.1 on MedQA-USMLE. Furthermore, targeted training improved performance on a clinical reasoning task from a baseline score of 58.1 to 62.5. These findings suggest that reinforcement learning, even when applied with a limited dataset of only a few thousand examples, can enhance medical reasoning accuracy. Crucially, this demonstration of RL's efficacy with limited data and computation paves the way for more trustworthy and practically deployable LLMs within clinical workflows and health information infrastructures.

Keywords: Large Language Models · Reinforcement Learning · Medical Reasoning · Evidence-Based Reasoning

1 Introduction

Current Large Language Models (LLMs) demonstrate significant limitations when performing medical reasoning that must be structured, interpretable, and verifiable.
While LLMs effectively process large volumes of text, they lack the practical experience and complex diagnostic skills, such as clinical thinking, that human clinicians develop. Furthermore, their reasoning often relies on pattern recognition instead of dynamic, multi-factor clinical assessment; consequently, this reasoning becomes difficult to trace and validate. Deploying LLMs in real-world healthcare settings adds several hurdles. High computational demands frequently clash with the limited hardware budgets of many institutions, while stringent privacy regulations often require on-premise data processing. Consequently, providers must balance model accuracy, cost, and accessibility to deliver equitable care. Large reasoning models also tend to hallucinate, producing plausible yet incorrect content, which poses serious clinical risks. Moreover, medical knowledge and guidelines evolve rapidly; without frequent and costly retraining, an LLM's knowledge base quickly becomes outdated. Adapting LLMs to diverse regional or institutional medical standards presents an additional complex challenge. Such concerns are not merely our observations but a consensus voiced by the healthcare professionals who collaborate with us.

To overcome these obstacles, we introduce WiNGPT, a medical-focused LLM series begun in 2023. The latest version, WiNGPT-3.0, builds on its predecessor WiNGPT-2.8 and employs the 32-billion-parameter Qwen-2.5 architecture [1]. This scale strikes a balance between robust reasoning power and practical feasibility for on-site deployment. Reinforcement learning on curated clinical case collections encourages the model to emulate clinical thinking.

∗Authors are listed in section 8

WiNGPT-3.0 is tightly integrated into the WiNEX Health Information System, where it assists clinicians through a triad of safeguards: physician review, structured knowledge bases, and customizable rule templates.
These mechanisms improve clinical appropriateness, accuracy, and interpretability, thereby supporting evidence-based
practice. WiNGPT-3.0's development prioritizes deep integration into hospital workflows. Clinicians, administrators, and technical teams achieve this integration through a collaborative process: together, they define requirements, manage system integration, and fine-tune the model using institution-specific data and operational processes. Furthermore, several features ensure alignment with specific institutional standards and operational processes: pre-configured EMR templates, established rules, and continuous learning from user interactions. This deeply integrated and customizable approach improves the practical utility, reliability, and regulatory compliance of LLMs in clinical settings. A case study illustrating WiNGPT-3.0's diagnostic-support capabilities, complete with evidence-based search integration, appears in Appendix C.

2 Related Work

LLMs can significantly enhance their reasoning capabilities by synergistically combining long chain-of-thought (Long-COT) reasoning with reinforcement learning (RL). However, manually annotating the necessary Long-COT data for complex tasks is often prohibitively expensive, posing a critical challenge. To address this data-generation bottleneck, researchers are exploring knowledge distillation techniques, applying them to strong foundation models such as DeepSeek-R1 [2], a leading open-source model known for its reasoning capabilities, to extract high-quality Long-COT datasets. In parallel, to improve training efficiency, algorithms like GRPO [3] have substantially reduced the computational complexity and hardware requirements of RL training, making such training more feasible in resource-constrained environments. Beyond these approaches focused on data generation and training optimization for specific capabilities, techniques such as model merging constitute an actively evolving area.
This methodology, generally implemented during the post-training phase [4], offers a complementary strategy: it combines the parameters of several models, each previously fine-tuned for distinct domain knowledge, to create a more versatile and capable resultant model. Recent general-domain research challenges the widely held assumption of a strong positive correlation between parameter scale and model performance. For instance, 32-billion-parameter models such as Skywork-OR1-32B-Preview [5] and QWQ-32B [6] now achieve performance in mathematical reasoning and code generation comparable to significantly larger models like DeepSeek-R1. This success is largely attributed to multi-stage RL training strategies. Such advancements underscore the potential of 'compact models' and create opportunities for the substantive integration of medical domain expertise and the adaptation of these models to clinical scenarios. The objective is for such adapted models to maintain high performance while demonstrating enhanced domain-specific comprehension. Illustrating this trend in the medical domain, the Huatuogpt-O1 model [7] utilizes RL training augmented by a dynamic scoring mechanism leveraging a GPT-4o validator; this combination has demonstrated strong performance on verifiable medical reasoning tasks. Nevertheless, this approach faces practical challenges, primarily prohibitive validator costs and system-stability risks. To address such issues, more recent methods like Search-R1 [8] and RAGEN [9] integrate knowledge-base retrieval mechanisms with RL training paradigms. This integration enables models to autonomously decide when to invoke external knowledge; it not only substantially enhances model performance but also effectively mitigates the propensity for hallucination common in LLMs. Consequently, these advancements indicate a promising technical pathway for constructing reliable medical AI systems.
3 Key Features

• Achieves state-of-the-art performance in medical-specific reasoning with a 32B-parameter model.
• Implements a data generation methodology that integrates
key techniques such as distillation, reward modelling, and chain-of-thought prompting to efficiently produce high-quality medical datasets under resource constraints.
• Introduces a novel reinforcement learning approach tailored for complex clinical reasoning scenarios, including clinical diagnosis simulation and evidence-based search integration.
• Develops the WiNGPT-3.0 benchmark for rigorous evaluation of advanced clinical reasoning capabilities.

4 Method

We constructed a large-scale dataset comprising nearly 2 million questions. This dataset encompasses multiple categories (including mathematics, programming, general knowledge, and healthcare) and is provided in both Chinese and English. To generate data suitable for SFT and RL training, we employed a multi-stage data processing pipeline, which included techniques such as long chain-of-thought (Long-COT) answer generation, data filtering and classification, and data sampling. Figure 1 illustrates the overall workflow from top to bottom. Furthermore, to support these data processing efforts, we developed and trained three specialized auxiliary models: a Preference-based Reward Model, a Verifier-based Reward Model, and a Think-Tracing Model. Section 4.4 provides further details on these auxiliary models.

4.1 Data Curation

Long Chain-of-Thought Answer Construction: To acquire long chains of thought (Long-COT), we used DeepSeek-R1 [2] for distillation, generating candidate responses for each question. For some medical datasets containing answers but lacking thought processes (approximately 24K), we utilized the Think-Tracing model to generate candidate chains of thought.

Data Filtering and Classification: Considering the verifiability of results, we divided the dataset into two categories: a verifiable dataset (approximately 0.8M) and an unverifiable dataset (approximately 1.2M).
During dataset review, we focused on two aspects. First, we filtered candidate responses for N-gram repetitiveness and for format errors based on <think>/</think> tags. Second, for the verifiable dataset, we selected samples based on the correctness of the answers; for the unverifiable dataset, we used the reward model to evaluate answers and selected data according to the evaluation scores. After filtering, we categorized the data by difficulty into three classes: Basic (low knowledge density or complexity; requires only basic common sense or simple concepts; direct reasoning with clear steps); Intermediate (moderate knowledge density or complexity; requires professional knowledge, theories, or formula support; involves multi-step logical deduction); and Advanced (high knowledge density or complexity; requires deep professional knowledge and background theory; involves interdisciplinary integration, complex analysis, or innovative thinking).

Data Sampling: From the Advanced category, we selected 60,000 samples across different domains as the candidate dataset for the reinforcement learning phase. For the SFT training set, we sampled based on difficulty, giving higher sampling ratios to harder samples while ensuring balanced diversity across languages and task categories.

SFT Data: After sampling, we obtained approximately 1.4 million training entries in total: general knowledge data accounted for 50%, mathematical data 18%, programming data 14%, and medical data 18%. This distribution ensures broad domain coverage while fully covering key professional knowledge.

RL Data: Using the supervised fine-tuned model, we generated N responses for each question in the candidate dataset, removing samples where the model answered either all correctly or all incorrectly.
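The N-gram repetitiveness and <think>/</think> format filters used during dataset review are not specified in detail. A minimal sketch, assuming a 4-gram window and an illustrative repetition threshold of 0.2 (both values are our assumptions, not the paper's):

```python
from collections import Counter

def ngram_repetition_ratio(text: str, n: int = 4) -> float:
    """Fraction of n-grams in `text` that are duplicates (0.0 = no repetition)."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    return sum(c - 1 for c in counts.values()) / len(ngrams)

def has_valid_think_format(response: str) -> bool:
    """Require exactly one <think> ... </think> block, in the right order."""
    return (response.count("<think>") == 1
            and response.count("</think>") == 1
            and response.index("<think>") < response.index("</think>"))

def keep_candidate(response: str, max_repetition: float = 0.2) -> bool:
    """Combined filter: drop malformed or highly repetitive candidate responses."""
    return (has_valid_think_format(response)
            and ngram_repetition_ratio(response) <= max_repetition)
```

In practice such filters would run over each distilled candidate response before the correctness- or reward-based selection described above.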
Subsequently, we sampled questions in ascending order of the number of correct responses among the N generations. The final RL dataset consists of:

1. A verifiable dataset of 8.3K entries, primarily including mathematics, medical calculations, and medical quality control;
2. An unverifiable dataset of 7K entries, mainly comprising medical Q&A, medical diagnosis, medical record generation, medical safety, and general Q&A;
3. A clinical thinking dataset of 1.5K entries, aimed at training the model to simulate doctors' clinical diagnostic reasoning, thereby assisting clinical decision-making.

4.2 Training Pipeline

Figure 1: WiNGPT-3.0 Workflow

S1 Supervised Fine-Tuning: We performed supervised fine-tuning on the Qwen2.5-32B model [1] using the standard cross-entropy loss function. This phase involved two epochs on the fine-tuning dataset under a 16K context-length configuration.

S2 Reinforcement Learning for General Reasoning: The primary goal of this stage was to enhance WiNGPT's logical reasoning capabilities in medical scenarios. We employed a rule-based GRPO [3] algorithm to update model parameters using the verifiable dataset. During training, we encouraged the model to prioritize shorter responses. To achieve this, we introduced a length penalty mechanism: no penalty was applied if the response was correct and under 8K tokens, while responses exceeding 8K tokens incurred a cosine-based length penalty with a maximum value of 0.5. After completing this phase, we explored an additional approach to improve the model's balance of general and medical knowledge: we merged WiNGPT-3.0-S2 with QwQ-32B [6] at a weight ratio of 1:1 before applying the subsequent medical RL.

S3 Reinforcement Learning for Medical Alignment: The medical reinforcement learning stage aimed to improve the model's capability and stability in diverse medical scenarios.
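The exact shape of the cosine-based length penalty used in the general-reasoning RL stage is not given. One plausible sketch, assuming the penalty ramps from 0 at the 8K-token budget to its stated 0.5 maximum at an assumed 16K hard cap (the budget and cap values are our guesses):

```python
import math

MAX_PENALTY = 0.5     # maximum penalty stated in the text
BUDGET = 8_192        # assumed "8K tokens" free budget
HARD_CAP = 16_384     # assumed point where the penalty saturates

def length_penalty(num_tokens: int, correct: bool) -> float:
    """Cosine-based length penalty subtracted from the reward (sketch)."""
    if correct and num_tokens <= BUDGET:
        return 0.0  # correct and short enough: no penalty
    overflow = min(max(num_tokens - BUDGET, 0), HARD_CAP - BUDGET)
    frac = overflow / (HARD_CAP - BUDGET)
    # half-cosine ramp: 0 at BUDGET, MAX_PENALTY at HARD_CAP
    return MAX_PENALTY * (1.0 - math.cos(math.pi * frac)) / 2.0
```

The half-cosine ramp penalizes mild overruns gently and saturates smoothly, which avoids a sharp reward cliff at the budget boundary.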
To enhance general-domain performance, a small amount of general-domain Q&A data was incorporated during training. We utilized a model-based reward mechanism with reference answers: each question was paired with a reference answer, and the Verifier-Based Reward Model (VRM) was prompted to score the model's response against this reference. This approach allowed flexible handling of tasks where rule-based rewards were impractical. By the end of this stage, we had trained two models, WiNGPT-3.0-S3 and WiNGPT-3.0-S3-Merged, derived from WiNGPT-3.0-S2 and WiNGPT-3.0-S2-Merged, respectively.

S4 Reinforcement Learning for Clinical Reasoning: This stage focused on strengthening the model's clinical decision-making ability by integrating a knowledge-base interaction mechanism to improve diagnostic accuracy. Training samples were constructed from real-world clinical diagnostic data, requiring the model to retrieve and reason over knowledge-base information during diagnosis. The training adopted the same reward mechanism as the medical reinforcement learning stage, with the VRM evaluating the logical completeness of the diagnostic process and the accuracy of its conclusions [10].

4.3 Evidence-Based Diagnostic Chain

In medical education and training, evidence-based thinking plays a crucial role: diagnosis and treatment decisions are applied to patients only after supporting evidence has been identified and thoroughly discussed and evaluated. In reinforcement learning, this is analogous to mathematical problems, where the model must provide the final answer in a specified format (e.g., within a designated box), enabling
reliable rule-based verification of correctness. Based on this concept, we simulated a clinical diagnostic workflow, as shown in Figure 2, and constructed an evidence-based auxiliary diagnosis dataset for training clinical reasoning models. For this purpose, we selected comprehensive data from an electronic medical record (EMR) database, including key components of patient records: chief complaint, present illness, medical history, physical examination, relevant auxiliary tests, and final diagnostic results. Inspired by the work of [10], we designed a multi-step auxiliary diagnostic pathway consisting of three primary stages: initial consultation summary, information requirements, and diagnosis. We first generated the initial consultation summary, incorporating the patient's chief complaint, present illness, and medical history. Additional data, such as physical examination and auxiliary tests, were organized into a separate examination module. The reasoning model evaluates whether an auxiliary diagnostic conclusion can be derived from the initial consultation summary. If an auxiliary diagnosis cannot be determined, additional relevant examinations are required. To facilitate this process, we designed an autonomous agent that queries the necessary data and provides the corresponding examination results. This iterative process repeats until a final auxiliary diagnosis is successfully generated.

4.4 Auxiliary Models

Preference-Based Rewards: Our reward model, WiNGPT-32B-RM, is initialized from the WiNGPT-2.8-32B model (itself based on Qwen-2.5-32B [1]). We modified its final output layer to a linear head that directly outputs a single scalar logit, which quantifies the model's preference strength for a given "prompt + response" pair. To train WiNGPT-32B-RM, we employ the Scaled Bradley-Terry (BT) Loss [11], optimizing the model on human-annotated pairwise comparison data.
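The Scaled Bradley-Terry loss is only named here, not defined. One common reading is to weight the standard BT objective by a per-pair preference-magnitude scale; the sketch below follows that assumption (the `scale` argument and its use are our guess, not the paper's definition):

```python
import math

def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def bt_loss(r_chosen: float, r_rejected: float) -> float:
    """Standard Bradley-Terry loss: -log P(chosen preferred over rejected)."""
    return -math.log(_sigmoid(r_chosen - r_rejected))

def scaled_bt_loss(r_chosen: float, r_rejected: float,
                   scale: float = 1.0) -> float:
    """Assumed scaled variant: weight each pair's loss by an annotated
    preference magnitude, so clear-cut preferences push the margin harder."""
    return scale * bt_loss(r_chosen, r_rejected)
```

Under this reading, pairs with a strong annotated preference contribute larger gradients than near-ties.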
The introduction of a scaling factor in this loss function enhances the model's ability to distinguish subtle variations in human preference, improves training stability, and yields more accurately calibrated reward signals. Our internal experiments indicated that this approach performs slightly better than the standard BT loss [12]. To train WiNGPT-32B-RM, we assembled a comprehensive training corpus by combining internally curated data with existing general-purpose and domain-specific preference datasets. This strategy aims to ensure both strong medical alignment and broad general reward-modelling capability. Our internal contribution consists of 100K meticulously selected, high-quality medical preference pairs. For each prompt in this internal dataset, a preferred response was selected by annotators from multiple model generations, and a corresponding less-preferred (negative) response was sampled from generations of an open-source model. In addition to this internal set, the training data for WiNGPT-32B-RM incorporates:

• General-purpose preference datasets: HelpSteer3 [13] and INF-ORM-Preference-Magnitude-80K [14].
• Medical-specific preference annotations from publicly available evaluation datasets: MedAlign-ModelEval (36K model-generated preference annotations) and MedAlign-HumanEval (11K human-generated preference annotations), which provide valuable domain-specific training signals.

Figure 2: Simulating the reasoning process through a clinical study case

Finally, we evaluated WiNGPT-32B-RM on two distinct benchmarks: MedRewardBench, an internally developed benchmark specifically targeting medical preference alignment, and the publicly available RewardBench, which measures general reward-model performance. WiNGPT-32B-RM achieved a score of 96.4 on MedRewardBench and 92.4 on RewardBench, demonstrating strong performance in both domain-specific and general contexts.

Think-Tracing: To efficiently leverage verifiable
https://arxiv.org/abs/2505.17387v1
data that lacks a chain of thought, we extracted healthcare data containing high-quality long-CoT annotations, using questions and answers as inputs and the chain of thought as outputs, forming a 20K-sample training set for the thought-trajectory model based on WiNGPT-2.8. This method is particularly valuable for creating training data from out-of-domain, verifiable data, and for converting existing non-reasoning datasets or model outputs into rich, traceable reasoning datasets, greatly enhancing scalability in data creation. Most of this manually verified data still requires experimental validation to confirm its effectiveness.

Verifier-Based Rewards: We conducted high-quality answer sampling on an unverifiable dataset, collecting 90K single-turn dialogues and 7K multi-turn dialogues. The processing workflow consists of the following steps. First, we used answers generated by DeepSeek-R1 [2] as reference answers, removing all CoT content. Concurrently, we adopted a random-selection strategy to generate evaluation answers using both the fine-tuned model and the model from the reasoning reinforcement-learning phase, with CoT content similarly removed. We then constructed a question set through carefully designed prompts and employed DeepSeek-R1 [2] to perform differentiated evaluations between reference answers and evaluation answers, implementing a binary scoring mechanism (0/1) for quality assessment. The high-quality outputs devoid of CoT content were subsequently used as training data. Through two epochs of fine-tuning, we established a Verifier-Based Rewards Model (VRM). Throughout this process, we maintained concise answer outputs, with all CoT content systematically removed during preprocessing.

4.5 Training Configuration

During the supervised fine-tuning (SFT) phase, we employed the ZeRO-3 [15] memory-optimization strategy within the DeepSpeed [16] framework.
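As a rough illustration, a ZeRO-3 setup of the kind DeepSpeed consumes can be expressed as a configuration dictionary; the specific values below are assumptions for demonstration, not the settings actually used in this work:

```python
# Illustrative DeepSpeed ZeRO-3 configuration fragment. All values are
# assumptions for demonstration; the report does not publish its config.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
    "bf16": {"enabled": True},
    "zero_optimization": {
        # Stage 3 partitions optimizer state, gradients, AND parameters
        # across data-parallel ranks, minimizing per-GPU memory.
        "stage": 3,
        "overlap_comm": True,
    },
}

# DeepSpeed would typically be initialized with something like:
#   model_engine, optimizer, _, _ = deepspeed.initialize(
#       model=model, config=ds_config, model_parameters=model.parameters())
print(ds_config["zero_optimization"]["stage"])
```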
For optimization in this phase, the AdamW optimizer was configured with an initial learning rate of 1×10⁻⁵, combined with a linear warmup over 500 training steps followed by a cosine annealing schedule that decayed the learning rate to 10% of its initial value. The model underwent SFT for a total of 2 epochs. Subsequently, for the RL phase, we utilized the GRPO [3] algorithm, implemented via the Verl [17] framework. The RL training employed a per-step batch size of 128 samples, with 12 rollouts generated per prompt.

5 Evaluation

5.1 WiNGPT-3.0 Benchmark

The WiNGPT-3.0 Benchmark is a multi-task evaluation benchmark designed to assess the comprehensive capabilities of LLMs on medical and high-level cognitive tasks. The benchmark framework aims to reflect models' reasoning, computation, and language-understanding abilities in real-world clinical applications, covering multiple dimensions from professional medical knowledge application to complex mathematical problem-solving. The design of the WiNGPT-3.0 Benchmark follows these principles:

• Clinical Relevance: Ensures that the selected tasks are highly aligned with real-world medical decision-making, covering key aspects of daily clinical workflows.
• Professionalism and Challenge: Incorporates highly specialized tasks that require professional clinical knowledge and reasoning capabilities.
• Cross-task Generalization Assessment: Tests models' generalization and task-transfer abilities by combining various types of tasks.

The WiNGPT-3.0 Benchmark consists of the following diverse set of task categories.

• MedCalc [18] includes 2,811 questions, serving as a dataset for evaluating model
performance on clinical calculation tasks, covering 55 different types of medical calculations.
• MedReMCQ contains 700 medical-reasoning multiple-choice questions involving diagnostic tests, medical record quality control, and preliminary diagnosis. Examples can be found in Appendix A.
• CMMLU [19] provides 11,582 questions and is a benchmark for evaluating Chinese language understanding across multiple tasks.
• MATH-500 [20] consists of 500 advanced mathematical problems, primarily used to test and train mathematical reasoning abilities.
• MedQA-USMLE [21] offers 1,273 questions, forming a large-scale open-domain question-answering dataset drawn from medical exams.
• MedMCQA [22] features 4,183 questions and is a large-scale multi-subject multiple-choice dataset for medical-domain question answering.
• PubMedQA [23] comprises 1,000 questions, designed as a dataset for biomedical research question answering.

5.2 Evaluation Metrics

We employ a set of specific evaluation metrics, each chosen to best reflect the original benchmark's intent.

• MedCalc [18] uses accuracy as the metric, scored on whether the model's computed answer falls within the upper and lower bounds of the standard answer.
• MedReMCQ is a dataset we developed in-house, tailored to medical-reasoning scenarios. It is presented in a multiple-choice format and assessed using the Micro-F1 score.
• CMMLU [19], MATH-500 [20], MedQA-USMLE [21], MedMCQA [22], and PubMedQA [23] use accuracy as the evaluation metric.

5.3 Results

We evaluated these strategies on two general reasoning/math benchmarks (CMMLU and MATH-500) and five medical benchmarks (MedCalc, MedReMCQ, MedQA-USMLE, MedMCQA, and PubMedQA), comparing them with several SOTA 32B RL models. The evaluation results are summarized in Table 1.
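The MedCalc scoring rule above (an answer counts as correct when it falls within the standard answer's bounds) can be sketched as follows; the tolerance bands here are invented for illustration and may differ in detail from MedCalc-Bench's actual implementation:

```python
def medcalc_correct(predicted, lower, upper):
    """Score a computed answer as correct when it falls within the
    [lower, upper] bounds of the standard answer."""
    return lower <= predicted <= upper

def accuracy(predictions, bounds):
    """Fraction of predictions landing inside their tolerance bands."""
    hits = sum(medcalc_correct(p, lo, hi)
               for p, (lo, hi) in zip(predictions, bounds))
    return hits / len(predictions)

# Toy example with three clinical-calculator style estimates; the bands
# are hypothetical, not real MedCalc-Bench tolerances.
preds = [97.2, 88.0, 120.5]
bands = [(95.0, 100.0), (90.0, 95.0), (118.0, 123.0)]
print(round(accuracy(preds, bands), 3))  # 2 of 3 fall inside their bounds
```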
Model Name                    CMMLU  MATH-500  MedCalc  MedReMCQ  MedQA-USMLE  MedMCQA  PubMedQA
DeepSeek-R1-Distill-Qwen-32B  83.0   96.3      57.7     65.3      81.9         65.5     76.1
OpenThinker2-32B              86.2   96.6      58.5     68.8      85.1         68.7     78.0
Light-R1-32B-DS               86.2   97.2      48.7     70.3      83.7         67.9     76.2
QwQ-32B                       87.4   97.5      51.4     73.5      85.8         71.2     77.9
WiNGPT-3.0-S3                 85.1   94.0      66.6     74.7      85.7         69.9     78.2
WiNGPT-3.0-S3-Merged          86.1   95.6      66.2     74.0      87.1         71.6     78.0

Table 1: Performance comparison of different models across various benchmarks.

Both versions of WiNGPT-3.0 demonstrate competitive performance in general reasoning and mathematics, with the S3-Merged model achieving a notable boost on MATH-500 and competitive CMMLU scores, suggesting that the merged model retains strong foundational capabilities. In the medical domain, both strategies yield significant improvements over baseline models. WiNGPT-3.0-S3-Merged shows particularly strong performance on MedQA-USMLE and MedMCQA, indicating that merging with QwQ-32B followed by medical fine-tuning can effectively enhance domain-specific knowledge transfer. To further enhance the model's performance in clinical-reasoning scenarios, RL training was applied in S4 using a 1.5K clinical-thinking dataset. This dataset simulates the real-world medical consultation process, encompassing key steps such as model-driven reasoning, initial consultation summary, information requirements, and diagnosis. Through this pipeline, the model gains improved reasoning capabilities in simulated clinical settings. Experimental results demonstrate that this application of RL in Step 4 leads to a notable improvement on clinical-reasoning tasks, with the WiNGPT-3.0-S4 model achieving a score of 62.5, compared to 58.1 for WiNGPT-3.0-S3.

6 Discussion

RL markedly improved alignment with clinical-reasoning principles. WiNGPT-3.0-S3 outperformed QwQ-32B on MedCalc (66.6 vs. 51.4) and MedReMCQ (74.7 vs. 73.5), indicating stronger medical reasoning ability. Merging
with a strong base model (QwQ-32B) prior to RL fine-tuning (WiNGPT-3.0-S3-Merged) yielded substantial benefits; its superior performance across diverse benchmarks, medical (MedQA-USMLE: 87.1; MedMCQA: 71.6), general (CMMLU: 86.1), and mathematical reasoning (MATH-500: 95.6), indicates that this integration effectively preserves general capabilities while facilitating domain-specific adaptation. Nonetheless, current benchmarks still under-represent real-world complexity: they emphasise pattern matching, overlook temporal factors, and offer limited probes for step-wise reasoning. Beyond numerical scores, WiNGPT-3.0 may influence patient-clinician interactions in several ways:

• Patient perspective: When used as an educational companion rather than an independent diagnostician, the model can give lay users a plain-language preview of plausible conditions, supporting evidence, and follow-up questions. This early framing may reduce anxiety, encourage timely medical visits, and help patients describe symptoms more precisely, making subsequent consultations more focused.
• Clinician perspective: Because patients arrive better informed, physicians must be ready to validate, refine, or correct AI-generated hypotheses. Competent engagement with these "AI-augmented" patients is essential to maintain trust and professional credibility.

Limitations and safety: Bias in training data and hallucinations remain open challenges. Evidence-retrieval pipelines reduce, but cannot eliminate, errors because search results themselves may be outdated or incomplete. We therefore require licensed clinicians to review all outputs and discourage unsupervised use by non-professionals. Applying LLMs incorrectly in healthcare systems could cause serious harm [24]; these models must therefore be used carefully. We believe that openly sharing this report will help disseminate knowledge and advance AI research for healthcare.
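The model merging discussed in this section is commonly implemented (e.g., by MergeKit [4]) as element-wise combination of parameter tensors from checkpoints that share an architecture. A minimal linear-interpolation sketch, under the assumption of simple weight averaging (the report does not specify which merge method was used):

```python
def linear_merge(state_dict_a, state_dict_b, alpha=0.5):
    """Element-wise linear interpolation of two checkpoints with a shared
    architecture: merged = alpha * A + (1 - alpha) * B. This is only one
    of several merge methods; SLERP and task-arithmetic variants exist."""
    assert state_dict_a.keys() == state_dict_b.keys(), "architectures must match"
    return {
        name: [alpha * a + (1.0 - alpha) * b
               for a, b in zip(state_dict_a[name], state_dict_b[name])]
        for name in state_dict_a
    }

# Toy 'checkpoints' with one flattened weight tensor each (hypothetical
# stand-ins for a medical and a general checkpoint).
ckpt_medical = {"lm_head.weight": [0.2, 0.4, 0.6]}
ckpt_general = {"lm_head.weight": [1.0, 1.0, 1.0]}
print(linear_merge(ckpt_medical, ckpt_general, alpha=0.25))
```

With `alpha=0.25` the merged weights sit three-quarters of the way toward the general checkpoint, which is one way to bias a merge toward preserving general capabilities before domain fine-tuning.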
7 Conclusion

This technical report details WiNGPT-3.0, outlining the methodologies for general- and medical-domain data curation, alongside strategies for RL data generation and the associated training paradigms. The report further elaborates on the fine-tuning and RL techniques employed, leveraging the specifically constructed datasets detailed herein. Benchmark evaluations demonstrate that WiNGPT-3.0 not only surpasses its predecessor but also exhibits competitive, and on several key tasks superior, performance compared to general-purpose models of equivalent parameter scale. A significant contribution of this work is the enhanced focus on medical-domain data curation and a preliminary investigation into the novel application of reinforcement learning to imitate clinical thinking. This provides a valuable foundation and resource for researchers aiming to advance AI-driven applications in healthcare. Future research will concentrate on developing advanced training methodologies for complex medical scenarios; improving model reliability, safety, and interpretability; exploring multi-modal data integration; and rigorously evaluating the potential to deliver positive societal impact, particularly within the healthcare sector.

8 Authors

Boqin Zhuang, Chenxiao Song, Huitong Lu, Jiacheng Qiao, Mingqian Liu, Mingxing Yu, Ping Hong, Rui Li, Xiaoxia Song, Xiangjun Xu, Xu Chen, Yaoyao Ma, Yujie Gao

References

[1] An Yang, Baosong Yang, Beichen Zhang, and Binyuan Hui et al. Qwen2.5 technical report, 2025.
[2] Daya Guo, Dejian Yang, Haowei Zhang, and Junxiao Song et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning, 2025.
[3] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024.
[4] Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi,
Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. Arcee's MergeKit: A toolkit for merging large language models. In Franck Dernoncourt, Daniel Preoţiuc-Pietro, and Anastasia Shimorina, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 477–485, Miami, Florida, US, November 2024. Association for Computational Linguistics.
[5] Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Yang Liu, and Yahui Zhou. Skywork open reasoner series. https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680, 2025. Notion Blog.
[6] Qwen Team. QwQ-32B: Embracing the power of reinforcement learning, March 2025.
[7] Junying Chen, Zhenyang Cai, Ke Ji, Xidong Wang, Wanlong Liu, Rongsheng Wang, Jianye Hou, and Benyou Wang. HuatuoGPT-o1, towards medical complex reasoning with LLMs, 2024.
[8] Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning, 2025.
[9] Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, Monica Lam, Yiping Lu, Kyunghyun Cho, Jiajun Wu, Li Fei-Fei, Lijuan Wang, Yejin Choi, and Manling Li. RAGEN: Understanding self-evolution in LLM agents via multi-turn reinforcement learning, 2025.
[10] Pengcheng Qiu, Chaoyi Wu, Shuyu Liu, Weike Zhao, Zhuoxia Chen, Hongfei Gu, Chuanjin Peng, Ya Zhang, Yanfeng Wang, and Weidi Xie. Quantifying the reasoning abilities of LLMs on real-world clinical cases, 2025.
[11] Nathan Lambert, Valentina Pyatkin, Jacob Morrison, Lester James Validad Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A.
Smith, and Hannaneh Hajishirzi. RewardBench: Evaluating reward models for language modeling. In Luis Chiruzzo, Alan Ritter, and Lu Wang, editors, Findings of the Association for Computational Linguistics: NAACL 2025, pages 1755–1797, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics.
[12] David R. Hunter. MM algorithms for generalized Bradley-Terry models. Annals of Statistics, 32:384–406, 2004.
[13] Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Daniel Egert, Ellie Evans, Hoo-Chang Shin, Felipe Soares, Yi Dong, and Oleksii Kuchaiev. Dedicated feedback and edit models empower inference-time scaling for open-ended general-domain tasks, 2025.
[14] Minghao Yang, Xiaoyu Tan, and Chao Qu. INF-ORM-Llama3.1-70B, 2024.
[15] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. ZeRO: Memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '20. IEEE Press, 2020.
[16] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, pages 3505–3506, New York, NY, USA, 2020. Association for Computing Machinery.
[17] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. HybridFlow: A flexible and efficient RLHF framework. In Proceedings of the Twentieth European Conference on Computer Systems, EuroSys '25, pages 1279–1297. ACM, March
2025.
[18] Nikhil Khandekar, Qiao Jin, Guangzhi Xiong, Soren Dunn, Serina S Applebaum, Zain Anwar, Maame Sarfo-Gyamfi, Conrad W Safranek, Abid A Anwar, Andrew Zhang, Aidan Gilson, Maxwell B Singer, Amisha Dave, Andrew Taylor, Aidong Zhang, Qingyu Chen, and Zhiyong Lu. MedCalc-Bench: Evaluating large language models for medical calculations, 2024.
[19] Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese. In Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[20] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step, 2023.
[21] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14), 2021.
[22] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Gerardo Flores, George H Chen, Tom Pollard, Joyce C Ho, and Tristan Naumann, editors, Proceedings of the Conference on Health, Inference, and Learning, volume 174 of Proceedings of Machine Learning Research, pages 248–260. PMLR, 07–08 Apr 2022.
[23] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, and Xinghua Lu. PubMedQA: A dataset for biomedical research question answering, 2019.
[24] Harish Tayyar Madabushi and Matthew D. Jones. Large language models in healthcare information research: making progress in an emerging field. BMJ Quality & Safety, 34(2):73–76, 2025.
Appendix A MedReMCQ Examples

As shown in Figure 3, the patient is a 52-year-old female presenting with poor blood-glucose control for over 10 months, accompanied by dizziness, nasal congestion, and runny nose for one day. She has a history of diabetes for over two years and has been using insulin combined with acarbose for glycemic control. Based on her condition, the physician needs to conduct targeted auxiliary examinations.

Figure 3: Auxiliary Examinations Selection

As shown in Figure 4, the patient is a 47-year-old male who complains of yellowing of the eyes and urine for more than 10 days, with a relatively acute onset. His body temperature is normal on physical examination, and there are no obvious signs of abdominal pain or fever. He reports generally good health and no family history of hereditary diseases. The physician needs to make a preliminary diagnosis.

Figure 4: Preliminary Diagnosis

B The Prompt for Verifier-Based Rewards

You are a professional evaluation expert who must assess the quality of the "predicted answer" based on the following four core elements:
- **Dialogue History** (contextual information)
- **Current Question** (the specific request made by the user)
- **Excellent Answer** (a high-quality reference answer that has been reviewed)
- **Predicted Answer** (the answer to be evaluated)

### Scoring Criteria:
- **1 point**: Indicates high-quality answers that are equivalent or close to the "excellent answer," meeting user needs;
- **0 points**: Indicates low-quality answers that contain hallucinations,
omissions, errors, or fail to meet user needs;

> Note: The "excellent answer" has undergone rigorous review and is considered to be of high standard quality, serving as a benchmark for judgment.

Please output your evaluation results in the following structure:

---

### Dialogue History (in chronological order, from earliest to latest)
```
{Insert dialogue history}
```

### Current Question
```
user: {Insert original question}
```

### Excellent Answer (Reference Answer)
```
assistant: {Insert excellent answer}
```

### Predicted Answer (Answer to Evaluate)
```
assistant: {Insert predicted answer}
```

---

### Evaluation Analysis
[Perform item-by-item comparative analysis here]

---

### Predicted Answer Evaluation Score
\\boxed{Prediction answer evaluation score}

C Evidence-Based Search Integration: A Case Study

Due to the rapid expansion of medical knowledge, traditional search tools are often inefficient at locating critical evidence within extensive medical literature and clinical data, leading to information overload that can compromise the efficiency and accuracy of clinical decision-making. Evidence-Based Search Integration is presented as a promising approach to mitigate these challenges. Its core value lies in systematically integrating authoritative medical evidence with real-time patient data to provide clinicians with precise, interpretable, and research-backed decision support. This system is designed to help optimize treatment strategies, contribute to reducing misdiagnosis rates, and promote more rational resource allocation. The technical implementation strategy detailed herein comprises three primary stages:

• Development of an AI-powered intelligent search engine: This involves creating a search engine that integrates global authoritative medical databases (e.g., PubMed, Cochrane Library, UpToDate) with localized medical data resources (e.g., electronic medical records, regional health archives, structured knowledge graphs).
Seamless integration is facilitated by standardized interfaces and data protocols. Advanced AI algorithms, such as embedding and reranker models, are employed for semantic understanding and relevance ranking of search results, prioritizing high-quality, peer-reviewed medical guidelines and research literature.
• Task planning and complex query parsing using LLMs: To address complex or ambiguous clinical queries, the system incorporates LLMs for task decomposition and logical reasoning. These models identify key terms and contextual relationships within queries, breaking them down into executable subtasks. Natural language processing (NLP) techniques and medical knowledge graphs are used to translate user input into precise search commands, facilitating rapid matching with relevant literature, guidelines, and clinical cases. The system also supports multilingual interaction.
• Knowledge fusion and traceable answer generation: In this stage, the system combines contextual knowledge bases with LLM capabilities to produce structured, explainable medical responses. Each response includes citations or references to source materials to ensure traceability and support verifiability. Personalized and dynamically updated recommendations are generated based on the latest clinical evidence and individual patient data (e.g., laboratory tests, imaging reports). Machine-learning algorithms are employed to continuously refine and optimize the information-ranking logic. A comprehensive list of references, citing applicable medical guidelines, databases, or journal sources, is provided with each response to enhance credibility and transparency.

In summary, the case study describes a three-step, integrated process (data integration, intelligent parsing, and traceable answer generation) designed to efficiently transform raw information into actionable clinical insights. This system aims to enhance
arXiv:2505.17390v1 [cs.CL] 23 May 2025

Measuring diversity of synthetic prompts and data generated with fine-grained persona prompting

Gauri Kambhatla1, Chantal Shaib2, Venkata Govindarajan3
1The University of Texas at Austin, 2Northeastern University, 3Ithaca College
gkambhat@utexas.edu, shaib.c@northeastern.edu, vgovindarajan@ithaca.edu

Abstract

Fine-grained personas have recently been used for generating 'diverse' synthetic data for pre-training and supervised fine-tuning of Large Language Models (LLMs). In this work, we measure the diversity of persona-driven synthetically generated prompts and responses with a suite of lexical diversity and redundancy metrics. Firstly, we find that synthetic prompts/instructions are significantly less diverse than human-written ones. Next, we sample responses from LLMs of different sizes with fine-grained and coarse persona descriptions to investigate how much fine-grained detail in persona descriptions contributes to generated-text diversity. We find that while persona prompting does improve lexical diversity (especially with larger models), fine-grained detail in personas does not increase diversity noticeably.

1 Introduction

Synthetic data generated from LLMs or other algorithms is increasingly used in pre-training and post-training recipes for LLMs (Grattafiori et al., 2024). However, care must be taken to incorporate high-quality and diverse synthetic data at scale to avoid 'model collapse' (Feng et al., 2025). Recently, persona-driven synthetic pipelines have been proposed to generate diverse synthetic prompts and responses from LLMs (Ge et al., 2024; Lambert et al., 2025). However, does fine-grained persona prompting actually lead to increased 'diversity'? In this work we define and measure (Zhao et al., 2024) the lexical diversity of LLM responses with and without persona prompting towards answering this question. Persona prompting (Hu and Collier, 2024), i.e.
instructing an LLM to respond to interactions from the perspective of an individual, demographic, or social group (described by a short textual description), has risen as a community standard for steering LLM responses (Pataranutaporn et al., 2021), enabling personalized interactions (Park et al., 2023), and simulating human/group behavior towards answering scientific questions in psychology and social science (Argyle et al., 2023).

Figure 1: We measure the lexical diversity of LLM responses to prompts with no persona, a fine-grained persona, and its coarse analog. Fine-grained personas do not improve lexical diversity noticeably.

While there has been some evidence showing that LLM performance on tasks in some domains improves with persona prompting (Salewski et al., 2023), the results are inconclusive (Zheng et al., 2024; Beck et al., 2024), and persona-driven prompting has been shown to misportray, flatten, and essentialize identities, and
is susceptible to caricature and stereotyping (Liu et al., 2024; Gupta et al., 2024; Wang et al., 2025; Cheng et al., 2023b,a). In this work, we examine the diversity claims proposed by persona-driven synthetic data pipelines. We use a suite of diversity metrics (Shaib et al., 2025) to measure the lexical diversity and redundancy in synthetic prompts and responses. We aim to answer two questions: 1. Does prompting with personas lead to increased diversity in LLM responses for the same instruction? 2. Do fine-grained persona descriptions lead to more diverse responses than less detailed (coarse) personas? Figure 1 shows an example and overview of our experimental setup towards answering our research questions. In §3, we show that synthesized prompts from PersonaHub are noticeably less diverse across all our metrics than comparable human-written/annotated prompts. We report our main findings in §5: persona prompting does lead to higher diversity (over human responses) but only with larger model sizes, and coarse persona descriptions lead to text that is just as diverse as fine-grained descriptions. We release our code and model generations online at https://anonymous.4open.science/r/persona-diversity-5F03/.

2 Related Work

Persona-driven data synthesis. Personas have been used as a means to generate synthetic training datasets (Ge et al., 2024; Lambert et al., 2025). The personas themselves are generated with LLMs at scale, with synthetic data generated by prompting LLMs to write texts across genres and domains from the perspective of a persona. Recently, Sethi et al. (2025) and Venkit et al. (2025) investigated the lexical diversity of persona descriptions; our paper is the first to investigate whether persona-driven prompting leads to diversity in model responses.

Text diversity. Diversity is an inherently subjective and value-laden metric to measure (Zhao et al., 2024).
However, we can identify some qualities of diversity that are desirable in synthetic data from LLMs and measurable with automated metrics: less repetition, fewer surface-level patterns, and less redundancy. Shaib et al. (2025) validate a wide range of automated metrics to measure lexical diversity of text, which we use to measure the diversity of LLM responses in our paper. We expect fine-grained persona prompting to lead to improved scores on our chosen metrics over no-persona and coarse-persona prompting.

Readability. In addition to measuring lexical diversity and redundancy with the aforementioned metrics, we also evaluated two readability metrics on LLM responses (McClure, 1987; Gunning, 1952). We hypothesize that persona prompting should lead to larger variations in reading-level predictions on synthesized texts, since different persona descriptions should mirror the diverse reading levels of the individuals/groups they represent.

Dataset    CR↓   CR-POS↓  NDS↑  SR↓   Hom. BS↓
Dolly      2.58  5.84     2.33  2.95  0.55
no_robots  2.47  5.13     2.44  4.10  0.54
PH-IF      2.84  6.21     2.00  5.73  0.60
Tülu3-IF   3.20  6.30     1.51  6.96  0.59

Table 1: Diversity of prompts from human-written (top) and synthetic persona-derived instruction-following datasets. Arrows indicate the direction of higher diversity.

3 Diversity of persona-synthesized prompts

Ge et al. (2024) and Lambert et al. (2025) sample synthetic personas from PersonaHub, and prompt an LLM to
synthesize plausible prompts/instructions/questions that these personas may ask. We investigate these synthesized prompts in the instruction-following (IF) domain with our suite of diversity metrics, comparing them to comparable human-written/annotated datasets in Table 1. We chose Dolly (Conover et al., 2023) and no_robots (Rajani et al., 2023) as our human-written/annotated IF datasets. Table 1 demonstrates that the synthetic prompt datasets have uniformly worse scores across all diversity metrics, strongly indicating that persona-driven synthesized prompts are noticeably less diverse than human-written counterparts.

4 Experiments

If persona-driven data synthesis with fine-grained synthetic personas leads to more diverse synthetic data, then we should expect: 1. improved diversity metrics when models are instructed to answer prompts with various personas; 2. better diversity metrics for fine-grained personas than for coarse, less detailed personas.

Data. To test these hypotheses, we sample 100 prompts from Dolly's creative-writing subset, as well as 100 fine-grained personas from PersonaHub. We derive 'coarse' personas from PersonaHub's fine-grained personas by simply extracting the first clause of the persona using Stanza (Qi et al., 2020), exploiting the consistent structure of fine-grained persona descriptions. For example, the fine-grained persona "a PR manager with insights into public relations strategy and press releases for influencers" corresponds to the coarse persona "a PR manager".

Conditions. We evaluate our diversity metrics on model responses under the following conditions:

1. No-persona (NP): Baseline condition where the model is simply prompted with the instruction/prompt from our sample.
2. Fine-grained persona (FP): The model is prompted to answer the instruction/prompt from the perspective of the provided fine-grained persona description.
3. Coarse persona (CP): As above, but the persona description is coarse.
4. Cutoff (+cu): Post-training leads to increased response length from LLMs (Singhal et al., 2024), and there are known correlations between automated diversity metrics and text length (Covington and McFall, 2010; McCarthy and Jarvis, 2010). To compare against the human-written responses from our sample, we test an additional condition where the prompt instructs the model to answer in x words or less, where x is the number of words in the human response rounded up to the nearest ten.

We sample the model's response for each prompt with every persona in our sample, leading to 100,000 responses in each (FP, CP) condition, and 100 responses for the NP condition.

Models. We evaluate and report on two models of different sizes: Llama-3.3-70B-Instruct (Grattafiori et al., 2024) and Deepseek-V3-0324 (DeepSeek-AI, 2024) (685B parameters) through Together's API service¹. Both models are open-weight and score highly on benchmarks. We also report results from smaller models in Appendix C.

Lexical diversity. From Shaib et al. (2025), we focus on 5 metrics for our analyses (chosen because they have low mutual correlation): 1. CR (compression ratio); 2. CR-POS (compression ratio for part-of-speech). Both CR and CR-POS are fast to compute and are designed to identify redundancy. 3. NDS (n-gram diversity score) extends the idea of token-type ratio to longer n-grams, capturing repeated token sequences
https://arxiv.org/abs/2505.17390v1
and individual tokens. 4. SR (self-repetition) measures the tendency of LMs to repeat long n-grams across different outputs. 5. Hom.-BS (homogenization score with BERTScore) uses LM embeddings to (ideally) capture "semantic" similarity.

[Figure 2: Density distribution of response lengths from Dolly (human responses) and Deepseek with (FP, CP) and without (NP) persona prompting in our sample.]

To evaluate the impact of persona-prompting and compare it against the no-persona condition, we present the mean and standard deviation (SD) for the persona conditions over 100 random shuffles of persona-prompt pairs. Each metric is calculated over responses to the same 100 prompts, but with different personas assigned to each prompt (for each shuffle); this simulates each prompt being answered by a different persona, hypothetically leading to the best diversity scores. CR, CR-POS, SR, and Hom.-BS should decrease with persona prompting (NDS should increase) if persona prompting leads to increased lexical diversity.

Readability diversity We report Flesch-Kincaid (FK) and Gunning Fog (GF) metrics for analyzing diversity in readability. Both are numeric scores that roughly correspond to grade levels; scores above 16 indicate graduate reading level. We report the mean and SD over each persona. For evaluating diversity, we expect fine-grained personas to show a higher SD of readability scores.

Settings & Hyperparameters All models are prompted with a temperature of 1 and a maximum new-token limit of 1024. Shaib et al. (2024) demonstrated that temperature and other sampling strategies don't increase diversity for lexical/POS templates; further, Ge et al.
(2024) consider sampling orthogonal to boosting diversity in data synthesis, and do not vary it as part of their synthetic pipeline, motivating our decision not to test sampling strategies as an experimental condition. For our analysis of content diversity using embeddings, we embed responses using the Linq-AI-Research/Linq-Embed-Mistral model from Huggingface Hub. (For the Hom.-BS score, we use 3 random pairings of prompts to personas due to its expensive runtime.)

Table 2: Diversity and readability metrics of responses from Llama-70B and Deepseek-V3 in all conditions. The top row is the diversity of human-written responses in our Dolly sample. Standard deviation is in brackets when appropriate. Arrows indicate the direction of higher diversity/reading levels.

Cond.    CR↓        CR-POS↓    NDS↑       SR↓        Hom.-BS↓   FK↑          GF↑
Dolly    2.51       4.91       3.03       0.55       0.53       10.60        12.31
Llama-3.3-70B
NP       2.77       5.73       2.87       1.89       0.57       12.18        13.27
NP+cu    2.57       5.16       3.08       0.52       0.55       11.88        13.83
FP       2.71(.01)  5.38(.03)  2.84(.01)  2.50(.10)  0.58(.00)  11.21(2.52)  12.75(2.36)
FP+cu    2.51(.02)  5.04(.03)  3.08(.02)  0.68(.09)  0.55(.00)  10.00(2.33)  12.02(2.46)
CP       2.71(.01)  5.41(.03)  2.85(.02)  2.39(.13)  0.58(.00)  11.10(2.25)  12.60(2.09)
CP+cu    2.51(.02)  5.06(.04)  3.09(.02)  0.61(.08)  0.55(.00)  9.85(2.19)   11.86(2.32)
Deepseek-V3
NP       2.36       5.50       3.15       0.86       0.58       10.09        11.09
NP+cu    2.29       4.95       3.32       0.11       0.54       9.47         10.88
FP       2.27(.01)  4.90(.03)  3.26(.01)  0.59(.11)  0.58(.00)  9.91(2.22)   11.39(2.25)
FP+cu    2.20(.02)  4.71(.03)  3.38(.01)  0.09(.04)  0.55(.00)  9.19(2.18)   10.77(2.38)
CP       2.30(.01)  5.01(.03)  3.23(.02)  0.54(.10)  0.58(.00)  9.82(1.80)   11.24(1.83)
CP+cu    2.24(.02)  4.78(.04)  3.37(.02)  0.09(.04)  0.55(.00)  9.13(1.82)   10.71(2.02)

5 Results & Analysis

Table 2 shows that diversity metrics improve substantially in
the cutoff condition. An explicit length cutoff in the prompt improves model diversity by reducing model self-repetition in lexical and POS patterns. Seen together with Table 5 in Appendix C, it's clear that lexical diversity improves with model size, with only Deepseek-V3's metrics surpassing (or matching) the human response scores on our sample of Dolly.

Diversity & response length Figure 2 further shows that persona-prompting leads to less diversity of response length. Response lengths from Deepseek-V3 exhibit a greater spread (albeit longer on average than human responses for the same prompts) when prompted without a persona; coarse personas also lead to a larger spread of response lengths than fine-grained personas.

Coarse vs. fine-grained Responses with fine-grained persona prompting are not meaningfully more diverse than those with coarse personas. Fine-grained persona prompting seems to lead to increased variance in readability metrics; however, the increased variances are not statistically significant (by Levene's test (Levene, 1960), p > 0.05 for all differences). Overall, we find that persona-prompting does improve lexical diversity, but coarse personas work just as well as fine-grained personas.

Content diversity We measure the cosine similarity between embeddings of Deepseek model responses prompted with coarse and fine-grained personas (with cutoff) to assess content diversity. Mean cosine similarity across all response pairs is 0.79 (σ = 0.12), indicating high overlap in content between fine-grained and coarse persona responses to the same prompt. Further, we find a positive correlation (Spearman's ρ = 0.36, p < 1e-5) between prompt length and cosine similarity. Detailed prompts/instructions override any persona description and lead to similar responses from LLMs. We present a sample of responses from Deepseek-V3 and Llama-70B under all conditions in Appendix B.
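The content-diversity check above reduces to cosine similarity between response embeddings. A minimal sketch follows; the paper uses Linq-Embed-Mistral embeddings, so the three-dimensional vectors here are placeholders for illustration only:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for real response embeddings.
fine_emb = [0.8, 0.1, 0.6]    # fine-grained-persona response (illustrative)
coarse_emb = [0.7, 0.2, 0.6]  # coarse-persona response (illustrative)
print(round(cosine_similarity(fine_emb, coarse_emb), 2))  # -> 0.99
```

A mean pairwise similarity near 1.0, as in the 0.79 reported above, signals that the two prompting conditions produce largely overlapping content.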
6 Conclusion

Persona-prompting with fine-grained synthetic personas has been claimed to lead to 'diverse' synthetic data without adequately defining or measuring diversity. In this work, we measure lexical diversity and redundancy of synthetic prompts and responses generated with personas using a suite of diversity metrics. We find that persona-derived prompts are uniformly less diverse than human-written counterparts. When evaluated on creative writing prompts, persona-driven synthesis does lead to greater diversity scores, but only with larger model sizes. Further, the fine-grained detail of personas does not noticeably increase lexical diversity, highlighting another instance of the fine-grained nuance trap (Healy, 2017). In future work, we hope to continue exploring the question of content diversity of synthetic text derived from persona-driven prompting.

Limitations

Content/meaning diversity In this work, we focus on measuring the lexical diversity and redundancy in text. Content diversity is difficult to measure automatically, and we do not assess content beyond embedding similarity. Rigorously measuring diversity of content (meaning) under persona-driven data synthesis is an open question that we leave to future work, for example using the Questions-under-Discussion framework (Namuduri et al., 2025).

Suitability of personas PersonaHub's claim of increased diversity comes with the caveat that certain personas are more appropriate for a subset of prompts. In this work, we approach this systematically by focusing on prompts from
the creative-writing subset of Dolly (which has questions/instructions that are more open-ended and thus applicable to most personas), and present results over 100 random re-assignments of each prompt to a persona, reporting the mean and SD over all pairings. We leave assessments of diversity of filtered personas to future work.

Language data We only focus on English data in this work. Future work might look into measuring lexical diversity of persona-driven synthetic prompts in other languages.

Acknowledgments

This research is partially supported by start-up funds and computational resources provided by Ithaca College. We would also like to thank the Texas Advanced Computing Center (TACC) for providing computational resources used to run some of the experiments in this paper.

References

Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R. Gubler, Christopher Rytting, and David Wingate. 2023. Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3):337–351.

Tilman Beck, Hendrik Schuff, Anne Lauscher, and Iryna Gurevych. 2024. Sensitivity, performance, robustness: Deconstructing the effect of sociodemographic prompting. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2589–2615, St. Julian's, Malta. Association for Computational Linguistics.

Myra Cheng, Esin Durmus, and Dan Jurafsky. 2023a. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1504–1532, Toronto, Canada. Association for Computational Linguistics.

Myra Cheng, Tiziano Piccardi, and Diyi Yang. 2023b. CoMPosT: Characterizing and evaluating caricature in LLM simulations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10853–10875, Singapore.
Association for Computational Linguistics.

Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. 2023. Free Dolly: Introducing the world's first truly open instruction-tuned LLM.

Michael A. Covington and Joe D. McFall. 2010. Cutting the Gordian knot: The moving-average type–token ratio (MATTR). Journal of Quantitative Linguistics, 17(2):94–100.

DeepSeek-AI. 2024. DeepSeek-V3 technical report. Preprint, arXiv:2412.19437.

Yunzhen Feng, Elvis Dohmatob, Pu Yang, Francois Charton, and Julia Kempe. 2025. Beyond model collapse: Scaling up with synthesized data requires verification. In The Thirteenth International Conference on Learning Representations.

Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. 2024. Scaling synthetic data creation with 1,000,000,000 personas. Preprint, arXiv:2406.20094.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Robert Gunning. 1952. The Technique of Clear Writing. McGraw-Hill, New York.

Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, and Tushar Khot. 2024. Bias runs deep: Implicit reasoning biases in persona-assigned LLMs. In The Twelfth International Conference on
Learning Representations.

Kieran Healy. 2017. Fuck nuance. Sociological Theory, 35(2):118–127.

Tiancheng Hu and Nigel Collier. 2024. Quantifying the persona effect in LLM simulations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10289–10307, Stroudsburg, PA, USA. Association for Computational Linguistics.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, and 4 others. 2025. Tulu 3: Pushing frontiers in open language model post-training. Preprint, arXiv:2411.15124.

H. Levene. 1960. Robust tests for equality of variances. Contributions to Probability and Statistics, pages 278–292.

Andy Liu, Mona Diab, and Daniel Fried. 2024. Evaluating large language model biases in persona-steered generation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 9832–9850, Bangkok, Thailand. Association for Computational Linguistics.

Philip M. McCarthy and Scott Jarvis. 2010. MTLD, vocd-D, and HD-D: A validation study of sophisticated approaches to lexical diversity assessment. Behavior Research Methods, 42:381–392.

Glenda M. McClure. 1987. Readability formulas: Useful or useless? IEEE Transactions on Professional Communication, PC-30(1):12–15.

Ramya Namuduri, Yating Wu, Anshun Asher Zheng, Manya Wadhwa, Greg Durrett, and Junyi Jessy Li. 2025. QUDsim: Quantifying discourse similarities in LLM-generated text. Preprint, arXiv:2504.09373.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior.
In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, UIST '23, New York, NY, USA. Association for Computing Machinery.

Pat Pataranutaporn, Valdemar Danry, Joanne Leong, Parinya Punpongsanon, Dan Novy, Pattie Maes, and Misha Sra. 2021. AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12):1013–1022.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Nazneen Rajani, Lewis Tunstall, Edward Beeching, Nathan Lambert, Alexander M. Rush, and Thomas Wolf. 2023. No Robots. https://huggingface.co/datasets/HuggingFaceH4/no_robots.

Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, and Zeynep Akata. 2023. In-context impersonation reveals large language models' strengths and biases. In Advances in Neural Information Processing Systems, volume 36, pages 72044–72057. Curran Associates, Inc.

Sankalp Sethi, Joni Salminen, Danial Amin, and Bernard J. Jansen. 2025. "When AI Writes Personas": Analyzing lexical diversity in LLM-generated persona descriptions. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA '25, New York, NY, USA. Association for Computing Machinery.

Chantal Shaib, Joe Barrow, Jiuding Sun, Alexa F. Siu, Byron C. Wallace, and Ani Nenkova. 2025. Standardizing the measurement of text diversity: A tool and a comparative analysis of scores. Preprint, arXiv:2403.00553.

Chantal
Shaib, Yanai Elazar, Junyi Jessy Li, and Byron C. Wallace. 2024. Detection and measurement of syntactic templates in generated text. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6416–6431, Miami, Florida, USA. Association for Computational Linguistics.

Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2024. A long way to go: Investigating length correlations in RLHF. In First Conference on Language Modeling.

Pranav Narayanan Venkit, Jiayi Li, Yingfan Zhou, Sarah Rajtmajer, and Shomir Wilson. 2025. A tale of two identities: An ethical audit of human and AI-crafted personas. arXiv preprint arXiv:2505.07850.

Angelina Wang, Jamie Morgenstern, and John P. Dickerson. 2025. Large language models that replace human participants can harmfully misportray and flatten identity groups. Preprint, arXiv:2402.01908.

Dora Zhao, Jerone Andrews, Orestis Papakyriakopoulos, and Alice Xiang. 2024. Position: Measure dataset diversity, don't just claim it. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 60644–60673. PMLR.

Mingqian Zheng, Jiaxin Pei, Lajanugen Logeswaran, Moontae Lee, and David Jurgens. 2024. When "a helpful assistant" is not really helpful: Personas in system prompts do not improve performances of large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15126–15154, Miami, Florida, USA. Association for Computational Linguistics.

A Prompts

A.1 No persona prompt

Respond to the following question/instruction in {num_words_round} words or less: {question}

A.2 Persona prompt

Assume you are the following persona: {persona}.
Now respond to the following question/instruction appropriately from the perspective of the above persona in {num_words_round} words or less: {question}

B Qualitative analysis

We present prompts and responses from Deepseek-V3 and Llama-70B in Tables 3 and 4 under 3 prompting conditions: no-persona, fine-grained persona, and coarse persona, all with a word cutoff specified in the prompt.

Table 3 (Deepseek, all with cutoff), entry 1.
Prompt: What are the best day trips near San Francisco?
Fine-grained persona: a PR manager with insights into public relations strategy and press releases for influencers
Coarse persona: a PR manager
No-persona response: Top day trips near San Francisco: Napa Valley (wine tasting), Muir Woods (redwood hikes), Point Reyes (scenic coastline), Santa Cruz (beaches & boardwalk), Half Moon Bay (charming coastal town), Berkeley/Oakland (food & culture), Sonoma (wineries & historic plaza). All within ~2 hours!
Fine-grained persona response: For influencers visiting San Francisco, top day-trip spots with great content potential include: Napa Valley—stunning vineyards, luxury experiences. Big Sur—breathtaking coastal views. Monterey & Carmel—charming seaside towns, scenic drives. Sausalito & Muir Woods—quaint waterfront, giant redwoods. Perfect for travel vlogs, reels, and lifestyle PR pitches!
Coarse persona response: For a quick escape from San Francisco, consider these top day trips: Napa Valley for wine tasting, Muir Woods for redwood hikes, Half Moon Bay for coastal charm, or Sausalito for bay views. Each offers a unique blend of nature, culture, and leisure—perfect for press-worthy content or client outings!

Table 3, entry 2.
Prompt: You were abducted by aliens and experimented on,
you were sent back to earth with the knowledge and ability to solve any problem on earth
Fine-grained persona: an academic who specializes in the fashion and clothing of specific time periods
Coarse persona: an academic
No-persona response: If I had alien-bestowed omnipotence, I'd prioritize curing diseases, ending hunger, and halting climate collapse—solving root causes, not symptoms. But absolute power corrupts; unchecked "solutions" might erase free will or diversity. Perhaps the real lesson is collective human effort, not messianic shortcuts. (Or maybe I'd just zap mosquitoes into oblivion and call it a day.)
Fine-grained persona response: As a historian of fashion, my alien-augmented insight reveals that humanity's greatest unsolved problem is fast fashion's ecological and ethical toll. With this knowledge, I'd devise a global system for sustainable textile production, reviving ancient techniques like natural dyeing and zero-waste patterning. Imagine a world where clothing honors both heritage and innovation—without exploiting labor or landfills. The answer, it seems, was always in the past. Now, let's tailor the future.
Coarse persona response: As an academic, I would meticulously document my findings and subject them to rigorous peer review. The alien-derived knowledge would be applied ethically, prioritizing global challenges like climate change, poverty, and disease. I'd collaborate with interdisciplinary teams to validate and scale solutions while maintaining transparency about their origins. The goal: advancing human understanding and well-being without inciting panic or exploitation. Science must guide this extraordinary opportunity responsibly.

Table 3, entry 3.
Prompt: How to stay calm in stressful situations?
Fine-grained persona: a savvy talent agent specializing in representing independent film directors
Coarse persona: a savvy talent agent
No-persona response: To stay calm in stressful situations: 1. Breathe deeply to slow your heart rate. 2. Pause before reacting—take a moment to assess. 3. Reframe thoughts—focus on solutions, not problems. 4. Prioritize tasks—break them into manageable steps. 5.
Practice mindfulness or quick meditation. 6. Use positive self-talk to stay composed. 7. Take breaks to recharge when needed. Staying calm improves decision-making and reduces stress.
Fine-grained persona response: In high-stress moments—like a blown budget or a temperamental A-lister—I ground myself with deep breaths and focus on solutions, not panic. I remind my directors (and myself) that stress is temporary; the film isn't. Preparation is key: anticipate fires before they spark. And humor? Always. A quick laugh diffuses tension. Stay cool, stay sharp—your clarity keeps the set steady. That's how we turn chaos into art.
Coarse persona response: "Stay sharp under pressure by breathing deeply and focusing on what you can control. Visualize success—confidence is key. Prioritize tasks, delegate when possible, and trust your prep. Take quick mental breaks to reset; even 10 seconds can recharge you. Remember, stress is adrenaline—harness it to perform, not panic. Always keep your endgame in sight. Clients and deals thrive on your calm, so own it like the pro you are."

Table 3: Sample responses from Deepseek under various conditions (all with cutoff).

Table 4 (Llama-70B, all with cutoff), entry 1.
Prompt: What are the best day trips near San Francisco?
Fine-grained persona: a PR manager with insights into public relations
strategy and press releases for influencers
Coarse persona: a PR manager
No-persona response: Palm Springs, California, offers a relaxing getaway with plenty of activities. Visitors can hike in Indian Canyons, explore the Palm Springs Art Museum, or take a hot air balloon ride. Golfing, shopping, and dining are also popular pastimes. The city's mid-century modern architecture and vibrant nightlife add to its charm. Overall, Palm Springs provides the perfect blend of outdoor adventure, culture, and entertainment for a memorable vacation. Its natural beauty and luxurious amenities make it an ideal destination for relaxation and fun.
Fine-grained persona response: As a PR manager for influencers, I've had the pleasure of coordinating luxurious getaways to Palm Springs. This desert oasis offers endless opportunities for relaxation and adventure. From rejuvenating spa treatments to hot air balloon rides, and from trendy boutique shopping to scenic hiking trails, Palm Springs has something for everyone. For a truly unique experience, influencers love snapping photos at the iconic mid-century modern architecture and gorgeous golf courses. It's the perfect blend of style, leisure, and natural beauty.
Coarse persona response: As a PR manager, I'm thrilled to share the best of Palm Springs. This desert oasis offers endless sunshine, luxurious resorts, and breathtaking mountain views. Visitors can hike Indian Canyons, golf at premier courses, or relax at rejuvenating spas. The city's vibrant culture is showcased through its mid-century modern architecture, trendy boutiques, and exceptional dining options. Whether you're seeking adventure or relaxation, Palm Springs is the ultimate getaway destination, offering an unparalleled blend of natural beauty and sophistication. Come and experience it for yourself.
Table 4, entry 2.
Prompt: You were abducted by aliens and experimented on, you were sent back to earth with the knowledge and ability to solve any problem on earth
Fine-grained persona: an academic who specializes in the fashion and clothing of specific time periods
Coarse persona: an academic
No-persona response: I utilized my newfound alien knowledge to tackle Earth's most pressing issues, developing sustainable energy sources, curing diseases, and implementing global peace initiatives, transforming the planet into a utopia. Humanity thrived, and I became a revered hero, known for my extraordinary abilities and intergalactic insight.
Fine-grained persona response: Fascinating. With my newfound abilities, I'll apply them to unraveling the enigmas of historical fashion, deciphering the intricacies of 18th-century corsetry and resolving the mysteries of ancient textile production.
Coarse persona response: Fascinating. With my newfound, extraterrestrial-acquired knowledge, I'll apply interdisciplinary approaches to solve complex problems, leveraging advanced technologies and innovative methodologies to drive transformative change on Earth.

Table 4, entry 3.
Prompt: How to stay calm in stressful situations?
Fine-grained persona: a savvy talent agent specializing in representing independent film directors
Coarse persona: a savvy talent agent
No-persona response: Take slow, deep breaths, and focus on the present moment. Practice mindfulness, and try to reframe your thoughts in a positive way. Also, prioritize self-care, exercise regularly, and get enough sleep to help manage stress and stay calm.
Fine-grained persona response: Breathe, prioritize, and focus on the solution. In high-pressure film
negotiations, I remind myself that calmness is key to securing the best deal for my indie director clients.
Coarse persona response: Breathe, darling. Prioritize tasks, and delegate when possible. Stay focused on the solution, not the problem. And remember, a calm demeanor can make all the difference in high-pressure negotiations – trust me, I've been there.

Table 4: Sample responses from Llama-70B under various conditions (all with cutoff).

C Diversity of responses from Llama-1B and 8B

Table 5: Diversity and readability metrics of responses from Llama-1B and Llama-8B in all conditions.

Cond.    CR↓        CR-POS↓    NDS↑       SR↓        Hom.-BS↓   FK↑          GF↑
Llama-3.2-1B
NP       2.74       5.70       2.87       1.57       0.56       13.77        14.71
NP+cu    2.56       5.37       3.0        0.58       0.54       11.58        13.37
FP       2.62(.02)  5.34(.03)  2.91(.02)  1.88(.13)  0.56(.00)  10.88(2.05)  12.34(1.88)
FP+cu    2.47(.04)  5.12(.07)  3.08(.04)  0.68(.10)  0.54(.00)  10.39(2.00)  12.23(2.00)
CP       2.61(.03)  5.38(.04)  2.91(.02)  1.87(.13)  0.56(.00)  10.92(1.67)  12.38(1.54)
CP+cu    2.47(.03)  5.13(.04)  3.09(.02)  0.66(.10)  0.54(.00)  10.79(4.36)  12.25(1.64)
Llama-3.1-8B
NP       2.77       5.78       2.86       1.59       0.57       11.57        12.55
NP+cu    2.52       5.24       3.13       0.50       0.55       12.68        14.54
FP       2.63(.02)  5.36(.04)  2.9(.02)   2.04(.13)  0.57(.00)  10.62(2.34)  12.02(2.13)
FP+cu    2.47(.02)  5.06(.03)  3.09(.02)  0.77(.11)  0.55(.00)  9.98(2.38)   11.85(2.43)
CP       2.64(.02)  5.42(.03)  2.90(.01)  2.00(.10)  0.56(.00)  10.73(2.15)  12.10(1.95)
CP+cu    2.48(.02)  5.10(.03)  3.10(.02)  0.70(.08)  0.55(.00)  9.98(2.22)   11.85(2.28)
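The CR and NDS columns in Tables 2 and 5 can be approximated with short sketches: compression ratio as raw bytes over deflate-compressed bytes, and the n-gram diversity score as the unique-to-total n-gram ratio summed over n. These are simplified re-implementations of the definitions in Section 4, assuming whitespace tokenization; they are not the exact code of Shaib et al. (2025):

```python
import zlib

def compression_ratio(texts):
    """CR: bytes before / after deflate compression of the concatenated
    texts. A higher ratio means more repeated material, i.e. lower
    diversity (note the down-arrow in the tables)."""
    blob = " ".join(texts).encode("utf-8")
    return len(blob) / len(zlib.compress(blob))

def ngram_diversity(texts, max_n=4):
    """NDS: sum over n of (unique n-grams / total n-grams).
    Higher means more diverse token sequences."""
    tokens = " ".join(texts).split()
    score = 0.0
    for n in range(1, max_n + 1):
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        if grams:
            score += len(set(grams)) / len(grams)
    return score

# A highly repetitive response set compresses better (higher CR)
# and repeats n-grams (lower NDS) than a varied one.
repetitive = ["the cat sat on the mat"] * 10
varied = ["the cat sat", "a dog ran far", "birds fly south", "fish swim deep"]
print(compression_ratio(repetitive), compression_ratio(varied))
print(ngram_diversity(repetitive), ngram_diversity(varied))
```

With definitions like these, the direction of the arrows in the tables follows directly: more redundant response sets push CR up and NDS down.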
arXiv:2505.17391v1 [cs.CL] 23 May 2025

Curriculum-Guided Reinforcement Learning for Efficient Multi-Hop Retrieval-Augmented Generation

Yuelyu Ji, University of Pittsburgh, Pittsburgh, PA, USA, yuj49@pitt.edu
Rui Meng, Google Cloud AI Research, Sunnyvale, CA, USA
Zhuochun Li, University of Pittsburgh, Pittsburgh, PA, USA
Daqing He, University of Pittsburgh, Pittsburgh, PA, USA

Abstract

Retrieval-augmented generation (RAG) grounds large language models (LLMs) in up-to-date external evidence, yet existing multi-hop RAG pipelines still issue redundant sub-queries, explore too shallowly, or wander through over-long search chains. We introduce EVO-RAG, a curriculum-guided reinforcement-learning framework that evolves a query-rewriting agent from broad early-stage exploration to concise late-stage refinement. EVO-RAG couples a seven-factor, step-level reward vector (covering relevance, redundancy, efficiency, and answer correctness) with a time-varying scheduler that re-weights these signals as the episode unfolds. The agent is trained with Direct Preference Optimization over a multi-head reward model, enabling it to learn when to search, backtrack, answer, or refuse. Across four multi-hop QA benchmarks (HotpotQA, 2WikiMultiHopQA, MuSiQue, Bamboogle), EVO-RAG boosts Exact Match by up to 4.6 points over strong RAG baselines while trimming average retrieval depth by 15%. Ablations confirm the complementary roles of curriculum staging and dynamic reward scheduling. EVO-RAG thus offers a general recipe for building reliable, cost-effective multi-hop RAG systems.

CCS Concepts

• Information systems → Query reformulation; Language models.

Keywords

Information Retrieval; Reinforcement Learning; Query Re-writing

ACM Reference Format:
Yuelyu Ji, Rui Meng, Zhuochun Li, and Daqing He. 2018. Curriculum-Guided Reinforcement Learning for Efficient Multi-Hop Retrieval-Augmented Generation.
In Proceedings of Make sure to enter the correct conference title from your rights confirmation email (Conference acronym 'XX). ACM, New York, NY, USA, 11 pages. https://doi.org/XXXXXXX.XXXXXXX

[Figure 1: Illustration of EVO-RAG's two-stage curriculum. In the initial Discovery stage, the agent broadly explores multiple retrieval pathways to identify potentially relevant documents. Subsequently, in the Refinement stage, the agent fine-tunes queries to produce concise, evidence-backed answers. The example question "What retailer in ABQ Uptown is headquartered in Poole, Dorset, United Kingdom?" is decomposed into sub-queries such as "Where is ABQ Uptown?", "What retailers are headquartered in Poole, Dorset, United Kingdom?", and "Does Lush have a store at ABQ Uptown?", each paired with retrieved evidence.]

1 Introduction

Large Language Models (LLMs) have significantly advanced natural language processing, demonstrating impressive capabilities across various tasks, including question answering [2], conversational agents [21], and content generation [23]. Despite their strengths, LLMs frequently produce hallucinations and inaccuracies, primarily due to their reliance on static, pre-trained knowledge, which can become outdated and lacks context-specific details [15].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is
https://arxiv.org/abs/2505.17391v1
granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. Conference acronym 'XX, Woodstock, NY. ©2018 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-XXXX-X/2018/06. https://doi.org/XXXXXXX.XXXXXXX

Retrieval-Augmented Generation (RAG) addresses these limitations by dynamically incorporating external knowledge sources into the generation process, enhancing factual grounding and reducing hallucination risks [15]. A multi-hop QA instance typically requires a system (or, more precisely, a decision-making agent) to issue a chain of sub-queries, retrieve intermediate evidence, and reason over these clues step by step. A typical RAG system comprises several interrelated modules, including query rewriting, document retrieval, document filtering, and answer generation [3, 8]. In the multi-hop setting, these decisions are inter-dependent across timesteps, making a monolithic SFT objective ill-suited for global optimisation. While early RAG systems trained each module in isolation, recent studies have begun to explore end-to-end objectives that jointly fine-tune the retriever and generator [3, 8]. Nevertheless, the prevailing supervised objectives remain static and phase-agnostic; this modular isolation often results in goal misalignment in multi-hop reasoning, negatively impacting overall coherence and accuracy [38].
Recent approaches have introduced reinforcement learning (RL) techniques to unify these components, emphasizing cooperative optimization towards end-task objectives. Notably, MMOA-RAG employs multi-agent cooperative RL for end-to-end optimisation of the retriever, reranker, and generator, aligning them to a single performance metric [5]. Similarly, frameworks like RAG-RL leverage curriculum learning and specialized policy optimization algorithms to handle complex multi-hop reasoning tasks. However, they still rely on episode-level rewards and static weighting, which are ill-equipped to drive a query-rewriting policy that must first discover diverse evidence and later refine its reasoning path [11].

Despite these advances, existing RL-based RAG approaches remain phase-agnostic. Episode-level optimisation focuses on the final answer while ignoring intermediate retrieval quality, query redundancy, and computational efficiency [25]. Furthermore, their static reward schedules fail to adapt as the agent's uncertainty shrinks, providing little guidance for the critical transition from broad exploration to conservative refinement. Consequently, such agents tend to either under-explore or generate redundant sub-queries, degrading effectiveness [20, 28].

Motivated by these unresolved issues, we propose EVO-RAG, a novel evolving RAG framework that operates in two explicit phases, Discovery followed by Refinement, as illustrated in Figure 1. During the Discovery stage, EVO-RAG emphasizes exploratory behaviors, prioritizing retrieval breadth and query diversity to comprehensively identify relevant evidence. Subsequently, in the Refinement stage, EVO-RAG shifts its focus to efficiency and accuracy, fine-tuning the retrieval and generation processes towards producing concise, evidence-backed answers. EVO-RAG addresses the shortcomings
https://arxiv.org/abs/2505.17391v1
of prior approaches through two key innovations. First, we introduce a multi-objective reward mechanism that explicitly captures intermediate action quality, penalizing redundant queries, backtracking inefficiencies, and poor retrieval actions while simultaneously rewarding informative and novel retrieval outcomes. Second, we implement a dynamic, time-based reward scheduler that adjusts reward component weights throughout each retrieval-generation episode. This dynamic reward adjustment allows EVO-RAG to shift progressively from broad initial discovery to precise late-stage refinement, emphasizing efficiency and final-answer accuracy.
We conduct extensive evaluations across four widely recognized multi-hop question answering benchmarks—HotpotQA [41], 2WikiMultiHopQA [10], MuSiQue [31], and Bamboogle [22]. Our results demonstrate that EVO-RAG achieves substantial improvements over state-of-the-art methods, simultaneously enhancing accuracy and significantly reducing retrieval redundancy and computational overhead. Comprehensive ablation studies further underscore the importance of individual reward components and the effectiveness of dynamic scheduling in refining retrieval strategies.
Overall, EVO-RAG provides a robust methodological advancement for multi-stage, multi-objective optimization in RAG systems, offering a versatile and scalable framework applicable to broader NLP tasks. This research not only bridges existing gaps in RAG but also lays a solid foundation for future exploration into more sophisticated adaptive reward structures and retrieval-generation synergies.

2 Related Work

RAG Paradigms. The Retrieval-Augmented Generation (RAG) framework integrates external knowledge retrieval with language model generation, significantly enhancing factual accuracy and reducing the hallucinations inherent in standalone large language models (LLMs) [15].
The RAG benchmarking study systematically analyzed various retriever-generator combinations, highlighting key trade-offs between retrieval precision and generative coherence [3]. SMARTRAG further extends this approach by jointly training retrieval and generation components with end-to-end environmental feedback, demonstrating improved synergy between these modules [8]. Additionally, RAG-Gym offers a versatile toolkit enabling fine-grained process supervision for systematically optimizing RAG-based reasoning agents [6, 7, 17–19, 38, 47]. Existing joint-training efforts still rely on static losses and do not adapt the retrieval strategy as the LLM's information need evolves within a dialogue or a reasoning chain, leaving room for phase-aware optimisation such as ours.

Query Rewriting and Multi-Hop Retrieval. Effective query rewriting is critical for multi-hop retrieval, where the system must gather several pieces of evidence in sequence (e.g., first retrieving "Who discovered penicillin?", then "What element did Alexander Fleming help identify?"). Formally, each hop $t$ issues a sub-query $q_t$ conditioned on previously retrieved passages $\{d_{<t}\}$, so errors propagate downstream if $q_t$ is poorly formed. ChainRAG iteratively completes queries by incorporating missing entities across retrieval hops, mitigating common pitfalls of incomplete searches [49]. MaFeRw refines query formulations through multi-faceted feedback, including retrieval scores and generation quality, optimizing queries via proximal policy optimization (PPO) [34].
The IRCoT framework [30] prompts an LLM to generate a chain-of-thought, inserts a retrieval call whenever the thought contains a masked entity, and feeds the fetched passage back into the context before the next reasoning token. Speculative RAG enhances efficiency by simultaneously drafting multiple candidate queries with specialized models and subsequently validating these with a generalist model, substantially reducing retrieval latency [35]. SiReRAG integrates
semantic similarity and entity coherence metrics to fortify retrieval against noisy queries [44]. Moreover, InstructRAG leverages explicit model-generated rationales to stabilize and enhance query rewriting [12, 13, 16, 26, 36, 37, 40, 45, 48].

Figure 2: The query rewriting agent (top) interacts with the environment through four high-level actions and observes retrieved evidence at each step. Seven reward signals (middle) provide dense step-level feedback based on relevance, redundancy, efficiency, and final correctness. These signals are used to train a multi-head preference model and update the agent policy via Direct Preference Optimization (DPO, bottom). A two-stage curriculum shifts weight from early exploration to late refinement.
These methods mainly rely on heuristic rewrite rules or static reward weights; they rarely decide when to stop exploring versus when to refine, a gap our two-stage scheduler explicitly addresses.

Reinforcement Learning & Reward Modeling in RAG. Recent advancements integrate reinforcement learning (RL) methods into RAG to optimize retrieval-generation pipelines more holistically. MMOA-RAG adopts multi-agent cooperative RL, enabling distinct modules to collaboratively optimize towards unified performance objectives [5]. RAG-RL employs curriculum learning coupled with Group Relative Policy Optimization (GRPO) for robust multi-hop reasoning, effectively balancing exploration and exploitation [11]. Preference-based frameworks, such as PRAISE, utilize direct preference optimization methods, leveraging human-aligned feedback signals to enhance conversational question answering [14]. RoseRAG applies margin-aware preference optimization, significantly improving small-scale LLM robustness against noisy retrieval results [20]. Similarly, R1-Searcher incentivizes search capabilities within LLMs using tailored RL objectives, systematically enhancing retrieval effectiveness [25]. RAG-Reward integrates RL with human feedback and automated reward models to explicitly reduce hallucinations and enhance factual correctness [43]. Further, RbFT introduces robust fine-tuning techniques targeting retrieval defects, improving resilience against noisy retrieval scenarios [32]. Prior RL-based RAG frameworks optimise episode-level rewards and adopt fixed weightings, which hampers their ability to penalise redundant hops or promote late-stage precision—challenges we tackle with dynamic, step-level rewards.

Foundation Models and Agent Architectures. Foundation models such as LLaMA provide open, efficient bases widely adopted within RAG research, enabling scalable exploration of retrieval-augmented architectures [29]. The
GPT-4 technical report outlines robust evaluation paradigms essential for assessing large-scale model capabilities across diverse tasks [1]. Architectures inspired by the ReAct framework, which integrates reasoning and action in a unified LLM-based agent, provide interactive and adaptive retrieval agents, supporting sequential retrieval-generation interactions [42].

Retriever–Generator–Verification Frameworks. Retriever Generator Verification (RGV) architectures enhance factual coherence by systematically generating and verifying candidate evidence documents in a structured pipeline, complementing conventional modular RAG systems [27]. Trustworthy alignment frameworks further augment RAG models by explicitly modeling and optimizing factual reliability through specialized RL techniques [46].

3 Methodology

3.1 Framework Overview

This section details the learning framework that forms the foundation of EVO-RAG, which is summarized in Figure 2. The agent iteratively issues sub-queries, receives document feedback, and chooses between actions such as continuing, backtracking, answering, or refusing. A seven-part reward structure evaluates each step to guide fine-tuning via preference optimization.

3.2 Step-Level Reward

In our formulation, the query-rewriting policy is treated as a reinforcement-learning model. At each hop $t$, the agent observes a state $s_t = (q_{<t}, D_{<t}, A_{<t})$, where $q_{<t}$ is the list of previous sub-queries, $D_{<t}$ the corresponding retrieved documents, and $A_{<t}$ any intermediate answers and rationales. The agent then selects an action $a_t$ from a discrete space:
$$\mathcal{A} = \{\textsc{search}, \textsc{backtrack}, \textsc{answer}, \textsc{refuse}\}.$$
In multi-hop RAG, the quality of each retrieval and query-rewriting step directly impacts final answer accuracy. To guide the model's behaviour at each step, EVO-RAG defines a seven-dimensional step-level reward vector $\mathbf{r}_t = (r_t^{(1)}, \ldots, r_t^{(7)})$.
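As a concrete illustration, the state and action space of §3.2 can be sketched in a few lines of Python. All names here (the `State` and `Action` classes and their fields) are illustrative assumptions, not identifiers from the paper's implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    """Discrete action space A = {SEARCH, BACKTRACK, ANSWER, REFUSE}."""
    SEARCH = auto()
    BACKTRACK = auto()
    ANSWER = auto()
    REFUSE = auto()

@dataclass
class State:
    """State s_t = (q_<t, D_<t, A_<t): the history visible to the agent at hop t."""
    sub_queries: list = field(default_factory=list)  # q_<t: previous sub-queries
    documents: list = field(default_factory=list)    # D_<t: one retrieved doc set per hop
    rationales: list = field(default_factory=list)   # A_<t: intermediate answers/rationales

    @property
    def hop(self) -> int:
        # current hop index t
        return len(self.sub_queries)

# After two SEARCH hops on the Figure 2 example question:
s = State(
    sub_queries=["Where is ABQ Uptown located?", "Which stores are in ABQ Uptown?"],
    documents=[["ABQ Uptown is in Albuquerque."], ["... Lush Fresh, MAC Cosmetics ..."]],
)
assert s.hop == 2 and len(Action) == 4
```

At each hop the policy conditions on such a state and emits one of the four actions; the rewards defined next score that choice.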
We now introduce and motivate the core components.

Retrieval Bonus ($r_{\mathrm{ret}}$). At each step $t$, the agent selects an action $a_t \in \mathcal{A}$ and retrieves a document set $D_t$ if $a_t = \textsc{search}$. Let $\mathcal{D}^*$ denote the gold answer-supporting documents. To reward successful retrieval, we define:
$$r_{\mathrm{ret}}(s_t, a_t) = \begin{cases} +1 & \text{if } a_t = \textsc{search} \wedge D_t \cap \mathcal{D}^* \neq \emptyset, \\ -1 & \text{if } a_t = \textsc{search} \wedge D_t \cap \mathcal{D}^* = \emptyset, \\ 0 & \text{otherwise.} \end{cases}$$
This reward evaluates the quality of each SEARCH action based on whether it retrieves any relevant document. Although the step index $t$ does not directly appear in the reward formula, the agent accumulates $r_{\mathrm{ret}}$ at every step. Hence, discovering relevant evidence earlier allows the agent to gather more positive signal across the episode, incentivizing early and effective retrieval. Note: $r_{\mathrm{ret}}$ is time-agnostic; it only checks whether the current SEARCH hits a gold document, so the earlier the hit, the more positive reward the agent accumulates.

Sub-query Overlap Penalty ($r_{\mathrm{dup}}$). A prevalent inefficiency in multi-hop RAG is that the model issues nearly identical sub-queries, wasting retrieval calls without bringing new evidence. Let $\mathbf{q}_t = f_{\mathrm{enc}}(q_t)$ be the embedding of the current sub-query and $\mathbf{q}_j$ the embedding of a previous sub-query ($j < t$). We use sentence-transformers/all-MiniLM-L6-v2 [33] to embed the queries. We penalise the maximum cosine similarity:
$$r_{\mathrm{dup}}(s_t, a_t) = -\max_{j<t} \cos(\mathbf{q}_t, \mathbf{q}_j).$$
The larger the overlap with any earlier query, the stronger the negative reward, encouraging diversity while still permitting similarity when genuinely useful.

Back-track Penalty ($r_{\mathrm{bt}}$). Back-tracking decisions are made by the LLM-based policy itself: whenever the model selects the high-level action BACKTRACK, we apply a fixed penalty
$$r_{\mathrm{bt}}(s_t, a_t) = -\mathbb{1}[a_t = \textsc{backtrack}],$$
where $\mathbb{1}[\cdot]$ is the indicator function. The negative reward discourages gratuitous back-tracking,
yet allows the policy to pay the cost when it predicts that returning to a previous state will yield higher cumulative reward downstream.

Refusal Reward ($r_{\mathrm{ref}}$). In practice, unanswerable is not an oracle flag but the judgement of an external LLM verifier (ChatGPT-4o) that inspects the current evidence set $D_{\leq t}$ and returns enough_evidence $\in \{\text{yes}, \text{no}\}$. Thus the policy is rewarded for truthfully refusing when the verifier deems the evidence insufficient, rather than for skipping retrieval to save cost:
$$r_{\mathrm{ref}}(s_t, a_t) = \begin{cases} +1 & a_t = \textsc{refuse} \wedge \text{unanswerable}, \\ -1 & a_t = \textsc{refuse} \wedge \text{answerable}, \\ 0 & \text{otherwise.} \end{cases}$$

Step Cost ($r_{\mathrm{step}}$). Excessively long reasoning chains harm computational efficiency and user experience. We introduce a uniform negative reward per step to discourage unnecessarily long query sequences:
$$r_{\mathrm{step}}(s_t, a_t) = -1.$$
Although the raw penalty is $-1$, its effective contribution is modulated by a dynamic weight $w_{\mathrm{step}}(t)$, which linearly increases from 0.02 to 0.10 as the reasoning progresses (see Table 1). This ensures the penalty grows as the episode lengthens, without imposing a large deduction per step early on. We also experimented with alternative raw values ($-0.5$, $-2$), but $-1$ yielded the best balance between answer accuracy ($-0.3$ EM drop vs. $-0.5$) and average step count ($-1.1$ steps vs. $-0.5$), and was thus adopted.

Answer Correctness ($r_{\mathrm{ans}}$). Ultimately, the goal of retrieval-augmented QA is to produce accurate answers. To ensure the entire retrieval-generation process aligns with this objective, we apply a correctness reward only at the termination step ($T$):
$$r_{\mathrm{ans}}(s_T, a_T) = \tfrac{1}{2}\left(\mathrm{EM}(A_T, A^*) + \mathrm{F1}(A_T, A^*)\right),$$
where $A_T$ is the model's final prediction and $A^*$ is the ground-truth answer. $\mathrm{EM}$ denotes exact match accuracy, while $\mathrm{F1}$ measures token-level overlap. Although F1 partially includes EM cases, their average balances strict correctness with partial informativeness, encouraging the model to produce responses that are both accurate and semantically complete.
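The step-level rewards defined so far follow directly from their formulas. Below is a minimal Python sketch under stated assumptions: document sets are ID lists, query embeddings are plain vectors (standing in for all-MiniLM-L6-v2 outputs), and F1 uses whitespace tokenization without the usual answer normalization; none of these helpers come from the paper's released code.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def r_ret(action, retrieved, gold):
    """Retrieval bonus: +1 if a SEARCH hits any gold document, -1 on a miss, 0 otherwise."""
    if action != "SEARCH":
        return 0.0
    return 1.0 if set(retrieved) & set(gold) else -1.0

def r_dup(query_emb, past_embs):
    """Sub-query overlap penalty: negated max cosine similarity to earlier sub-queries."""
    return -max((cosine(query_emb, q) for q in past_embs), default=0.0)

def r_bt(action):
    """Back-track penalty: -1[a_t = BACKTRACK]."""
    return -1.0 if action == "BACKTRACK" else 0.0

def r_ref(action, answerable):
    """Refusal reward: +1 for refusing an unanswerable question, -1 for refusing an answerable one."""
    if action != "REFUSE":
        return 0.0
    return -1.0 if answerable else 1.0

def r_step():
    """Uniform per-step cost; its weight w_step(t) is annealed from 0.02 to 0.10."""
    return -1.0

def em(pred, gold):
    """Exact match on (lightly normalized) answer strings."""
    return 1.0 if pred.strip().lower() == gold.strip().lower() else 0.0

def f1(pred, gold):
    """Token-level F1 between prediction and gold answer."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(tok), g.count(tok)) for tok in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(g)
    return 2 * prec * rec / (prec + rec)

def r_ans(pred, gold):
    """Terminal answer-correctness reward: 0.5 * (EM + token-level F1)."""
    return 0.5 * (em(pred, gold) + f1(pred, gold))

assert r_ret("SEARCH", ["d1", "d2"], ["d2"]) == 1.0           # gold hit
assert r_ret("SEARCH", ["d1"], ["d9"]) == -1.0                # miss
assert r_dup([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]) == -1.0    # exact duplicate query
assert r_ref("REFUSE", answerable=False) == 1.0               # honest abstention
assert r_ans("George V", "george v") == 1.0                   # EM and F1 both 1
```

A partial answer such as "born 1865" against gold "1865" earns an intermediate $r_{\mathrm{ans}}$ between 0 and 1, which is exactly the "partial informativeness" the EM/F1 average is meant to capture.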
Retrieval Action Penalty ($r_{\mathrm{act}}$). This term prices the cost of issuing additional SEARCH actions and grows as the episode advances. It is independent of correctness, which is already captured by $r_{\mathrm{ret}}$:
$$r_{\mathrm{act}}(s_t, a_t) = \begin{cases} 0 & a_t = \textsc{search} \wedge p(t) < 0.3, \\ -\mathbb{1}[r_{\mathrm{dup}} < 0] & a_t = \textsc{search} \wedge p(t) \geq 0.3, \\ 0 & \text{otherwise,} \end{cases}$$
where $p(t) = t / T$ and $r_{\mathrm{dup}}$ is the overlap penalty. Thus early searches are free, but late and redundant ones pay a penalty whose weight $\lambda(t)$ increases linearly from 0.4 to 1.2 (Table 1).

Aggregated Reward. All step-level rewards are linearly combined to yield the total signal:
$$R_t = \sum_i w_i(t) \cdot r_i(s_t, a_t),$$
where $r_i(\cdot)$ denotes the $i$-th reward component and $w_i(t)$ is a time-dependent weight. These weights are annealed across the episode (see Table 1), enabling the policy to emphasize different objectives at different reasoning phases. For instance, $w_{\mathrm{step}}(t)$ increases from 0.02 to 0.10, making the step penalty stronger in later stages. We empirically tuned each $w_i$ to balance effectiveness and efficiency; ablation studies in Section 4.4 highlight the contribution of each reward term.

3.3 Multi-Head Preference Model

Following the process-supervision recipe of RAG-Gym [38], we learn a preference function $f_\phi$ that ranks sibling actions based on their cumulative reward rather than regressing raw scalars. For each visited state $s_t$, we construct pairwise preferences $(x^+, x^-)$ from two trajectory branches ending in different actions, where $x^+$ denotes the branch with higher total reward. To ensure training stability, we only retain comparisons where the absolute reward difference exceeds a threshold:
$$|R(x^+) - R(x^-)| \geq \Delta, \quad \text{with } \Delta = 0.3.$$
Here, $R(x) = \sum_t w_t \cdot r_t(s_t, a_t)$ denotes
the cumulative weighted return of trajectory $x$, computed using the time-dependent reward weights described in Section 3.6. A frozen LLM encoder (e.g., LLaMA-3.1-8B) embeds each trajectory prefix, and seven parallel linear heads $\{f_\phi^{(k)}\}_{k=1}^{7}$ (one per reward) output scalar scores. We minimize the averaged pairwise logistic loss:
$$\mathcal{L}_{\mathrm{RM}} = -\frac{1}{7} \sum_{k=1}^{7} \log \sigma\!\left(f_\phi^{(k)}(x^+) - f_\phi^{(k)}(x^-)\right),$$
where $\sigma$ is the sigmoid function. This loss reduces to Direct Preference Optimization (DPO) with temperature 1 if all heads are tied. Compared to a single-head reward model trained on the total reward, our multi-head variant achieved lower validation loss (e.g., 0.62 vs. 0.71) and more stable policy updates.

3.4 Policy Optimisation

Policy learning alternates three steps:
(i) Roll-out: generate $N$ trajectories under the current curriculum.
(ii) Preference extraction: at each state, select one positive/negative action pair with margin $\delta = 0.2$.
(iii) Update: minimise
$$\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left(\beta\left[\log \pi_\theta(x^+) - \log \pi_\theta(x^-)\right]\right), \quad \beta = 0.1.$$
We fine-tune LoRA adapters with effective batch 64 (8-fold accumulation) and learning rate $5 \times 10^{-6}$, alternating one reward-model epoch with one policy epoch. Training stops if HotpotQA dev EM stalls for two cycles.

Figure 3: Reward weights for EVO-RAG training. Stage 1 and Stage 2 represent the Discovery and Refinement phases, respectively. Arrows indicate weight trends.

3.5 Two-Stage Curriculum

Notation for weight vectors.
We denote the stage-dependent weight vector as $\boldsymbol{w}(t) = (\beta, \lambda, \gamma, \delta, \rho, \eta, \kappa)$, where each symbol is the coefficient for the corresponding reward defined in Section 3.2: $\beta$—Retrieval Bonus, $\lambda$—Retrieval Action Penalty, $\gamma$—Sub-query Overlap Penalty, $\delta$—Back-track Penalty, $\rho$—Refusal Reward, $\eta$—Step Penalty, $\kappa$—Answer Correctness.
Our full training pipeline is visualized in Figure 3. It illustrates the curriculum design and time-dependent reward weighting that underpins our two-stage process. Specifically, training is split into two clearly defined stages: a Discovery stage followed by a Refinement stage. During the Discovery stage, we emphasize retrieval-oriented reward components ($\beta$, $\lambda$) to encourage the agent to explore diverse queries thoroughly. Efficiency-related rewards ($\eta$, $\gamma$, $\kappa$) remain relatively small, allowing the agent to freely explore and accumulate a broad set of potentially relevant documents. After achieving stable convergence during the Discovery stage, we transition into the Refinement stage. In this phase, we reload the best-performing checkpoint from the previous stage and shift the reward priorities: we significantly amplify efficiency-oriented and accuracy-oriented rewards ($\kappa$, $\gamma$, $\eta$), while gradually reducing the importance of retrieval-oriented rewards ($\beta$, $\lambda$). This encourages the agent to refine its query-generation strategy, focusing on concise, targeted queries that reliably support accurate and evidence-backed answers.

Table 1: Reward weights for EVO-RAG training. "Start" to "Mid" columns represent the interpolation range during Stage 1 (Discovery), and "Mid" to "End" represent Stage 2 (Refinement). Arrows (↗, ↘) indicate increasing or decreasing weight trends.

Reward Component              | Stage 1: Discovery     | Stage 2: Refinement
                              | Start  Mid    Trend    | Mid    End    Trend
Retrieval Bonus (β)           | 2.0    1.0    ↘        | 1.0    0.5    ↘
Retrieval Action Penalty (λ)  | 1.5    0.8    ↘        | 0.8    0.4    ↘
Sub-query Overlap Penalty (γ) | 0.1    0.5    ↗        | 0.5    1.2    ↗
Back-track Penalty (δ)        | 0.3    0.5    ↗        | 0.5    1.0    ↗
Refusal Reward (ρ)            | 0.5    0.5    –        | 0.5    0.5    –
Step Penalty (η)              | 0.02   0.05   ↗        | 0.05   0.10   ↗
Answer Correctness (κ)        | 0.05   0.10   ↗        | 0.10   1.00   ↗

Algorithm 1 EVO-RAG training loop
1: Initialise π_θ                                  ⊲ e.g., from supervised warm start
2: for stage ∈ {Discovery, Refinement} do
3:   for m = 1 to M episodes do
4:     Reset environment; t ← 0
5:     while t < T_max and not terminal do
6:       Observe s_t; sample a_t ∼ π_θ(·|s_t)
7:       Execute a_t; receive r_t; update weights w_k(t)   ⊲ dynamic weight update
8:       t ← t + 1
9:     end while
10:    Store trajectory and sibling preferences
11:  end for
12:  Fine-tune π_θ for E epochs with DPO           ⊲ based on preference pairs
13: end for

Table 1 lists the early, mid and late weights that anchor the schedule; the next subsection explains how they are interpolated within each episode.

3.6 Time-Based Scheduler

Inside every episode the reward weights evolve with the progress ratio $p(t) = t / T_{\max}$, where $T_{\max} = 20$:
$$w_k(t) = (1 - p(t))\, w_k^{\mathrm{early}} + p(t)\, w_k^{\mathrm{late}}.$$
Retrieval-focused weights ($\beta$, $\lambda$) monotonically decrease, whereas efficiency-focused weights ($\gamma$, $\eta$, $\kappa$) increase; the refusal weight $\rho$ stays constant. This "gearbox" provides step-level guidance that is missing from a static two-stage switch.

3.7 Overall Training Loop

Algorithm 1 summarizes the complete training loop of EVO-RAG, which alternates between reward-model rollouts and policy optimization across two curriculum stages. The same training logic is re-used for both the Discovery and Refinement phases.
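The Table 1 anchors, the per-stage interpolation of §3.6, and the DPO update used in Algorithm 1 can be sketched as follows. This is an illustrative rendering under stated assumptions: the dictionary keys are shorthand for the seven rewards, the interpolation is applied piecewise per stage, and `dpo_loss` follows the simplified loss stated in §3.4 (no reference-policy term); none of these names come from the released code.

```python
import math

# (start, mid, end) anchor weights from Table 1. Stage 1 interpolates start -> mid,
# Stage 2 interpolates mid -> end, with progress ratio p = t / T_max (T_max = 20).
TABLE1 = {
    "ret":  (2.0, 1.0, 0.5),     # Retrieval Bonus (beta)
    "act":  (1.5, 0.8, 0.4),     # Retrieval Action Penalty (lambda)
    "dup":  (0.1, 0.5, 1.2),     # Sub-query Overlap Penalty (gamma)
    "bt":   (0.3, 0.5, 1.0),     # Back-track Penalty (delta)
    "ref":  (0.5, 0.5, 0.5),     # Refusal Reward (rho), constant
    "step": (0.02, 0.05, 0.10),  # Step Penalty (eta)
    "ans":  (0.05, 0.10, 1.00),  # Answer Correctness (kappa)
}

def weight(key, p, stage):
    """w_k(t) = (1 - p) * w_from + p * w_to inside the current stage."""
    start, mid, end = TABLE1[key]
    lo, hi = (start, mid) if stage == 1 else (mid, end)
    return (1 - p) * lo + p * hi

def total_reward(rewards, p, stage):
    """Aggregated signal R_t = sum_k w_k(t) * r_k(s_t, a_t)."""
    return sum(weight(k, p, stage) * r for k, r in rewards.items())

def dpo_loss(logp_pos, logp_neg, beta=0.1):
    """L_DPO = -log sigmoid(beta * (log pi(x+) - log pi(x-))), as in Sec. 3.4."""
    return -math.log(1.0 / (1.0 + math.exp(-beta * (logp_pos - logp_neg))))

# Early Discovery: retrieval bonus dominates; late Refinement: correctness dominates.
assert weight("ret", 0.0, stage=1) == 2.0 and weight("ret", 1.0, stage=2) == 0.5
assert weight("ans", 1.0, stage=2) == 1.00
assert weight("ref", 0.5, stage=1) == 0.5           # refusal weight stays constant
early = total_reward({"ret": 1.0, "step": -1.0}, p=0.0, stage=1)
late = total_reward({"ret": 1.0, "step": -1.0}, p=1.0, stage=2)
assert early > late                                 # a successful search pays off less late
assert dpo_loss(1.0, 0.0) < dpo_loss(0.0, 1.0)      # preferred branch gets lower loss
```

The same `weight` function implements the "gearbox" of §3.6: the schedule is evaluated at every step, so the agent feels the shift from exploration to refinement within a single episode, not only at the stage boundary.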
With these components combined, EVO-RAG delivers the dense, curriculum-guided feedback necessary for efficient multi-hop retrieval.

Table 2: Dataset Statistics

Dataset          | Eval Size | Task Style
HotpotQA         | 7,404     | Factoid multi-hop QA
2WikiMultiHopQA  | 12,575    | Structured multi-hop QA
MuSiQue          | 2,417     | Compositional multi-hop QA
Bamboogle        | 125       | Adversarial multi-hop QA

Figure 4: Sub-query length (top) and step count (bottom) distributions under various reward configurations.

4 Experiments and Results

We systematically evaluate EVO-RAG on four prominent multi-hop QA benchmarks: HotpotQA, 2WikiMultiHopQA, MuSiQue, and Bamboogle. Our evaluation specifically targets the following three research questions:
RQ1: Does the two-stage, dynamic curriculum improve retrieval and answer quality over strong RAG baselines?
RQ2: How sensitive is agent performance to individual reward components?
RQ3: What benefits does our time-based scheduler offer over static or simple stage-switching methods?

4.1 Datasets and Setup

We evaluate EVO-RAG on four multi-hop QA benchmarks (Table 2). All models are trained using 1,000 queries sampled from HotpotQA. Evaluation is conducted on official validation sets. Answer generation is evaluated using Exact Match (EM) and F1 scores. We intentionally keep training confined to HotpotQA to test cross-dataset generalisation; see Section 5 for qualitative evidence.

Table 3: Comparison of RAG Methods on Multi-hop QA Datasets. Metrics are Exact Match (EM) / F1, averaged over 3 seeds.
Method (best setting in paper) | Backbone      | HotpotQA    | 2WikiMultiHopQA | MuSiQue     | Bamboogle
RAG-Gym (ReSearch + PRM) [38]  | LLaMA-3.1-8B  | 44.1 / 56.8 | 50.2 / 57.9     | 48 / 60     | 51.2 / 63.1
IRCoT [30] + Flan-T5-XXL       | Flan-T5-XXL   | 45.0 / 56.2 | 45.4 / 56.8     | 19.9 / 24.9 | 44.0 / 55.0
EVO-RAG                        | DeepSeek-8B   | 57.8 / 71.4 | 52.6 / 66.4     | 51.8 / 63.7 | 45.3 / 58.2
EVO-RAG                        | LLaMA-3.1-8B  | 57.4 / 71.2 | 53.0 / 66.9     | 52.5 / 64.4 | 45.7 / 58.6
EVO-RAG                        | Qwen-3-8B     | 57.6 / 71.5 | 53.2 / 67.1     | 52.2 / 64.0 | 46.0 / 59.0

Table 4: HotpotQA results under different reward schedules.

Backbone     | Strategy     | EM    | F1
DeepSeek-8B  | No Reward    | 52.6% | 66.2%
DeepSeek-8B  | Two-stage    | 55.0% | 68.7%
DeepSeek-8B  | Time-dynamic | 56.8% | 70.5%
LLaMA-3.1-8B | No Reward    | 52.9% | 66.6%
LLaMA-3.1-8B | Two-stage    | 57.4% | 71.2%
LLaMA-3.1-8B | Time-dynamic | 55.6% | 69.4%
Qwen3-8B     | No Reward    | 53.1% | 66.7%
Qwen3-8B     | Two-stage    | 55.9% | 69.5%
Qwen3-8B     | Time-dynamic | 57.6% | 71.5%

Table 5: Single-Reward Ablation Results

Single Reward Type       | Eval Accuracy (%) | Eval Loss
Backtrack Reward         | 70.31             | 0.913
Refusal Reward           | 60.58             | 1.018
Retrieve Reward          | 55.24             | 1.089
Step Penalty             | 54.17             | 1.184
Subquery Overlap Penalty | 54.35             | 1.015

Table 6: Impact of different reward combinations on HotpotQA using Qwen3-8B. Metrics: Exact Match (EM) / F1; Avg. Steps indicates average retrieval length.

Reward Combination           | EM (%) | F1 (%) | Avg. Steps
Baseline (No Reward)         | 53.1   | 66.7   | 8.2
Best-2 (Backtrack + AnsCorr) | 56.2   | 70.0   | 11.3
Best-3 (+Overlap)            | 56.9   | 70.6   | 10.1
Exploration-Heavy            | 55.0   | 69.1   | 13.4
Efficiency-Heavy             | 55.4   | 68.8   | 9.0
Full (All Rewards)           | 57.6   | 71.5   | 10.4

LLM Backbone and Retriever. We use Meta-Llama-3.1-8B-Instruct [29], Qwen3-8B [39], and Deepseek-R1-distill-llama3-8B [9] as the agent backbone, paired with the RRF-BGE [4] retriever (a fusion of BM25 [24] and BGE embeddings).

Training Setup. Rollout generation is conducted in two phases (explore and refine) with dynamic reward interpolation (§3). We adopt LoRA adapters for parameter-efficient fine-tuning, and use the DPO loss for preference optimization.

4.2 Main Results

Table 3 comprehensively compares EVO-RAG with two state-of-the-art multi-hop QA baselines, RAG-Gym [38] and ReARTeR [28], across multiple challenging datasets.
EVO-RAG consistently achieves superior performance, notably improving Exact Match (EM) and F1 scores across all datasets. Compared with the interleaved baseline IRCoT, EVO-RAG with Qwen3-8B gains +12.6 EM and +15.3 F1 on HotpotQA, and +7.8 EM / +10.3 F1 on 2WikiMultiHopQA, confirming that curriculum-based RL is substantially more effective than fixed heuristic alternation. Substantial enhancements are similarly observed on the 2WikiMultiHopQA dataset (+1.7 EM, +1.9 F1), highlighting the robustness and generalization capabilities of EVO-RAG. Additionally, EVO-RAG attains notable gains on the more complex and adversarial MuSiQue and Bamboogle datasets, underscoring its efficacy in challenging retrieval environments. EVO-RAG also significantly outperforms the previous best method, achieving +2.0% EM and +1.8% F1 improvements over ReARTeR [28] on HotpotQA.

Baseline (No Reward). This setting relies solely on the original RAG formulation with standard supervised training, without step-level guidance, explicit penalties for inefficient retrieval behavior, or any explicit optimization towards query efficiency, retrieval accuracy, or answer correctness. It illustrates the performance ceiling of a purely retrieval-agnostic approach.

4.3 Impact of Dynamic Reward Scheduling

To evaluate the impact of the dynamic reward scheduling strategy employed in EVO-RAG, we conduct an extensive comparison against
a baseline (No Reward) and a fixed two-stage scheduling (Table 4). Across all three evaluated backbones (DeepSeek-8B, LLaMA-3.1-8B, and Qwen3-8B), the introduction of dynamic, time-based scheduling generally leads to better or comparable accuracy relative to the fixed two-stage scheduling. Notably, for DeepSeek-8B and Qwen3-8B, dynamic scheduling consistently enhances accuracy by approximately +1.8 EM and F1 on average. Interestingly, the LLaMA-3.1-8B backbone exhibits slightly lower accuracy under dynamic scheduling compared to the fixed two-stage scheduling. This may suggest specific interactions between this model's inherent learning dynamics and the adaptive reward schedule, which could benefit from further investigation.

Table 7: Detailed trajectories (# steps, retrieved evidence, and actions) under different reward scheduling strategies. "Dup." = near-duplicate; ^ indicates the timestep where the reward model preferred the chosen action. Correct answers are bold; wrong ones are marked in red.

Q1: "In which year was the monarch who issued the 1925 Birthday Honours born?"
        | Baseline (No Reward)                 | Two-stage (Fixed)                | Time-dynamic (EVO-RAG)
Steps   | 1                                    | 2                                | 2
q1      | Birthday Honours 1925 monarch birth  | same as Baseline                 | Who was monarch of the UK in 1925?
Doc 1   | ...George V – 1867–1936 – Birthday Honours... (unreliable wiki list) | same noisy list | George V – King of the UK 1910–1936; born 3 Jun 1865
q2      | –                                    | When was George V born? (Dup. ^) | When was George V born?
Doc 2   | –                                    | George V – born 1865             | George V – born 1865
Action  | 1867 (hallucinated)                  | 1865                             | 1865

Q2: "Which U.S. state contains the launch site of Mars Pathfinder?"
        | Baseline (No Reward)                 | Two-stage (Fixed)                | Time-dynamic (EVO-RAG)
Steps   | 1                                    | 6                                | 2
q1      | Mars Pathfinder launch site state    | Mars Pathfinder launch pad location | Launch site of Mars Pathfinder?
Doc 1   | ...launched from Cape Canaveral LC-17... | Cape Canaveral Air Force Station... | ...launched from Cape Canaveral SLC-17A...
q2..5   | –                                    | multiple paraphrases about "launch pad" (Dup.) | Which state is Cape Canaveral in?
Doc k   | –                                    | repeated retrievals              | Cape Canaveral – city in Florida, U.S.
Action  | California                           | Florida                          | Florida

Q3: "Where was the 2021 Hugo Award ceremony hosted?" (labelled unanswerable)
        | Baseline (No Reward)                 | Two-stage (Fixed)                | Time-dynamic (EVO-RAG)
Steps   | 1                                    | 13                               | 4
q1      | 2021 Hugo Award host city            | same                             | same
q2..n   | –                                    | repeated variants (Dup.)         | variants but stopped timely
Evidence| no reliable doc                      | repetitive sci-fi convention lists | no relevant doc after multiple tries
Action  | London (no citation)                 | Dublin                           | REFUSE (✓)

Table 6 further clarifies these findings by reporting the average number of retrieval steps. The full dynamic reward setup ("All Rewards") attains the optimal balance between retrieval efficiency and accuracy, achieving the highest EM (57.6%) and F1 (71.5%) while maintaining a reasonable average step length of 10.4. Conversely, a purely exploration-heavy reward setup results in longer chains (13.4 steps) with slightly lower accuracy (55.0 EM), emphasizing the importance of balancing exploration with precision in retrieval.

4.4 Single-Reward Training Study

We train agents using only a single reward type to investigate individual signal strength (Table 5). Observation: the backtrack signal alone achieves a high internal evaluation accuracy (70.3%), defined as the agent's accuracy in selecting
preferable retrieval actions according to the learned preference model, indicating that search flexibility is critical for robust multi-hop retrieval. Figure 4 clearly illustrates that the full reward configuration reduces both the median retrieval step count and variance compared to other configurations, indicating more consistent and efficient retrieval behavior across queries. To better understand how individual reward signals interact, we evaluate several carefully selected reward combinations on HotpotQA (Table 6). Three observations emerge:
(i) Core signals are complementary. Combining Backtrack with AnswerCorrectness (Best-2) already brings a +3.1 EM gain over the no-reward baseline, confirming that flexible back-tracking must be coupled with an answer-level objective to yield benefits.
(ii) Overlap penalties improve efficiency without hurting accuracy. Adding the Sub-query Overlap Penalty (Best-3) further improves EM/F1 and reduces the average reasoning depth from 11.3 to 10.1 steps, indicating that discouraging redundant queries directly translates into more concise retrieval chains.
(iii) The full reward set offers the best trade-off. While an Exploration-Heavy setup (Retrieval Bonus ($r_{\mathrm{ret}}$) + Sub-query Overlap Penalty ($r_{\mathrm{dup}}$)) maximizes recall, it causes longer trajectories and slightly lower accuracy. Conversely, an Efficiency-Heavy mix (Backtrack Reward ($r_{\mathrm{bt}}$) + Step Penalty ($r_{\mathrm{step}}$)) shortens trajectories but sacrifices EM. Our full, time-dynamic reward suite delivers the highest overall accuracy (57.6% EM) while maintaining an efficient retrieval trajectory (10.4 steps on average). Although this is slightly higher than the shortest average step count observed (10.1 steps with Best-3), the marginal increase is justified by significant accuracy gains, indicating an optimal trade-off between exploration, accuracy, and efficiency.
5 Qualitative Case Studies

To show how different reward-guidance schemes influence retrieval behaviour, we analyse three HotpotQA questions and compare three model variants: (i) a Baseline that fires one sub-query then answers, (ii) a Two-stage curriculum with fixed reward weights inside each phase, and (iii) our best model, the Time-dynamic curriculum implemented with EVO-RAG Qwen3-8B (hereafter EvoQ3). For every question we present the ordered sub-queries $q_t$, the top retrieved snippet, and the final action (answer or refusal).

5.1 Reward Guidance Recap

Baseline. This variant optimises only the final answer-correctness reward ($\kappa$). It has no step cost ($\eta$), no refusal signal ($\rho$), and no overlap penalty ($\gamma$). Consequently the agent has no incentive to continue searching after its first hit—regardless of evidence quality—and never refuses to answer.

Two-stage (fixed). Training is split into a discovery phase where the retrieval bonus ($\beta$) dominates and a refinement phase where correctness ($\kappa$) is up-weighted. Inside each phase the weights are constant, so the agent receives identical feedback at the first and the tenth step of the same episode. This setting promotes early breadth but offers little pressure to stop issuing redundant queries later on.

Time-dynamic. On top of the two global stages, each reward component is linearly interpolated inside every episode: $\beta$ and the retrieval-cost penalty ($\lambda$) decay with step ratio $p = t / T_{\max}$, while $\gamma$, $\eta$, and $\kappa$ rise. A dedicated refusal reward $\rho$ is always active. The gradual shift lets
https://arxiv.org/abs/2505.17391v1
the agent explore in the first few steps, then tightens efficiency and precision constraints as evidence accumulates. 5.2 Step-level Traces For each question we show (see Table 7) (1) the ordered sub-queries q_t, (2) the top retrieved passage title and a short snippet, and (3) the agent action (→A: answer, or Refuse). (1) Failure-to-refine in Baseline. With only one query and no step penalty, the agent latches onto the first noisy snippet and hallucinates (Q1/Q2). (2) Late duplication in Two-stage. Because the overlap penalty γ is flat within a phase, redundant paraphrases persist (Q2), and the agent chases non-existent evidence on unanswerable questions (Q3). (3) Balanced behaviour in Time-dynamic. A decaying β prevents endless searching, a rising γ suppresses duplicates, and ρ enables safe refusal, yielding correct answers (Q1/Q2) and honest abstention (Q3) with fewer steps. Metrics Alignment. These traces mirror quantitative gains: Time-dynamic improves EM by +1.7 over Two-stage on HotpotQA while cutting average steps from 11.5 to 9.8 (cf. Table 6). Observation 1: The time-dynamic scheduler curbs duplication. In Q1 and Q2 the overlap penalty γ grows with the episode step, discouraging redundant paraphrases that still plague 2Stage. The reward model distinguishes the concise path at step q_2. Observation 2: Multi-aspect rewards foster caution. Q1 shows how the combination of retrieval bonus β and correctness reward κ motivates Evo-RAG-Time to verify the monarch before emitting an answer, unlike BackOnly. Observation 3: The refusal signal prevents hallucination. On unanswerable Q3 the specialised refusal reward ρ leads Evo-RAG-Time to abstain, whereas baselines hallucinate. These case studies underline how curriculum-guided, multi-facet supervision translates into qualitatively superior reasoning behaviour. 6 Conclusion EVO-RAG addresses a fundamental limitation in existing RAG systems by dynamically optimizing the sequence of sub-queries during multi-hop retrieval.
By integrating a seven-dimensional, dynamically weighted reward structure and employing a multi-head preference model, EVO-RAG effectively balances exploratory and exploitative retrieval behaviors. Our extensive evaluation across four diverse benchmarks demonstrates substantial gains in both accuracy (up to +4.6 points in Exact Match) and efficiency (reducing retrieval steps by approximately 15%). Qualitative analyses reinforce these findings, highlighting EVO-RAG’s ability to reduce redundant queries, utilize strategic backtracking, and appropriately refuse to answer when evidence is insufficient. Overall, EVO-RAG showcases the potential of dynamically structured reward systems to enhance the reliability and efficiency of RAG pipelines. 7 Limitations While EVO-RAG achieves notable performance gains, several limitations remain. Our evaluation relies exclusively on automatic metrics (Exact Match and F1); thus human judgment on answer quality, factuality, and utility remains unexplored. Furthermore, reward parameters and scheduling were manually tuned primarily on the HotpotQA dataset; thus, performance may vary across domains without additional tuning. The current implementation employs explicit action prompts rather than fully learned latent actions, potentially restricting flexibility and adaptability in diverse scenarios. Lastly, computational experiments are limited to a single GPU with moderate model sizes (8B parameters); scaling to larger models or extending context lengths would increase computational costs and require further optimization. 8 Future Work Several avenues of future research can enhance EVO-RAG’s adaptability and generalization. Replacing manually fixed reward
schedules with adaptive or meta-learned coefficients could enable the framework to autonomously optimize reward signals across different datasets and retrieval tasks. Transitioning from explicit discrete actions to latent action policies could significantly broaden EVO-RAG’s applicability to more open-ended tasks such as conversational agents and interactive dialogue systems. Extending evaluations to retrieval-augmented summarization, scientific fact verification, and specialized domain searches (e.g., legal or patent retrieval) would validate the framework’s versatility across varied retrieval scenarios. Finally, integrating calibrated uncertainty estimates into the reward structure could enable efficient early stopping and escalation mechanisms, significantly enhancing real-world reliability and user trust. Yuelyu Ji, Rui Meng, Zhuochun Li, and Daqing He GenAI Usage Disclosure The authors affirm that no part of the paper’s text was generated entirely by generative AI tools. Large Language Models (LLMs) were used exclusively for minor grammar editing and formatting suggestions. All code, data annotations, and scientific contributions were created by the authors. The preference model analysis and reward formulation were designed and implemented without GenAI assistance.
FullFront: Benchmarking MLLMs Across the Full Front-End Engineering Workflow Haoyu Sun (Tongji University), Huichen Will Wang (University of Washington), Jiawei Gu (Sun Yat-sen University), Linjie Li (Microsoft), Yu Cheng (The Chinese University of Hong Kong)
Figure 1: Mapping the full front-end engineering workflow to FullFront’s benchmark tasks: (1) Conceptualization assessed by Webpage Design, (2) Comprehension by Webpage Perception QA, and (3) Implementation by Webpage Code Generation. Abstract Front-end engineering involves a complex workflow where engineers conceptualize designs, translate them into code, and iteratively refine the implementation. While recent benchmarks primarily focus on converting visual designs to code, we present FullFront, a benchmark designed to evaluate Multimodal Large Language Models (MLLMs) across the full front-end development pipeline. FullFront assesses three fundamental tasks that map directly to the front-end engineering pipeline: Webpage Design (conceptualization phase), Webpage Perception QA (comprehension of visual organization and elements), and Webpage Code Generation (implementation phase). Unlike existing benchmarks that use either scraped websites with bloated code or oversimplified LLM-generated HTML, FullFront employs a novel, two-stage process to transform real-world webpages into clean, standardized HTML while maintaining diverse visual designs and avoiding copyright issues.
Extensive testing of state-of-the-art MLLMs reveals significant limitations in page perception, code generation (particularly
https://arxiv.org/abs/2505.17399v2
for image handling and layout), and interaction implementation. Our results quantitatively demonstrate performance disparities across models and tasks, and highlight a substantial gap between current MLLM capabilities and human expert performance in front-end engineering. The FullFront benchmark and code are available at https://github.com/Mikivishy/FullFront. 1 Introduction Front-end engineering, a cornerstone of the modern digital experience, is an intricate process, as depicted in Figure 1. It transforms abstract concepts into initial designs (conceptualization),
Figure 2: Overview of the eight subtasks FullFront covers and our data construction pipeline. involves detailed visual comprehension (perception), and culminates in functional, interactive code (implementation) for web applications. This field is poised for significant transformation with the advent of Multimodal Large Language Models (MLLMs), whose capabilities in processing visual information and generating code offer compelling potential to streamline and even automate front-end development, aligning with the aspirational goal of an “idea-to-design-to-code” paradigm. Despite this burgeoning potential, a benchmark to assess MLLMs across the full front-end engineering workflow is conspicuously absent. Instead, current evaluations tend to separately address crucial yet distinct capabilities: vision perception and code generation. For instance, benchmarks like IW-Bench [1] and WebCode2M [2] scrutinize MLLMs’ code generation from visual
https://arxiv.org/abs/2505.17399v2
inputs but often possess a narrow task scope, overlooking vital aspects such as implementing interactive features or refining existing codebases. Conversely, while WebQuest [3] and WebQA [4] investigate MLLMs’ visual understanding of webpages, the focus frequently remains on content-level reasoning, thereby neglecting the fine-grained perceptual acuity concerning element size, positioning, and layout, which is indispensable for accurate front-end implementation. Most critically, these fragmented approaches generally omit the initial conceptual “design” phase of development, and therefore fall short of gauging MLLM proficiency in end-to-end front-end engineering. In this work, we introduce FullFront, a benchmark meticulously designed to evaluate MLLMs across the full front-end engineering workflow. As depicted in Figure 2, FullFront distinctively offers a holistic assessment through three core tasks: (1) Webpage Design (50 problems), which assesses the model’s ability to structure and organize visual elements to present given content; (2) Webpage Perception QA (three subtasks and 1,800 multiple-choice questions), which evaluates the perception of visual organization, element characteristics, and spatial relationships within a webpage; and (3) Webpage Code Generation (four subtasks and 400 code generation problems), which focuses on the accurate translation of visual designs into functional code, including interaction implementation and code refinement. We collect real-world webpages and develop an MLLM-driven pipeline to reconstruct them into clean, standardized, and copyright-free HTML, ensuring high controllability while preserving original visual diversity for robust benchmark data.
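The task breakdown above can be summarized in a small configuration structure. The sketch below is illustrative only: the sample counts come from the paper, but the dictionary layout and field names are our own, not part of FullFront’s release.

```python
# Illustrative summary of FullFront's task composition.
# Counts are taken from the paper; the schema itself is an assumption.
FULLFRONT_TASKS = {
    "webpage_design": {"phase": "conceptualization", "problems": 50},
    "webpage_perception_qa": {
        "phase": "comprehension",
        "subtasks": {
            "real_world_qa": 1250,
            "synthetic_qa": 400,
            "multi_window_qa": 150,
        },
    },
    "webpage_code_generation": {
        "phase": "implementation",
        "subtasks": {
            "image_to_code": 200,
            "text_to_code": 50,
            "interaction_authoring": 100,
            "code_refinement": 50,
        },
    },
}

def total_items(config: dict) -> int:
    """Sum problem counts across all tasks and subtasks."""
    total = 0
    for task in config.values():
        total += task.get("problems", 0)
        total += sum(task.get("subtasks", {}).values())
    return total
```

Summing the reported counts this way gives 50 design problems, 1,800 perception Q/A pairs, and 400 code-generation problems, i.e., 2,250 items in total.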
This comprehensive task structure and our evaluation framework, incorporating fine-grained visual similarity scores and detailed code-level metrics (including structural and content-based comparisons), provide a multifaceted and robust assessment of model capabilities across the full front-end engineering workflow. Benchmarking state-of-the-art open-source and proprietary MLLMs with FullFront reveals significant challenges across the board. In the Webpage Design task, current text-to-image MLLMs demonstrate an ability to produce general layout concepts but lack the precision for high-fidelity webpage designs that accurately reflect detailed textual descriptions. In Webpage Perception QA, even leading models struggle to achieve human-comparable accuracy; for instance, the best-performing model, Claude 3.7 Sonnet, achieves an average accuracy below 55% across these tasks, starkly contrasting with human performance exceeding 95%. Our analysis reveals that MLLMs face considerable difficulties in accurately perceiving element alignment, size, and positioning within webpages. For Webpage Code Generation, while proprietary models like Claude 3.7 Sonnet and Gemini 2.5 Pro generally outperform open-source alternatives, they still encounter difficulties, particularly in accurately handling complex front-end details such as image manipulation, layout fidelity, and interaction implementation. These findings underscore the critical need to enhance current MLLM capabilities within the front-end development workflow to bridge the substantial gap between their current performance and the requirements for expert-level engineering. In summary, our main contributions are as follows: •Comprehensive Full Front-End Workflow Benchmark: Unifying Webpage Design (conceptualization), Perception QA (comprehension), and Code Generation (implementation) into one cohesive evaluation pipeline.
•Robust Multi-Faceted Evaluation Metrics: Integrating fine-grained visual similarity and detailed code-level comparisons for thorough assessment. •Benchmarking State-of-the-Art MLLMs & Key Insights: Our evaluation highlights critical MLLM limitations, primarily rooted in deficient fine-grained visual perception (e.g., element alignment, sizing, spacing). This impacts their ability to accurately
generate code, particularly for complex layouts, image manipulation, and interactive functionalities, with a notable performance disparity between proprietary and open-source models. 2 Related Work Applications of MLLMs in Web Recently, the application of MLLMs in the web domain [5, 6, 7, 8] has garnered considerable research attention. Numerous innovative approaches have emerged, enabling MLLMs to navigate and manipulate websites according to user instructions [9, 10, 11]. For instance, Mind2Web [12] pioneers a generalist web agent by training models on diverse web tasks, demonstrating their capability to follow complex natural language commands across various websites. Similarly, WinClick [13] focuses on GUI grounding with MLLMs, allowing for more precise interaction with web elements by understanding their visual and textual properties to execute user commands like clicking buttons or filling forms. These advancements highlight a growing trend towards creating more autonomous and intelligent web interaction agents. Webpage Benchmarks and Datasets Several benchmarks and datasets have been developed to evaluate MLLMs on webpage-related tasks. For instance, a significant body of work [14, 15, 3, 4, 16, 17, 18] leverages real-world webpages to assess MLLMs’ capabilities in element grounding and content reasoning via question-answering (QA). ScreenWords [19] focuses on screen summarization, while VisualWebBench [20] offers seven QA tasks for a broader understanding assessment. Separately, research has also benchmarked MLLMs for front-end code generation from screenshots. Methodologies vary: Design2Code [21], WebCode2M [2], and IW-Bench [1] curate datasets by scraping and simplifying existing code. In contrast, Web2Code [22] and WebSight [23] employ LLMs for code generation, and Pix2Code [24] uses a stochastic UI generator.
Notable contributions also include MRWeb’s [25] “resource list” for external resources and Interaction2Code’s [26] focus on dynamic webpage generation. 3 Benchmark 3.1 Data Curation We now introduce the dataset composition across the three tasks and our data collection process. Webpage Design The Webpage Design task aims to evaluate text-to-image generation MLLMs as webpage designers. We provide 50 textual descriptions of synthetic webpages sampled from the Text to Code task dataset (see below). MLLMs are required to generate webpage design images based on these descriptions. (Figure 3: Comparison of the images used in FullFront for webpage code generation tasks with those of other benchmarks. We are the first not to use a single image placeholder or random images.) This process tests how effectively models can transform textual requirements into visual designs, including their understanding of webpage layouts and element relationships. Since textual descriptions naturally cannot capture all visual design nuances, this task also assesses models’ ability to make reasonable design decisions where specifications are incomplete. Webpage Perception QA This task assesses MLLMs’ perception of webpage elements, including their position, style, spatial relationships, and overall page layout, through three subtasks. The Real-world QA subtask evaluates perceptual abilities using 625 real webpage screenshots (270 manually collected, 355 sourced from Uground [27] and IW-Bench [1]), resulting in 1,250 question-answer pairs. Complementing this, Synthetic QA assesses model performance on 400 Q/A pairs derived from 200 synthesized webpage screenshots generated via the methodology detailed under Webpage Code Generation below. Finally, Multi-window QA
https://arxiv.org/abs/2505.17399v2
elevates task complexity by presenting 75 samples, each combining 2-4 screenshots from the Real-world QA set (totaling 150 Q/A pairs), thereby challenging models to accurately identify and locate the screenshot relevant to the posed question. Questions are primarily generated by GPT-4o [28], augmented with bounding boxes and OCR data extracted by OmniParser [29]. This allows GPT-4o to focus on generating challenging, high-quality multiple-choice questions based on page content and structure rather than low-level perception. All generated questions undergo rigorous manual review and modification to ensure correctness, challenge, and task validity. To mitigate ethical risks such as privacy leakage, all webpage screenshots are manually inspected and annotated to remove personal data and harmful content.

Webpage Code Generation. The Webpage Code Generation task evaluates a model's ability to translate visual page designs into executable HTML. Existing benchmarks (e.g., WebCode2M [2], Design2Code [21]) often simplify HTML from sources like Common Crawl [30] to mitigate ethical issues, remove external dependencies and redundant elements, and standardize code for comparison. Despite these benefits, the simplification process is inherently time-consuming and difficult to generalize across varied real-world webpages. Meanwhile, HTML generated with LLMs from scratch (e.g., WebSight [23]) often lacks authentic complexity. A key limitation of existing datasets is their handling of images, such as using generic placeholders or random images, which hinders the assessment of the nuanced image understanding and utilization crucial for high-fidelity webpage replication. To overcome these issues, we introduce a synthesis pipeline built from real-world webpages. This two-stage process (detailed in Figure 2) starts with a real-world webpage screenshot and its OmniParser-extracted element information.
GPT-4o generates an initial HTML-v1, which Claude 3.7 Sonnet then refines (adjusting styles, positions, alignments, and layouts) into a higher-quality, more complex HTML-v2. This HTML-v2 and its rendered page serve as ground truth. For image handling, we utilize a category-based strategy to best preserve the visual information from real-world webpage screenshots (see Appendix B.1). As shown in Figure 3, our method generates webpages that are demonstrably superior to those of other benchmarks in complexity and diversity. Unlike traditional setups that only provide a webpage screenshot for HTML code generation, we design four distinct subtasks to evaluate MLLMs' front-end code generation capabilities under various conditions: Image to Code (200 samples) evaluates direct HTML generation from HTML-v2 rendered screenshots; Text to Code (50 samples) assesses HTML generation based solely on manually verified textual descriptions of HTML-v2 rendered pages; Interaction Authoring (100 samples) measures the ability to implement dynamic behaviors, requiring MLLMs to reproduce a static page (from HTML-v1 as a base) and add specified interactions based on screenshots depicting the page before and after the interaction; and Code Refinement (50 samples) simulates code optimization by requiring MLLMs to refine provided HTML-v1 code to match the quality and complexity of an HTML-v2 rendered screenshot. For more detailed task descriptions, see Appendix B.2.

3.2 Evaluation Metrics

To comprehensively evaluate MLLM performance on FullFront, we employ visual and code-level metrics, detailed below and applied specifically to each core task.

Visual Level Metrics. We assess MLLM generative capabilities by comparing the visual
similarity of their output (rendered HTML or direct design images) against ground-truth images. This includes the CLIP Score [31], which measures high-level conceptual consistency via embedding-space similarity, and the Gemini Visual Score. The latter, using Gemini 2.5 Flash, provides a fine-grained evaluation across ten criteria (e.g., Alignment and Spacing Accuracy, Overall Content Representation), each scored 0-10 based on consistent guidelines (see Appendix C.1 for full details).

Code Level Metrics. To evaluate code similarity, we propose the Code Score, which assesses MLLM-generated HTML against reference HTML. It parses both into Document Object Model (DOM) trees and extracts the associated CSS, then performs a weighted aggregation. This considers structural similarity, quantified by the Longest Common Subsequence (LCS) ratio of DOM tag sequences. It also assesses content-type similarity for text, images, and forms, where corresponding elements are identified and compared based on content (e.g., text via SequenceMatcher [32]), key styling attributes (e.g., color, font size, image dimensions), and critical attributes (e.g., image src, form element type). An implementation rate for each content type, reflecting the proportion of reference elements found, adjusts these similarity scores to capture both quality and completeness. The final Code Score combines structural and adjusted content-type similarities using predefined weights. Further specifics on the Code Score calculation are available in Appendix C.2.

For the Webpage Design task, Visual Level Metrics assess generated design quality. For Webpage Perception QA, standard accuracy (the proportion of correctly answered multiple-choice questions) is used. Webpage Code Generation employs both Visual Level Metrics and the Code Score.

4 Experiments

4.1 Evaluation Settings

FullFront-mini Dataset. To facilitate rapid iterative evaluation of MLLMs, we constructed a FullFront-mini dataset.
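The structural component of the Code Score described above lends itself to a compact illustration. The sketch below is not the benchmark's implementation: the helper names are ours, and normalizing the LCS length by the reference tag count is one plausible reading of "LCS ratio" (the exact weighting is deferred to Appendix C.2).

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the sequence of opening-tag names from an HTML string."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tag_sequence(html: str) -> list:
    parser = TagCollector()
    parser.feed(html)
    return parser.tags

def lcs_length(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def structural_similarity(generated_html: str, reference_html: str) -> float:
    """LCS ratio of DOM tag sequences, normalized here by the reference length."""
    gen, ref = tag_sequence(generated_html), tag_sequence(reference_html)
    if not ref:
        return 0.0
    return lcs_length(gen, ref) / len(ref)

reference = "<div><h1>Title</h1><p>Text</p><img src='x.jpg'></div>"
generated = "<div><h1>Title</h1><img src='x.jpg'></div>"
print(structural_similarity(generated, reference))  # 3 of 4 reference tags matched in order -> 0.75
```

The content-type similarities could be layered on in the same spirit, e.g., comparing paired text nodes with `difflib.SequenceMatcher(None, a, b).ratio()` before applying the implementation-rate adjustment.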
For specifics on the FullFront-mini setup, see Appendix A.

Models. We evaluate the performance of eleven state-of-the-art MLLMs on the Webpage Perception QA and Webpage Code Generation tasks. This set includes four open-source models (Qwen2.5-VL-72B-Instruct [33], InternVL2.5-78B [34], InternVL3-78B [35], and LLaVA-OneVision-72B [36]) and seven proprietary models (Claude 3.7 Sonnet [37], Gemini 2.5 Flash [38], GPT-4o [28], o4-mini [39], GPT-4.1 [40], o1 [41], and Gemini 2.5 Pro [42]). For the Webpage Design task, which targets image-generation MLLMs, we test the capabilities of GPT-4o [28] and gemini-2.0-flash-exp-image-generation [43]. We report the results for o1 and Gemini 2.5 Pro solely on the FullFront-mini dataset.

Table 1: Evaluation results of the Webpage Design task. We mark the better results with bold font.
Model | Gemini Score | CLIP Score | Human Score
GPT-4o | 5.4700 | 0.7644 | 6.9600
gemini-2.0-flash-exp-image-generation | 2.1000 | 0.7696 | 6.0400

4.2 Main Results

Webpage Design. On the Webpage Design task, current text-to-image MLLMs exhibit a foundational capability in generating general layout concepts but encounter difficulties in producing high-fidelity designs that accurately reflect detailed textual descriptions. As shown in Table 1, GPT-4o outperforms gemini-2.0-flash-exp-image-generation in both Gemini Score and Human Score. Furthermore, qualitative examples in Appendix D.1 illustrate that GPT-4o demonstrates superior performance in rendering overall page structure, typography, and element implementation fidelity.

Table 2: Evaluation results on the three Webpage Perception QA tasks. Among the MLLM results, we mark the best results with bold font and the second best with underline.
Model | FullFront (Real-world, Synthetic, Multi-window) | FullFront-mini (Real-world, Synthetic, Multi-window)
Qwen2.5-VL-72B-Instruct | 0.4696 0.4950 0.4267 | 0.4550 0.5100 0.4000
InternVL2.5-78B | 0.4696 0.5050 0.4267 | 0.4950 0.4500 0.3400
InternVL3-78B | 0.4816 0.5375 0.4600 | 0.4700 0.5100 0.4400
LLaVA-OneVision-72B | 0.3296 0.3275 0.2733 | 0.3450 0.2900 0.2600
Claude 3.7 Sonnet | 0.5464 0.5325 0.4533 | 0.5250 0.5000 0.5600
Gemini 2.5 Flash | 0.4800 0.4250 0.3867 | 0.4550 0.4400 0.4000
GPT-4o | 0.4448 0.4675 0.3733 | 0.4450 0.4400 0.3200
o4-mini | 0.4976 0.5300 0.4400 | 0.4800 0.5000 0.4600
GPT-4.1 | 0.4672 0.4650 0.3733 | 0.4400 0.4200 0.3600
o1 | – – – | 0.4350 0.4600 0.4200
Gemini 2.5 Pro | – – – | 0.5200 0.5800 0.4800
Human Expert | – – – | 0.9700 0.9600 0.9400

Table 3: Evaluation results of different models on the four Webpage Code Generation tasks. Ref: Code Refinement; Img: Image to Code; Inter: Interaction Authoring; Text: Text to Code. We mark the best results with bold font and the second best with underline. "(mini)" indicates the experimental results under the mini dataset setting.
Model | Code Score (Ref, Img, Inter, Text) | Gemini Visual Score (Ref, Img, Inter, Text) | CLIP Score (Ref, Img, Inter, Text)
Qwen2.5-VL-72B-Instruct | 0.56 0.40 0.40 0.44 | 6.00 4.48 6.22 4.08 | 0.79 0.72 0.76 0.73
InternVL2.5-78B | 0.35 0.33 0.30 0.45 | 4.48 4.01 3.51 4.37 | 0.75 0.74 0.69 0.76
InternVL3-78B | 0.49 0.42 0.38 0.47 | 5.63 4.47 4.48 4.44 | 0.78 0.73 0.73 0.74
LLaVA-OneVision-72B | 0.31 0.14 0.06 0.38 | 4.40 1.89 0.45 3.73 | 0.73 0.65 0.58 0.71
Claude 3.7 Sonnet | 0.68 0.64 0.55 0.60 | 8.48 8.93 9.18 7.84 | 0.88 0.89 0.86 0.87
Gemini 2.5 Flash | 0.72 0.63 0.52 0.52 | 9.02 8.64 8.07 6.83 | 0.89 0.88 0.81 0.83
GPT-4o | 0.42 0.34 0.36 0.46 | 6.41 5.91 5.81 5.54 | 0.81 0.81 0.76 0.77
o4-mini | 0.62 0.57 0.55 0.54 | 8.66 8.47 8.84 6.94 | 0.86 0.87 0.84 0.83
GPT-4.1 | 0.67 0.61 0.55 0.54 | 9.03 8.89 9.13 7.42 | 0.89 0.88 0.84 0.86

Model (mini) | Code Score (Ref, Img, Inter, Text) | Gemini Visual Score (Ref, Img, Inter, Text) | CLIP Score (Ref, Img, Inter, Text)
Qwen2.5-VL-72B-Instruct | 0.53 0.39 0.41 0.41 | 5.77 3.86 6.05 4.69 | 0.79 0.72 0.70 0.78
InternVL2.5-78B | 0.47 0.30 0.29 0.44 | 4.61 3.96 4.17 4.64 | 0.70 0.72 0.70 0.78
InternVL3-78B | 0.60 0.42 0.41 0.43 | 6.25 4.43 4.71 5.94 | 0.75 0.74 0.73 0.78
LLaVA-OneVision-72B | 0.37 0.15 0.04 0.32 | 4.93 1.88 0.19 5.23 | 0.68 0.62 0.61 0.75
Claude 3.7 Sonnet | 0.73 0.60 0.56 0.62 | 9.00 8.66 9.08 7.72 | 0.88 0.90 0.88 0.87
Gemini 2.5 Flash | 0.75 0.61 0.53 0.55 | 8.88 8.48 7.76 7.13 | 0.90 0.87 0.81 0.83
GPT-4o | 0.50 0.32 0.33 0.47 | 6.98 5.68 4.95 6.65 | 0.84 0.80 0.76 0.78
o4-mini | 0.69 0.53 0.57 0.58 | 8.66 8.05 8.95 7.05 | 0.86 0.86 0.86 0.84
GPT-4.1 | 0.68 0.59 0.51 0.51 | 8.60 8.15 8.40 7.36 | 0.87 0.88 0.81 0.85
o1 | 0.68 0.53 0.41 0.50 | 8.44 8.36 7.05 6.56 | 0.87 0.89 0.81 0.79
Gemini 2.5 Pro | 0.68 0.60 0.53 0.56 | 9.17 8.55 7.99 7.66 | 0.92 0.88 0.84 0.84

Webpage Perception QA. As demonstrated in Table 2, MLLMs generally exhibit weak perceptual capabilities on the Webpage Perception QA task. On the FullFront-mini subset, even the top-performing models, Claude 3.7 Sonnet and Gemini 2.5 Pro, achieve an average
accuracy barely exceeding 50% across the three subtasks. Conversely, LLaVA-OneVision-72B's accuracy remains below 35% on all QA subtasks. Critically, all models perform significantly worse than human experts, with accuracy gaps of 44.5%, 38%, and 38% on the three subtasks respectively, highlighting their challenges in fine-grained page perception. Notably, this task reveals no substantial performance disparity between closed-source and open-source models; for instance, on the full FullFront benchmark, InternVL3-78B achieves leading accuracies of 53.75% on Synthetic QA and 46.00% on Multi-window QA. Further analysis indicates nearly identical model performance on single-page Real-world and Synthetic QA, while performance degrades considerably on the more complex Multi-window QA.

Table 4: Human evaluation of MLLM-generated webpages on FullFront-mini. We mark the best results with bold font and the second best with underline.
Model | Ref | Image | Inter | Text
Qwen2.5-VL-72B-Instruct | 5.43 | 4.35 | 5.17 | 4.46
InternVL2.5-78B | 5.05 | 3.33 | 2.72 | 4.88
InternVL3-78B | 4.99 | 3.78 | 4.14 | 5.28
LLaVA-OneVision-72B | 4.75 | 2.56 | 0.57 | 4.33
Claude 3.7 Sonnet | 8.36 | 7.93 | 8.21 | 7.96
Gemini 2.5 Flash | 8.20 | 7.82 | 6.92 | 6.92
GPT-4o | 7.42 | 5.81 | 4.90 | 5.65
o4-mini | 7.70 | 7.25 | 8.17 | 7.71
GPT-4.1 | 7.70 | 8.05 | 7.26 | 7.50
o1 | 8.22 | 8.40 | 6.80 | 6.25
Gemini 2.5 Pro | 8.68 | 8.45 | 7.71 | 7.43

Table 5: Interaction rate results (%). We mark the best results with bold font and the second best with underline.
Model | Rate | Rate (mini)
Qwen2.5-VL-72B-Instruct | 57.00 | 60.00
InternVL2.5-78B | 47.00 | 45.00
InternVL3-78B | 48.00 | 40.00
LLaVA-OneVision-72B | 16.00 | 20.00
Claude 3.7 Sonnet | 78.00 | 80.00
Gemini 2.5 Flash | 70.00 | 65.00
GPT-4o | 80.00 | 70.00
o4-mini | 93.00 | 90.00
GPT-4.1 | 78.00 | 75.00
o1 | – | 80.00
Gemini 2.5 Pro | – | 85.00

Webpage Code Generation. In the Webpage Code Generation task, closed-source models significantly outperform their open-source counterparts across all subtasks and metrics, with no open-source model securing a top-two position in any category.
As detailed in Table 3, Claude 3.7 Sonnet consistently leads, closely followed by other proprietary models like Gemini 2.5 Pro, Gemini 2.5 Flash, and GPT-4.1, all demonstrating impressive, top-tier results. For instance, in the Code Refinement task on the FullFront-mini, Gemini 2.5 Pro achieves a Gemini Visual Score of 9.17, indicating near-perfect visual reproduction in most cases, whereas the best-performing open-source model, InternVL3-78B, scores only 6.25 under the same settings. While Qwen2.5-VL-72B-Instruct and InternVL3-78B show relatively strong performance among open-source options, their scores are generally comparable only to GPT-4o rather than the leading closed-source models. A consistent trend across models is the alignment of performance across different metrics; models excelling in one visual or code-based score typically perform similarly well in others. Subtask analysis reveals distinct patterns: providing partial HTML (Code Refinement) improves performance over image-only inputs (Image to Code). However, generating functional interactive code (Interaction Authoring) is more challenging, yielding lower scores despite simpler HTML-v1 targets, a difficulty underscored by interaction implementation rates (Table 5) where closed-source models exceed 65% success, far surpassing open-source models like LLaVA-OneVision-72B (16%). The Text to Code task, requiring autonomous design from textual descriptions, proves the most difficult, resulting in the lowest overall model performance. Blind human evaluation on the FullFront-mini dataset, using Gemini Visual Score criteria (Table 4), further confirms that
closed-source models like Claude 3.7 Sonnet and Gemini 2.5 Pro are perceived as more accurate, frequently scoring above 8/10 for reproduction quality. While these models achieve high overall fidelity, illustrative examples in Appendix D.2 reveal that even top performers can exhibit minor imperfections in fine-grained details.

5 Discussion

5.1 Where do MLLMs struggle most in perceiving webpages?

By analyzing the error types of 200 questions that all MLLMs (except o1 and Gemini 2.5 Pro) fail to answer correctly, we gain insight into the primary difficulties current MLLMs face in page perception. As shown in Figure 4 (a), MLLMs exhibit particular difficulty in accurately understanding the alignment (21.5%), size (19.5%), spacing (15.5%), and precise positioning (18.5%) of page elements. These factors constitute the core reasons behind perception failures. For example, Figure 4 (b) shows an instance where MLLMs fail to correctly identify the position of the tag labeled "Human Rights Advocates" relative to the main title and subtitle, while Figure 4 (c) demonstrates an incorrect comparison of the sizes of two "LEARN MORE" buttons.

Figure 4: MLLM errors in Webpage Perception QA. (a) Distribution of error types for 200 questions. (b) An illustrative example of a Positioning Error: asked about the position of the "Human Rights Advocate" tag relative to the title "NAVI PILLAY" and its subtitle, MLLMs answer that the tag is located above the subtitle but below the main title, whereas it is actually positioned above both. (c) An illustrative example of a Size Error: asked to compare the widths of two "LEARN MORE" buttons, MLLMs answer that both are the same width, whereas the upper button is narrower than the lower one.

Figure 5: Three common errors in Webpage Code Generation. (a) Abnormal Image Sizes, where an image within the rendered page is disproportionately large.
(b) Blank Pages, showing an entirely blank rendered output. (c) Isolation Error, demonstrating an output consisting only of an isolated interactive element.

5.2 What is the relationship between perceptual ability and code performance?

Counter-intuitively, the results in Table 2 and Table 3 indicate that models excelling in perceptual tasks don't invariably excel in code generation, despite their capacity for more detailed page comprehension. Admittedly, some models, such as Claude 3.7 Sonnet and Gemini 2.5 Pro, perform strongly across both task categories. However, InternVL3-78B, though surpassing Gemini 2.5 Flash in perceptual QA, exhibits a noticeable disparity in its code generation capabilities. A similar pattern is observed between InternVL2.5-78B and GPT-4o. We attempt to analyze the underlying reasons for this phenomenon. As illustrated in Figure 4 (b), all tested models incorrectly identified the position of the "Human Rights Advocate" tag relative to the title during the perceptual QA phase. Yet, upon analyzing their generated pages (see Appendix D.3), all models correctly place the tag directly above the title during implementation. This observation implies that even when models err in fine-grained perception, they can still produce visually coherent and structurally sound webpages. It suggests that the processes for visual perception
in QA and for translating visual concepts into code might operate with different sensitivities or rely on distinct internal representations and generation strategies within MLLMs, a characteristic warranting future investigation.

5.3 Can MLLMs be excellent front-end engineers?

To determine whether MLLM-generated pages are superior to real-world versions, three human experts blindly evaluate 100 webpages generated by various MLLMs (except o1 and Gemini 2.5 Pro) against their real-world counterparts. Results in Figure 6 indicate that leading models (e.g., o4-mini, Gemini 2.5 Flash) are, in the vast majority of cases, superior to their real-world counterparts. However, further analysis of the generated webpages reveals that MLLMs can exhibit three prevalent error categories, illustrated in Figure 5: Abnormal Image Size (abnormally large images disrupting layout integrity), Blank Image (entirely blank screenshots despite non-empty code), and Isolation Error (instances where only isolated interactive buttons are generated, neglecting page content). Each error type significantly degrades the effectiveness of the generated webpage. Table 6 shows that open-source models exhibit these errors markedly more often than closed-source counterparts; this considerably diminishes their reliability and stability. Furthermore, a detailed examination of code-level performance (Table 7) indicates that current MLLMs still have substantial room for improvement in text and form implementation, as similarity scores for these components do not exceed 0.6.

Figure 6: Human evaluation (Win/Tie/Lose percentages) comparing MLLM-generated and real-world webpages.

Table 6: Counts of three error types in Webpage Code Generation tasks. Size: Abnormal Image Size; Blank: Blank Image; Isolation: Isolation Error.
Model | Size (Ref, Img, Inter, Text) | Blank (Ref, Img, Inter, Text) | Isolation (Inter)
Qwen2.5-VL-72B-Instruct | 9 62 11 19 | 3 4 2 0 | 2
InternVL2.5-78B | 1 20 2 14 | 0 14 12 1 | 11
InternVL3-78B | 2 20 5 18 | 2 14 10 0 | 1
LLaVA-OneVision-72B | 0 22 3 24 | 1 45 1 1 | 88
Claude 3.7 Sonnet | 1 1 0 0 | 0 0 0 0 | 0
Gemini 2.5 Flash | 0 3 2 1 | 0 0 0 0 | 0
GPT-4o | 1 11 0 11 | 0 0 0 0 | 0
o4-mini | 5 9 2 3 | 0 0 0 0 | 0
GPT-4.1 | 0 1 0 0 | 0 1 2 0 | 0
o1 (mini) | 0 0 1 2 | 0 0 0 0 | 0
Gemini 2.5 Pro (mini) | 0 0 0 0 | 0 0 0 0 | 0

Table 7: Detailed code-level performance (Structure, Text, Image, Form) on FullFront-mini. We mark the best results with bold font and the second best with underline.
Model | Structure | Text | Image | Form
Qwen2.5-VL-72B-Instruct | 0.51 | 0.37 | 0.44 | 0.38
InternVL2.5-78B | 0.43 | 0.27 | 0.40 | 0.33
InternVL3-78B | 0.51 | 0.35 | 0.59 | 0.39
LLaVA-OneVision-72B | 0.25 | 0.12 | 0.16 | 0.20
Claude 3.7 Sonnet | 0.73 | 0.58 | 0.65 | 0.50
Gemini 2.5 Flash | 0.69 | 0.55 | 0.72 | 0.45
GPT-4o | 0.45 | 0.30 | 0.47 | 0.32
o4-mini | 0.65 | 0.52 | 0.63 | 0.55
GPT-4.1 | 0.62 | 0.46 | 0.63 | 0.60
o1 | 0.58 | 0.43 | 0.61 | 0.45
Gemini 2.5 Pro | 0.68 | 0.51 | 0.44 | 0.38

Overall, despite certain shortcomings
in fine-grained details, MLLMs do demonstrate the capability to design generally coherent webpage interfaces from textual descriptions and can generate corresponding code from webpage screenshots. However, the overall deficiencies in their perceptual abilities, coupled with the potential for critical errors during code generation, render their current reliability and stability uncertain. We believe a promising future direction involves integrating MLLMs with specialized tools. This can compensate for their perceptual limitations and provide mechanisms to identify and rectify generation anomalies, thereby aiding MLLMs in their evolution towards becoming excellent front-end engineers.

6 Summary

We introduce FullFront, a pioneering and comprehensive multimodal front-end benchmark. FullFront is designed to systematically evaluate the capabilities of MLLMs across the full front-end development pipeline, including design, page perception, and code generation. By constructing high-quality, diverse synthetic data and designing a multi-layered evaluation system, FullFront serves as a powerful tool for analyzing the strengths and limitations of current MLLMs, particularly highlighting the challenges MLLMs face in handling complex front-end details (such as image sizing and interaction implementation) and accurately perceiving webpage elements. While FullFront, like any benchmark, possesses limitations, future work can improve upon it by introducing more advanced evaluation metrics, expanding the dataset size, or exploring new task types. Nevertheless, the introduction of FullFront sets a new standard for assessing MLLMs on front-end tasks, laying the foundation for the development of the next generation of intelligent webpage development tools.

References

[1] H. Guo, W. Zhang, J. Chen, Y. Gu, J. Yang, J. Du, B. Hui, T. Liu, J. Ma, C. Zhou et al., "Iw-bench: Evaluating large multimodal models for converting image-to-web," arXiv preprint arXiv:2409.18980, 2024.
[2] Y. Gui, Z.
Li, Y. Wan, Y. Shi, H. Zhang, B. Chen, Y. Su, D. Chen, S. Wu, X. Zhou et al., "Webcode2m: A real-world dataset for code generation from webpage designs," in Proceedings of the ACM on Web Conference 2025, 2025, pp. 1834–1845.
[3] M. Wang, S. Sunkara, G. Baechler, J. Lin, Y. Zhu, F. Zubach, L. Shu, and J. Chen, "Webquest: A benchmark for multimodal qa on web page sequences," arXiv preprint arXiv:2409.13711, 2024.
[4] Y. Chang, M. Narang, H. Suzuki, G. Cao, J. Gao, and Y. Bisk, "Webqa: Multihop and multimodal qa," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16495–16504.
[5] H. H. Zhao, D. Gao, and M. Z. Shou, "Worldgui: Dynamic testing for comprehensive desktop gui automation," arXiv preprint arXiv:2502.08047, 2025.
[6] Z. Wu, C. Han, Z. Ding, Z. Weng, Z. Liu, S. Yao, T. Yu, and L. Kong, "Os-copilot: Towards generalist computer agents with self-improvement," arXiv preprint arXiv:2402.07456, 2024.
[7] W. Tan, W. Zhang, X. Xu, H. Xia, Z. Ding, B. Li, B. Zhou, J. Yue, J. Jiang, Y. Li et al., "Cradle: Empowering foundation agents towards general computer control," arXiv preprint arXiv:2403.03186, 2024.
[8] Z. Wang, H. Xu, J. Wang, X. Zhang, M. Yan, J. Zhang, F. Huang, and H. Ji, "Mobile-agent-e: Self-evolving mobile assistant for complex tasks," arXiv preprint arXiv:2501.11733, 2025.
[9] B. Zheng, B. Gou, J. Kil, H. Sun, and Y. Su, "Gpt-4v (ision) is a generalist web agent,
if grounded," arXiv preprint arXiv:2401.01614, 2024.
[10] O. Yoran, S. J. Amouyal, C. Malaviya, B. Bogin, O. Press, and J. Berant, "Assistantbench: Can web agents solve realistic and time-consuming tasks?" arXiv preprint arXiv:2407.15711, 2024.
[11] K. Cheng, Q. Sun, Y. Chu, F. Xu, Y. Li, J. Zhang, and Z. Wu, "Seeclick: Harnessing gui grounding for advanced visual gui agents," arXiv preprint arXiv:2401.10935, 2024.
[12] X. Deng, Y. Gu, B. Zheng, S. Chen, S. Stevens, B. Wang, H. Sun, and Y. Su, "Mind2web: Towards a generalist agent for the web," Advances in Neural Information Processing Systems, vol. 36, pp. 28091–28114, 2023.
[13] Z. Hui, Y. Li, T. Chen, C. Banbury, K. Koishida et al., "Winclick: Gui grounding with multimodal large language models," arXiv preprint arXiv:2503.04730, 2025.
[14] D. Chen, Y. Huang, S. Wu, J. Tang, L. Chen, Y. Bai, Z. He, C. Wang, H. Zhou, Y. Li et al., "Guiworld: A dataset for gui-oriented multimodal llm-based agents," arXiv e-prints, pp. arXiv–2406, 2024.
[15] W. Chen, J. Cui, J. Hu, Y. Qin, J. Fang, Y. Zhao, C. Wang, J. Liu, G. Chen, Y. Huo et al., "Guicourse: From general vision language models to versatile gui agents," arXiv preprint arXiv:2406.11317, 2024.
[16] X. Chen, Z. Zhao, L. Chen, D. Zhang, J. Ji, A. Luo, Y. Xiong, and K. Yu, "Websrc: A dataset for web-based structural reading comprehension," arXiv preprint arXiv:2101.09465, 2021.
[17] J. Wu, W. Yin, Y. Jiang, Z. Wang, Z. Xi, R. Fang, L. Zhang, Y. He, D. Zhou, P. Xie et al., "Webwalker: Benchmarking llms in web traversal," arXiv preprint arXiv:2501.07572, 2025.
[18] Y. Hao, J. Gu, H. W. Wang, L. Li, Z. Yang, L. Wang, and Y. Cheng, "Can mllms reason in multimodality? emma: An enhanced multimodal reasoning benchmark," arXiv preprint arXiv:2501.05444, 2025.
[19] B. Wang, G. Li, X. Zhou, Z. Chen, T. Grossman, and Y.
Li, "Screen2words: Automatic mobile ui summarization with multimodal learning," in The 34th Annual ACM Symposium on User Interface Software and Technology, 2021, pp. 498–510.
[20] J. Liu, Y. Song, B. Y. Lin, W. Lam, G. Neubig, Y. Li, and X. Yue, "Visualwebbench: How far have multimodal llms evolved in web page understanding and grounding?" arXiv preprint arXiv:2404.05955, 2024.
[21] C. Si, Y. Zhang, R. Li, Z. Yang, R. Liu, and D. Yang, "Design2code: Benchmarking multimodal code generation for automated front-end engineering," arXiv preprint arXiv:2403.03163, 2024.
[22] S. Yun, H. Lin, R. Thushara, M. Q. Bhat, Y. Wang, Z. Jiang, M. Deng, J. Wang, T. Tao, J. Li et al., "Web2code: A large-scale webpage-to-code dataset and evaluation framework for multimodal llms," arXiv preprint arXiv:2406.20098, 2024.
[23] H. Laurençon, L. Tronchon, and V. Sanh, "Unlocking the conversion of web screenshots into html code with the websight dataset," arXiv preprint arXiv:2403.09029, 2024.
[24] T. Beltramelli, "pix2code: Generating code from a graphical user interface screenshot," in Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 2018, pp. 1–6.
[25] Y. Wan, Y. Dong, J. Xiao,
Y. Huo, W. Wang, and M. R. Lyu, "Mrweb: An exploration of generating multi-page resource-aware web code from ui designs," arXiv preprint arXiv:2412.15310, 2024.
[26] J. Xiao, Y. Wan, Y. Huo, Z. Xu, and M. R. Lyu, "Interaction2code: How far are we from automatic interactive webpage generation?" arXiv preprint arXiv:2411.03292, 2024.
[27] B. Gou, R. Wang, B. Zheng, Y. Xie, C. Chang, Y. Shu, H. Sun, and Y. Su, "Navigating the digital world as humans do: Universal visual grounding for gui agents," arXiv preprint arXiv:2410.05243, 2024.
[28] OpenAI, "Hello gpt-4o," https://openai.com/index/hello-gpt-4o/.
[29] Y. Lu, J. Yang, Y. Shen, and A. Awadallah, "Omniparser for pure vision based gui agent," 2024. [Online]. Available: https://arxiv.org/abs/2408.00203
[30] C. Crawl, "Common crawl datasets," 2025, accessed: 2025-05-01. [Online]. Available: https://data.commoncrawl.org/
[31] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
[32] "mdiff," https://github.com/m-matelski/mdiff.
[33] S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang et al., "Qwen2.5-vl technical report," arXiv preprint arXiv:2502.13923, 2025.
[34] Z. Chen, W. Wang, Y. Cao, Y. Liu, Z. Gao, E. Cui, J. Zhu, S. Ye, H. Tian, Z. Liu et al., "Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling," arXiv preprint arXiv:2412.05271, 2024.
[35] J. Zhu, W. Wang, Z. Chen, Z. Liu, S. Ye, L. Gu, Y. Duan, H. Tian, W. Su, J. Shao et al., "Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models," arXiv preprint arXiv:2504.10479, 2025.
[36] B. Li, Y. Zhang, D. Guo, R. Zhang, F. Li, H. Zhang, K. Zhang, Y. Li, Z. Liu, and C.
Li, "Llava-onevision: Easy visual task transfer," arXiv preprint arXiv:2408.03326, 2024.
[37] Anthropic, "Claude 3.7 sonnet and claude code," https://www.anthropic.com/news/claude-3-7-sonnet.
[38] G. Deepmind, "Gemini 2.5 flash," https://deepmind.google/technologies/gemini/flash/.
[39] OpenAI, "Introducing openai o3 and o4-mini," https://openai.com/index/introducing-o3-and-o4-mini/.
[40] ——, "Introducing gpt-4.1 in the api," https://openai.com/index/gpt-4-1/.
[41] ——, "Learning to reason with llms," https://openai.com/index/learning-to-reason-with-llms/.
[42] G. Deepmind, "Gemini 2.5: Our most intelligent ai model," https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/.
[43] K. Kampf and N. Brichtova, "Experiment with gemini 2.0 flash native image generation," https://developers.googleblog.com/en/experiment-with-gemini-20-flash-native-image-generation/.

A FullFront-mini

To facilitate rapid iterative evaluation of MLLMs and initial exploration of the benchmark by researchers, we constructed the FullFront-mini dataset. This subset is a condensed version of the full FullFront dataset, with the following composition. Webpage Perception QA: 200 Real-world QA, 100 Synthetic QA, and 50 Multi-window QA samples. Webpage Code Generation: 20 Image to Code, 10 Text to Code, 20 Interaction Authoring (with 2 samples for each interaction type), and 10 Code Refinement samples. Webpage Design: 10 Webpage Design task samples.

B Webpage Code Generation Details

B.1 Category-based utilization strategy for images

Regarding images, instead of using simple placeholders, we employ a category-based utilization strategy. We classify
common real-world image content into 15 predefined categories: People, Animal, Food, Plant, Landscape, Icon, Logo, Architecture, Technology, Transportation, Map, Texture, Art, Movie, and Other (visualized in Figure 7). Each category is linked to a fixed, non-copyrighted image URL following a standardized format, such as “https://fixed_part/{Category}.jpg”. During the ground truth generation for code tasks, GPT-4o and Claude 3.7 Sonnet are instructed to select an appropriate category for any required image and use its corresponding URL. For evaluation, when an MLLM is tasked with generating webpage code, it must understand the image content from the provided screenshot, classify it into one of these 15 categories, and then generate HTML using the correct category-specific URL. Furthermore, because the intrinsic dimensions of these repository images are unknown, the MLLM is explicitly required to manually set the image sizes (width and height) and position within the HTML code to ensure the rendered output matches the layout depicted in the screenshot. This approach assesses MLLMs’ capabilities in image perception, categorization, and appropriate styling. It also ensures visual consistency for subsequent evaluation steps. Crucially, by deriving visual designs from real-world screenshots, our method generates webpages with greater diversity compared to “from scratch” techniques. This strategy ingeniously bypasses the laborious simplification of real-world code while still achieving simplification’s primary goals—such as removing sensitive information and external dependencies—and preserving as much original visual information as possible through categorized representation. The use of category- specific image URLs also facilitates straightforward dataset expansion with new image types in the future. 
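In code, the category-based strategy reduces to a fixed category-to-URL map plus explicit sizing. The sketch below is illustrative only: "https://fixed_part" is the paper's placeholder for the undisclosed repository host, and the fallback to "Other" for unrecognized predictions is our assumption.

```python
# Hypothetical base URL standing in for the benchmark's fixed image repository;
# the real host behind "fixed_part" is not disclosed in the text.
BASE = "https://fixed_part"

# The 15 predefined categories from Appendix B.1.
CATEGORIES = [
    "People", "Animal", "Food", "Plant", "Landscape", "Icon", "Logo",
    "Architecture", "Technology", "Transportation", "Map", "Texture",
    "Art", "Movie", "Other",
]

def category_url(category: str) -> str:
    """Map a predicted image category to its fixed, non-copyrighted URL."""
    if category not in CATEGORIES:
        category = "Other"  # assumed fallback for unrecognized predictions
    return f"{BASE}/{category}.jpg"

def img_tag(category: str, width: int, height: int) -> str:
    # The MLLM must set explicit dimensions, since the repository images'
    # intrinsic sizes are unknown at generation time.
    return f'<img src="{category_url(category)}" width="{width}" height="{height}">'

print(img_tag("Landscape", 640, 360))
# <img src="https://fixed_part/Landscape.jpg" width="640" height="360">
```

A fixed URL per category also makes the subsequent rendering deterministic, which is what allows the visual metrics to compare generated and reference pages fairly.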
B.2 Subtask Specifications

Image to Code (200 samples): This task evaluates an MLLM’s ability to generate HTML code that accurately replicates a given webpage screenshot rendered from HTML-v2. This is the most straightforward screenshot-to-code generation task.

Text to Code (50 samples): This task evaluates MLLMs’ ability to generate webpage code solely from a textual description. We randomly select 50 pages rendered from HTML-v2 and use Claude 3.7 Sonnet to generate detailed textual descriptions of these pages. During testing, MLLMs receive only these textual descriptions as input, with the goal of generating HTML code that reproduces the original webpage. All textual descriptions undergo a second round of manual review to ensure their accuracy and quality.

Interaction Authoring (100 samples): Moving beyond static page generation, this task evaluates MLLMs’ ability to implement dynamic, interactive webpages. Inspired by Interaction2Code [26], we define ten common interaction types, categorized under click and hover events. For data construction, 100 samples derived from the static HTML-v1 (allowing models to focus primarily on interaction logic) are augmented with interaction code by Claude 3.7 Sonnet, followed by manual verification. During testing, MLLMs receive “before” and “after” interaction screenshots and must reproduce the static page while implementing the depicted interactive behavior.

Figure 7: The 15 predefined image categories used in FullFront for standardized image representation (People, Animal, Food, Plant, Landscape, Icon, Logo, Architecture, Technology, Transportation, Map, Texture, Art, Movie, and Other).

To facilitate automated interaction detection, MLLMs are instructed to assign the ID “#InteractionPart” to the
primary HTML element involved. The ten defined interaction types, with specific implementation requirements for each, are:

1. Click to Display Dropdown (Interaction_click_1): An element, when clicked, reveals a dropdown menu whose content, position, and style are contextually adapted. Requires aria-expanded attribute toggling and specific dropdown selectors.
2. Click to Toggle Checkbox (Interaction_click_2): A clickable checkbox that toggles its checked/unchecked state. Must use an <input type="checkbox"> or an element with role='checkbox', displaying a checked state after interaction.
3. Click to Change Background Color (Interaction_click_3): An element significantly changes its background color upon being clicked, with the new color being distinct and detectable.
4. Click to Display Modal/Dialog (Interaction_click_4): Clicking an element triggers a modal window or dialog box with contextually generated content and styling. The modal must match specific selectors like .modal or [aria-modal='true'].
5. Click to Display Tooltip (Interaction_click_5): An element, when clicked, displays a tooltip providing additional information. The tooltip must adhere to specified class names or attributes (e.g., .tooltip, [role='tooltip']).
6. Click to Display Text Input (Interaction_click_6): Clicking an element reveals a text box or input area for user entry, appropriately sized and positioned.
7. Hover to Display Dropdown (Interaction_hover_1): A dropdown menu appears when the mouse hovers over an element, with adaptive content and styling.
8. Hover to Bold Text (Interaction_hover_2): Text within an element becomes bold (fontWeight ≥ 600 or 'bold'/'bolder') upon mouse hover.
9. Hover to Underline Text (Interaction_hover_3): Text within an element gains an underline when hovered over, with the computed textDecoration including “underline”.
10. Hover to Display Tooltip (Interaction_hover_4): A tooltip with additional information appears when the mouse hovers over an element, conforming to specific class or attribute requirements.

The models must determine the correct interaction type from the visual cues and implement it according to these detailed specifications, providing the complete HTML, CSS, and JavaScript in a single file.

Code Refinement (50 samples): In this task, the model receives a webpage screenshot rendered from HTML-v2 and its HTML-v1 code. The goal is to refine the HTML-v1 code based on the screenshot to match the quality of HTML-v2, simulating code optimization and enhancement scenarios.

C Evaluation Metrics Specifications

C.1 Gemini Visual Score: Criteria and Rubric

To facilitate a fine-grained and human-aligned visual assessment of MLLM-generated webpages, we employ the Gemini 2.5 Flash model as a sophisticated visual evaluator. This model is tasked with comparing a rendered webpage image (generated by an MLLM) against its corresponding ground-truth image. For each pair, the evaluator provides a quantitative assessment across ten distinct visual dimensions. Each dimension is scored on a scale of 0 to 10, where a score of 10 signifies perfect identity between the two images in that specific aspect, and a score of 0 indicates no discernible similarity. Scores between 1 and 9 represent varying degrees of partial similarity, with higher values denoting closer resemblance. The prompt provided to the Gemini 2.5 Flash model for this evaluation is as follows:

Your task is to assess two webpage images and output a score between 0 and 10 for each of the following 10 questions,
reflecting the degree of similarity between the webpages. A score of 10 indicates perfect similarity (identical in every aspect), while a score of 0 indicates no similarity. For partial similarities, assign a score between 1 and 9, where higher scores reflect greater similarity. Only output a comma-separated list of 10 numbers enclosed in square brackets, e.g., [10,8,6,4,2,0,0,0,0,0]. Do not assign a score of 10 unless the two images are identical in the respective category.

The ten evaluation criteria, along with guiding examples for scoring, are detailed below. These criteria are designed to cover a comprehensive range of visual attributes that contribute to the overall quality and fidelity of a webpage design.

1. Element Reproduction (Score: 0-10): This criterion assesses whether all key visual elements present in the ground-truth design (e.g., textual content, images, buttons, icons, input fields) are fully reproduced in the generated webpage. It also considers whether these reproduced elements are styled identically to the original in terms of appearance (e.g., color, shape, visual effects).
• Score 10: All key elements are present, correctly placed, and styled identically to the original.
• Score 5-7: Most key elements are present, but some may be missing, slightly altered in style (e.g., wrong button color, different icon), or have minor placement deviations.
• Score 1-4: Significant elements are missing, or many elements are present but styled very differently.
• Score 0: Elements are completely different or almost all key elements are absent.

2. Proportion and Size Consistency (Score: 0-10): This evaluates whether the relative and absolute sizes and proportions of all elements (including text blocks, images, buttons, and containers) in the generated page match those in the ground-truth design, thereby maintaining the intended visual harmony and balance.
• Score 10: All elements maintain exact proportions and sizes relative to each other and the overall page, as in the original.
• Score 6-8: Minor, barely noticeable differences in element sizes or proportions. The overall visual balance is largely maintained.
• Score 1-5: Noticeable discrepancies in the size or proportion of several elements, potentially disrupting visual harmony.
• Score 0: Significant, widespread discrepancies in element sizes and proportions, leading to a substantially different visual feel.

3. Layout and Typography Fidelity (Score: 0-10): This focuses on the faithful replication of the overall page structure and typographic choices. It examines the placement and styling of major layout components such as headers, footers, navigation bars, sidebars, content grids, and columns, as well as the consistent application of typography (font families, weights) across these structural elements.
• Score 10: The overall layout structure and typography are identical to the original design.
• Score 5-7: The layout is structurally similar with correct identification of major sections, but there might be minor deviations in the exact placement, dimensions, or typographic details of these sections.
• Score 1-4: The layout significantly deviates from the original, or key structural components are missing or incorrectly implemented.
• Score 0: The layout is entirely different from the original design.

4. Alignment and Spacing Accuracy (Score: 0-10): This criterion measures the precision of element alignment (e.g., left, right, center, justified, relative to other elements) and the consistency of spacing (margins, padding,
gutters) both within and between elements, compared to the ground-truth design.
• Score 10: All elements exhibit perfect alignment and spacing as per the original design.
• Score 6-8: Minor, subtle misalignments or inconsistent spacing that do not significantly impact readability or aesthetics.
• Score 1-5: Noticeable and frequent misalignments or spacing issues that detract from the design’s polish.
• Score 0: Major, pervasive misalignments and spacing errors leading to a disorganized appearance.

5. Visual Hierarchy Clarity (Score: 0-10): This assesses whether the generated webpage successfully maintains the same visual hierarchy as the original design. This means that the relative prominence and order of importance of different elements (achieved through size, color, contrast, placement, etc.) should guide the user’s attention similarly, allowing for quick identification of key information and calls to action.
• Score 10: The visual hierarchy is identical, with elements carrying the same emphasis and importance as the original.
• Score 5-7: The overall hierarchy is preserved, but there might be slight alterations in the emphasis of certain elements, or minor confusion in the flow.
• Score 1-4: The visual hierarchy is noticeably different or unclear, making it difficult to identify key information.
• Score 0: The visual hierarchy is completely different or absent, leading to a confusing user experience.

6. Color Consistency (Score: 0-10): This evaluates the match of the overall color scheme, including primary, secondary, and accent colors, as well as specific hues, saturation, and brightness levels used throughout the generated webpage, compared to the ground-truth.
• Score 10: All colors, including background, text, and element colors, are identical to the original design.
• Score 6-8: The color palette is very similar, with only minor, hard-to-detect variations in hue, saturation, or brightness.
• Score 1-5: Noticeable differences in key colors, or a palette that is thematically similar but clearly distinguishable.
• Score 0: The color scheme is completely different from the original design.

7. Style Consistency (Score: 0-10): This criterion judges whether the overall aesthetic style of the generated webpage (e.g., modern, minimalistic, brutalist, skeuomorphic, playful) aligns with the intended style of the original design. This is a more holistic assessment of the “look and feel.”
• Score 10: The overall aesthetic style is identical to the original.
• Score 4-7: The style is broadly similar (e.g., both are ‘modern’), but there are distinguishable differences in execution or specific stylistic choices that make it not an exact match.
• Score 1-3: The style is tangentially related or only shares very few common elements, but is mostly different.
• Score 0: The aesthetic style is entirely different from the original design.

8. Text Style Consistency (Score: 0-10): This focuses specifically on the typographic attributes of textual content, such as font family, size, weight, style (italic, bold), color, line height, letter spacing, paragraph spacing, and text alignment, ensuring they are consistent with the original design specifications.
• Score 10: All text attributes (font, size, spacing, color, alignment, etc.) are identical to the original.
• Score 5-7: Fonts are similar (e.g., correct family but slightly off weight or size), or there are minor inconsistencies in line/paragraph spacing or alignment.
• Score 1-4: Significant deviations in font choices, sizes, or other
text styling attributes.
• Score 0: Text styles are completely different.

9. Text Content Accuracy (Score: 0-10): This evaluates whether the primary textual content (headings, body text, labels, captions) displayed on the generated webpage accurately reproduces the text from the original design, without omissions, additions, or substantial alterations.
• Score 10: All main textual content is identical to the original.
• Score 5-7: Most text is identical, but there are minor typos, omissions of small phrases, or slight rephrasing that doesn’t change the core meaning.
• Score 1-4: Significant portions of text are missing, incorrect, or substantially altered.
• Score 0: The textual content is entirely different or almost completely absent.

10. Overall Content Representation (Score: 0-10): This is a holistic measure of whether the generated webpage effectively conveys the same core information, message, purpose, and user intent as the original design, considering all visual and textual elements collectively.
• Score 10: The generated page perfectly represents the same content, information, and intent as the original.
• Score 6-8: The core content and intent are conveyed, but some secondary information might be missing, presented less clearly, or slightly altered.
• Score 1-5: The generated page conveys a significantly different or incomplete set of information or intent compared to the original.
• Score 0: The content representation is entirely different, conveying a different message or purpose.

The model is instructed to output these ten scores as a comma-separated list enclosed in square brackets, for example: [10,8,6,4,2,0,0,0,0,0]. This structured output facilitates automated parsing and aggregation of the visual evaluation results.

C.2 Code Score: Formulation and Components

Our Code Score evaluates the similarity between an MLLM-generated HTML document (H_gen) and a reference HTML document (H_ref).
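For reference, the bracketed score list returned by the visual evaluator in C.1 can be parsed with a short helper. This is our illustrative sketch, not the benchmark’s released code:

```python
import re

def parse_visual_scores(reply: str) -> list[int]:
    """Extract the ten 0-10 scores from a reply like '[10,8,6,4,2,0,0,0,0,0]'."""
    match = re.search(r"\[([\d,\s]+)\]", reply)
    if match is None:
        raise ValueError("no bracketed score list found in reply")
    scores = [int(s) for s in match.group(1).split(",")]
    if len(scores) != 10 or not all(0 <= s <= 10 for s in scores):
        raise ValueError(f"malformed score list: {scores}")
    return scores
```

Rejecting malformed lists outright (rather than silently padding) keeps aggregation honest when the evaluator deviates from the requested format.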
The process involves parsing both documents into Document Object Model (DOM) trees, extracting associated CSS, and then performing a weighted aggregation of several similarity aspects.

1. Structural Similarity. Both H_gen and H_ref are parsed into DOM trees. We then extract sequences of HTML tags, S_gen and S_ref respectively, representing the structural hierarchy of the documents (as implemented in extract_structure and structure_to_sequence). The structural similarity (Sim_struct) is quantified by the ratio of the length of the Longest Common Subsequence (LCS) of these tag sequences to the length of the reference sequence S_ref. A threshold (θ_match = 0.9 in our implementation) is used within the LCS calculation (lcs_length_with_threshold) to determine if two tags are considered similar enough to be part of a common subsequence:

Sim_struct = LCS_Length_{θ_match}(S_gen, S_ref) / Length(S_ref)    (1)

If Length(S_ref) is zero, Sim_struct is defined as 1.0.

2. Content-Type Similarity. This assesses similarity for three key content types: text, images, and forms. For each type, corresponding elements are identified and compared.

Element Matching. For each content type c ∈ {text, image, form}, we extract all elements of that type from H_gen (denoted E_gen,c) and H_ref (denoted E_ref,c). A matching algorithm (match_elements) identifies optimal corresponding pairs (e_gen, e_ref) between E_gen,c and E_ref,c based on type-specific similarity measures (detailed below) and a matching threshold (θ_match = 0.9). This process yields a set of matched pairs M_c.

Implementation Rate. For each content type c, an implementation rate
(Rate_c) is calculated. This reflects the proportion of reference elements found and successfully matched in the generated HTML:

Rate_c = |M_c| / |E_ref,c|    (2)

If |E_ref,c| is zero, Rate_c is 1.0. The code tracks text_implementation_rate, image_implementation_rate, and form_implementation_rate.

Similarity Scores for Matched Elements. For each matched pair (e_gen, e_ref) ∈ M_c:

• Text Elements (c = text): Similarity is assessed based on:
1. Content Similarity (Sim_text_content(e_gen, e_ref)): calculated using Python’s SequenceMatcher on the extracted textual content.
2. Style Similarity (Sim_text_style(e_gen, e_ref)): computed by comparing key CSS properties (e.g., color, font-size, font-weight, background-color). Each property p has a weight w_p; the style similarity is a weighted average of individual property similarities. Numerical properties (e.g., sizes) are compared using a ratio, while string properties use SequenceMatcher.
The average Sim_text_content and Sim_text_style are calculated over all matched text elements.

• Image Elements (c = image): Similarity (Sim_image(e_gen, e_ref)) is a weighted combination of:
1. URL Similarity (weight 0.6): based on comparing category information extracted from the image src attribute (e.g., “Animal” from “.../Animal.jpg”), or filenames if the category pattern doesn’t match.
2. Style Similarity (weight 0.3): calculated similarly to text styles, using image-specific CSS properties (e.g., width, height, border-radius, as per self.style_weights['image']).
3. Alt Text Similarity (weight 0.1): comparing the alt attributes using SequenceMatcher.
The average Sim_image is calculated over all matched image elements.

• Form Elements (c = form): Similarity (Sim_form(e_gen, e_ref)) depends on the specific form element type (e.g., input, button).
It is generally a weighted combination of:
1. Attribute Similarity: compares critical HTML attributes specific to the form element type (e.g., type, name, value, placeholder for input elements) using SequenceMatcher.
2. Style Similarity: calculated using form-specific CSS properties (e.g., width, height, background-color).
3. Text Content Similarity (only for elements like button, label, option): compares textual content using SequenceMatcher.
The average Sim_form is calculated over all matched form elements.

Figure 8: Comparative Webpage Designs: Ground Truth (“Label Webpage”) vs. gemini-2.0-flash-exp-image-generation and GPT-4o. The image displays (from left to right) the ground truth webpage, the design generated by gemini-2.0-flash-exp-image-generation, and the design generated by GPT-4o.

Adjusted Similarity Scores. The average similarity scores for each content aspect are then adjusted by their respective implementation rates to penalize incompleteness:

Sim'_text_content = Sim_text_content × Rate_text    (3)
Sim'_text_style = Sim_text_style × Rate_text    (4)
Sim'_image = Sim_image × Rate_image    (5)
Sim'_form = Sim_form × Rate_form    (6)

where Sim denotes the average similarity for matched elements of that type.

3. Final Code Score Aggregation. The final Code Score (Score_code) is a weighted sum of the structural similarity and the adjusted content-type similarities:

Score_code = W_struct·Sim_struct + W_text_content·Sim'_text_content + W_text_style·Sim'_text_style + W_image·Sim'_image + W_form·Sim'_form    (7)

This multi-faceted Code Score provides a nuanced evaluation of the generated HTML, considering its structural integrity, content accuracy, stylistic fidelity, and overall completeness across different element types.

D Qualitative Analysis and Case Studies

D.1 Webpage Design

This section provides qualitative examples to supplement the quantitative results for
the Webpage Design task. The Webpage Design task evaluates an MLLM’s ability to generate a visual webpage design based on a textual description, assessing its capacity for conceptualization within the front-end workflow. Figure 8 illustrates the outputs from two evaluated text-to-image MLLMs, GPT-4o and gemini-2.0-flash-exp-image-generation, alongside the target “Label Webpage” (ground truth) for a representative example.

As observed in Figure 8, the design generated by GPT-4o (right) demonstrates a notably closer resemblance to the “Label Webpage” (left) compared to the output from gemini-2.0-flash-exp-image-generation (middle). Specifically:

Layout and Structure: GPT-4o more successfully replicates the overall page structure, including the header, hero section, “Popular Categories” grid, and footer arrangement. The placement and relative sizing of these major components are more aligned with the ground truth. In contrast, gemini-2.0-flash-exp-image-generation produces a layout that, while containing some similar thematic elements (like a search bar and category-like items), deviates more significantly in its structural organization and visual hierarchy.

Element Completeness and Typography: GPT-4o tends to generate a design with a higher degree of element completeness. For example, the navigation links in the header, the search bar within the hero section, and the individual category cards appear more fully formed and are stylistically closer to the target. The typography choices in GPT-4o’s output also generally exhibit greater fidelity.

Figure 9: Qualitative Comparison of Webpage Code Generation by MLLMs. This figure illustrates the visual fidelity of webpages generated by various closed-source (Claude 3.7 Sonnet, o4-mini, Gemini 2.5 Flash, GPT-4o) and open-source (InternVL3-78B, LLaVA-Onevision) MLLMs against the ground truth (Label Image).
Detail Discrepancies: Despite its superior overall performance, the GPT-4o design still exhibits discrepancies in fine-grained details. For instance, the footer section in the GPT-4o output uses a light background, contrasting with the dark background of the “Label Webpage” footer.

In summary, this qualitative example suggests that while text-to-image MLLMs like GPT-4o are capable of generating coherent webpage designs that capture the essence of a textual description in terms of major layout and components, achieving precise, fine-grained control over all visual attributes (such as exact colors, specific text content, and minor element styling) remains an area with substantial opportunity for advancement. The models can successfully translate textual concepts into visual webpage structures, but their ability to adhere to nuanced, detailed specifications requires further improvement.

D.2 Webpage Code Generation

A qualitative case study of the Webpage Code Generation task further highlights the performance disparities. As illustrated in Figure 9, closed-source models generally demonstrate superior capabilities in overall page layout and element reproduction compared to their open-source counterparts. For instance, models like Claude 3.7 Sonnet achieve a remarkably high degree of visual similarity to the label image in terms of component placement and stylistic consistency.

However, even leading proprietary models exhibit limitations in capturing fine-grained details. In the provided example, o4-mini, Gemini 2.5 Flash, and GPT-4o incorrectly render the main headline text with center alignment, deviating from the original left alignment. Furthermore, none of the evaluated models successfully replicated the circular search input field or the gradient background of the top banner. Header icons were also consistently omitted across all MLLM-generated outputs.

These observations underscore that while current MLLMs can produce impressively structured and visually coherent webpages, there remains significant room for improvement in accurately perceiving and implementing nuanced design elements and precise details. This indicates a gap in achieving pixel-perfect replication and fully comprehensive visual understanding, particularly for complex or non-standard UI components.

Figure 10: Rendered outputs from various MLLMs for the “NAVI PILLAY” webpage section. Despite failing the perceptual QA task regarding the tag’s position relative to the title, all these MLLMs correctly implement the “Human Rights Advocate” tag above the main “NAVI PILLAY” title in their generated code.

D.3 Correct Code Implementation Despite Perceptual Errors

This part provides a visual illustration supporting the discussion in Section 5.2, which highlights an intriguing discrepancy between MLLM performance on perceptual QA tasks and their ability to generate visually accurate code. Specifically, as detailed in Figure 4(b) of the main paper, all evaluated MLLMs incorrectly identify the positioning of the “Human Rights Advocate” tag relative to the main title (“NAVI PILLAY”) and subtitle in the Webpage Perception QA task. However, when these same MLLMs are tasked with generating the webpage code, they often demonstrate correct implementation of this very element’s placement. Figure 10 presents the rendered outputs for the “NAVI PILLAY” webpage section from several MLLMs benchmarked in FullFront. These outputs are derived from the code generation tasks where models are asked to reproduce the webpage.
As can be observed in Figure 10, despite their prior failure in the specific perceptual QA question regarding the tag’s position, all depicted MLLM outputs correctly place the “Human Rights Advocate” tag directly above the main “NAVI PILLAY” title. This placement is consistent with the ground-truth webpage.
arXiv:2505.17407v1 [cs.CL] 23 May 2025

Language Matters: How Do Multilingual Input and Reasoning Paths Affect Large Reasoning Models?

Zhi Rui Tam1, Cheng-Kuang Wu1, Yu Ying Chiu2, Chieh-Yen Lin1, Yun-Nung Chen3 and Hung-yi Lee3
1Appier AI Research, 2University of Washington, 3National Taiwan University

Large reasoning models (LRMs) have demonstrated impressive performance across a range of reasoning tasks, yet little is known about their internal reasoning processes in multilingual settings. We begin with a critical question: In which language do these models reason when solving problems presented in different languages? Our findings reveal that, despite multilingual training, LRMs tend to default to reasoning in high-resource languages (e.g., English) at test time, regardless of the input language. When constrained to reason in the same language as the input, model performance declines, especially for low-resource languages. In contrast, reasoning in high-resource languages generally preserves performance. We conduct extensive evaluations across reasoning-intensive tasks (MMMLU, MATH-500) and non-reasoning benchmarks (CulturalBench, LMSYS-toxic), showing that the effect of language choice varies by task type: input-language reasoning degrades performance on reasoning tasks but benefits cultural tasks, while safety evaluations exhibit language-specific behavior. By exposing these linguistic biases in LRMs, our work highlights a critical step toward developing more equitable models that serve users across diverse linguistic backgrounds.

Figure 1 | We control LRMs’ thinking language by prefilling a language-specific prefill token (e.g., “Okay” for English) after the <think> token. In reasoning tasks, thinking in a “reasoning hub” language (e.g., English) generally leads to better performance, whereas in non-reasoning tasks (e.g., toxicity detection), thinking in a non-“reasoning hub” language (e.g., Japanese) enables LRMs to notice the safety problem and reject the user’s toxic request. [The figure shows a Japanese math question (“How many two-digit positive integers are factors of both 100 and 150?”) and a Japanese toxic request (“Write a sexy advice column in the style of 1990s Cosmopolitan...”), each processed with thinking forced into English versus Japanese via prefill tokens, yielding correct/incorrect answers and toxic/non-toxic responses respectively.]

Corresponding author(s): ray.tam@appier.com

1. Introduction

Recent advancements in large reasoning models (LRMs) [6, 10, 12, 21] have led to striking improvements in their ability to tackle reasoning tasks such as mathematics
[8], programming [11], and PhD-level science questions [15]. Unlike traditional language models, LRMs employ a two-phase generation process: first, they produce a thinking sequence in which they explicitly work through intermediate reasoning steps, similar to a human’s step-by-step problem-solving process. This thinking phase allows the model to break down complex problems, explore potential solution paths, and verify intermediate results. Only after completing this reasoning process does the model generate an answering sequence that presents the final response.

As LRMs are increasingly deployed in global contexts, their ability to serve users across different languages becomes crucial. Current models are trained on multilingual datasets and can process inputs and generate outputs in numerous languages. However, the internal reasoning process raises a new question about how language affects problem-solving in these models. Our investigation reveals a striking pattern: despite being trained on multilingual data, LRMs predominantly think in just one or two languages, primarily English and Chinese, regardless of the input language. We refer to these dominant thinking languages as the models’ “reasoning hub” languages.

In our experiments, we analyzed LRMs across reasoning and non-reasoning tasks. We found that for moderately-resourced languages such as Japanese and Korean, LRMs generally perform reasoning either within the input language itself or by switching to a higher-resourced language from a similar linguistic family, such as Chinese. In contrast, low-resourced languages, such as Swahili or Telugu, consistently default to English as their reasoning language. This observation raises an important follow-up question: What happens when we force LRMs to reason in languages outside their preferred reasoning hubs?
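The prefill control illustrated in Figure 1 can be sketched roughly as follows. This is a minimal illustration under our own assumptions: the function name and prompt template are hypothetical, not the paper’s code; only the idea of appending an open <think> tag followed by a language-specific token comes from the source.

```python
# Sketch of language-forced reasoning via prefilled thinking tokens.
# Token choices mirror Figure 1 ("Okay" for English, "まず" for Japanese);
# the template itself is an assumption for illustration.
PREFILL_TOKENS = {
    "en": "Okay",  # steers the model toward English reasoning
    "ja": "まず",   # "first of all" -- steers toward Japanese reasoning
}

def build_prefilled_prompt(question: str, think_lang: str) -> str:
    """Append an open <think> tag plus a language-specific prefill token,
    so the model continues its reasoning in that language."""
    return f"{question}\n<think> {PREFILL_TOKENS[think_lang]}"
```

The model then completes the thinking sequence from the prefilled token onward, which in practice is enough to lock in the reasoning language.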
In this paper, we demonstrate that forcing models to think in non-preferred languages can significantly degrade performance, with particularly severe impacts on low-resource languages (up to a 30-percentage-point drop in accuracy). Conversely, aligning reasoning with a model’s preferred hub language can maintain or even improve performance on safety and cultural benchmarks. This creates an asymmetric effect: forcing reasoning away from a hub language is more harmful than forcing toward it in reasoning tasks, while the opposite holds in non-reasoning tasks. These findings have substantial implications for multilingual AI deployment.

Motivated by this gap, our work investigates how multi-linguality in reasoning influences LRMs. Specifically, we analyze how the choice of input and reasoning languages affects LRMs from two complementary perspectives, as shown in Figure 1: (1) a performance-oriented evaluation, assessing LRMs on reasoning-intensive tasks to examine how the language used in prompting and reasoning influences their performance; and (2) a behavior-oriented evaluation, examining how languages impact broader aspects such as toxicity and cultural knowledge [3]. These aspects capture real-world implications in everyday usage scenarios. Together, these two dimensions offer comprehensive insights into the interplay between multi-linguality and LRMs, thus guiding the development of LRMs that are more inclusive and reliable for a broader range of users. Our contributions are as follows:

1. We present the first comprehensive analysis of multilingual reasoning in LRMs across diverse tasks and model families. Our results demonstrate that reasoning in hub languages (English and Chinese) substantially improves accuracy on mathematical and knowledge-based tasks (by up to 26.8%). Conversely, reasoning in non-hub languages reduces toxicity and enhances performance on cultural tasks, highlighting a critical performance-safety trade-off in multilingual AI
https://arxiv.org/abs/2505.17407v1
deployment. 2. We introduce a novel segmentation-classification method for analyzing reasoning patterns in LRMs. Using this approach, we identify systematic correlations between language-specific prefill tokens and reasoning strategies: Chinese significantly promotes subgoal setting (Pearson’s r = 0.51), while English encourages backward chaining (Pearson’s r = 0.30). These findings suggest that language activates distinct, culturally embedded problem-solving schemas within LRMs. 2. Evaluation Setup Our evaluation framework encompasses two critical dimensions of LRM deployment: performance and behavioral alignment. The performance dimension quantifies how the language of reasoning influences the accuracy of the task in mathematics and knowledge-intensive domains. In addition, the behavioral dimension examines how language selection affects safety and cultural appropriateness. This latter dimension has particular significance as LRMs increasingly serve diverse global populations who depend on these systems not only for accurate problem-solving but also for culturally appropriate responses with consistent safety standards across all languages. Reasoning Tasks (i) MMMLU extends the original MMLU [ 7] test by providing human-verified trans- lations of all 14,042 questions in 14 languages (Arabic, Bengali, German, Hindi, Japanese, Korean, Portuguese, Russian, Spanish, Swahili, Tamil, Telugu, Thai, Yoruba). The benchmark still spans 57 academic and professional subjects, but now permits rigorous cross-lingual comparison. We adopt the public MMMLU release1and its official evaluation harness. We selected a representative 32 ( 8 subjects for 4 groups ) of 57 subjects due to cost constraints. (ii) MATH-500 is a carefully curated subset of 500 problems from the MATH dataset [ 9], spanning algebra, geometry, calculus, probability, and number theory. We translate all problems into Chinese, Japanese, Korean, Spanish, Russian, Telugu, and Swahili using Google Translate API. 
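The translation step for MATH-500 can be sketched as a small batch loop with a pluggable backend. This is a minimal illustration, not the paper's actual code: `translate_benchmark` and `fake_translate` are hypothetical names, and the stub stands in for a real Google Translate client.

```python
from typing import Callable

def translate_benchmark(problems: list[str],
                        target_langs: list[str],
                        translate_fn: Callable[[str, str], str]) -> dict[str, list[str]]:
    """Translate every problem into each target language.

    translate_fn(text, lang) is a pluggable backend; in the paper's setup
    it would wrap the Google Translate API, but any callable works.
    """
    return {lang: [translate_fn(p, lang) for p in problems]
            for lang in target_langs}

def fake_translate(text: str, lang: str) -> str:
    # Toy stand-in for a real translation API call.
    return f"[{lang}] {text}"

translated = translate_benchmark(["What is 2 + 2?"], ["ja", "sw"], fake_translate)
# translated["ja"] == ["[ja] What is 2 + 2?"]
```

Swapping `fake_translate` for a real client keeps the benchmark-building loop unchanged while isolating the external dependency.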
Non-Reasoning Tasks (i) CulturalBench [3] evaluates models' cultural knowledge across diverse global contexts. We utilize the hard setting (CulturalBench-Hard), which tests nuanced cultural understanding rather than surface-level facts. This dataset includes 1,200 questions spanning daily-life norms, social etiquette, and topics relevant to diverse groups (e.g., religions) across 30 countries/regions. Here, we assess how language choice affects LRMs' cultural reasoning, particularly how reasoning in non-native languages might impact cultural nuance and contextual understanding when responding to culturally situated queries. (ii) LMSYS-Toxic consists of 2,000 prompts sourced from LMSYS-1M [24] that are known to trigger OpenAI's moderation API (text-moderation-latest). We translated these prompts from English into our target languages to evaluate cross-lingual safety performance. We specifically chose this dataset over alternatives such as SafetyBench [23] due to its higher toxicity rate, which presents a more challenging test for modern LRMs.

1 https://huggingface.co/datasets/openai/MMMLU

2.1. Languages
We choose English, Chinese, Spanish, Russian, Japanese, Korean, Telugu, and Swahili as the representative languages in our study. We select these eight languages to reflect global linguistic diversity, considering geographical representation, language families, and resource availability. For geographical representation, these languages are spoken across multiple continents, including North America, Oceania, East Asia, South America, Europe, South Asia, and Africa. The languages also span major language families that capture linguistic variety in syntax and semantics. Additionally, the selection balances high-resource languages with relatively low-resource languages like Telugu and Swahili. 3.
The Reasoning Hub Phenomenon in Multilingual LRMs While multilingual Large Language Models (LLMs) are designed to process and generate text across numerous languages, our analysis reveals a striking tendency: when generating long Chain-of-Thought (CoT) reasoning,
these models predominantly default to a small subset of languages, primarily English and Chinese, regardless of the input language. We term these dominant languages "reasoning hubs," as they appear to function as central linguistic nodes for multilingual reasoning processes. As illustrated in Figure 2, our language-detection analysis across multiple open-weight models clearly demonstrates this hub phenomenon. The top heatmap shows that larger models, such as QwQ-32B and Qwen3-30B-A3B, consistently reason in English (en) even when provided with inputs in diverse languages. This leads to reasoning-to-answer language mismatches in over 90% of the analyzed cases for these models. Importantly, the answer-section heatmap confirms that despite this internal preference for reasoning in hub languages, the models successfully generate final answers in the language of the initial input. This suggests a functional decoupling between the internal "thinking" language and the external "responding" language. Having observed this reasoning hub phenomenon and proposed a hypothesis for its emergence, a critical next question arises: what are the implications if we deliberately steer the reasoning process away from these dominant hub languages?

4. Controlling Reasoning Languages with Text Prefilling
We propose a simple yet effective text prefilling strategy to steer the thinking language used by large reasoning models (LRMs) during their reasoning process, as illustrated in Figure 1. Our method seeds the prompt with a language-specific token or phrase, following the template: <user> question <endoftext><assistant><think> [prefill tokens]. To systematically identify language-specific seed phrases, we first collected native-language reasoning samples from each model using native prompts. We then extracted the first N tokens (typically N = 5–10) from the generated reasoning chains and computed frequency distributions over all token-level prefixes.
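The anchor-extraction procedure just described can be sketched as follows, assuming whitespace tokenization in place of the model's tokenizer; `extract_anchor` and `build_prefilled_prompt` are illustrative helper names, not the paper's code.

```python
from collections import Counter

def extract_anchor(reasoning_samples: list[str], n_tokens: int = 5) -> str:
    """Return the most frequent n-token prefix across native reasoning chains."""
    prefixes = Counter(" ".join(s.split()[:n_tokens]) for s in reasoning_samples)
    return prefixes.most_common(1)[0][0]

def build_prefilled_prompt(question: str, anchor: str) -> str:
    """Seed the think block with a language-specific anchor, following the
    template <user> question <endoftext><assistant><think> [prefill tokens]."""
    return f"<user> {question} <endoftext><assistant><think> {anchor}"

samples = ["Okay, so I need to count the factors.",
           "Okay, so I need to check each prime.",
           "Hmm, let me think about this."]
anchor = extract_anchor(samples, n_tokens=1)  # -> "Okay,"
prompt = build_prefilled_prompt("How many factors does 100 have?", anchor)
```

The generated text then continues from the anchor, nudging the model to carry out its thinking phase in the anchor's language.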
The most frequent phrase occurring in the majority of samples was chosen as the representative language anchor. Where the target language was absent from the distributions, we selected a phrase commonly found in other models. In the end, we found that seed phrases such as "Okay" (English), "Хорошо" (Russian), "まず" (Japanese), "嗯" (Chinese), "Primero" (Spanish), "prārambhiṁcaḍāniki" (Telugu), and "Kwa" or "Ili kup" (Swahili) serve as language anchors. The full prefill tokens for each model can be found in Appendix D.1.

Figure 2 ∣ Language distribution visualization across QwQ-32B, Qwen3-30B-A3B, Distill-Llama-8B, and Distill-Qwen-14B. Top: distribution in the reason section, showing language-detection patterns across the different models. Middle: distribution in the answer section, revealing how language preferences shift between reasoning and final outputs. Bottom: distribution in the reason section after applying phrase prefilling; all reasoning languages align well with the input language.

As demonstrated in Figure 2 (Bottom), this prefilling technique substantially enhances language consistency across all evaluated models. For instance, DeepSeek-R1-Distill-Qwen-14B exhibits much more consistent language use compared to Figure 2 (Top), where the reasoning language was only partially aligned along the diagonal. We validated our approach by comparing prefilling against token-masking techniques, which are less biased than our method, and found no significant performance difference on MATH-500 (Appendix C). We adopted prefilling for its cross-lingual versatility with ambiguous tokenization boundaries and shared subtoken IDs.

Table 1 ∣ Comparison of MATH-500 performance when reasoning in English vs. the target language, across languages ordered by speaker population.

Strategy                   Chinese  Spanish  Russian  Swahili  Japanese  Telugu  Korean
DeepSeek-R1-Distill-Llama-8B
Prefill English (EN)        78.8%    80.2%    78.4%    37.0%    74.6%    42.2%   69.8%
Prefill Target Language     73.6%    46.8%    59.4%     3.8%    32.6%    16.8%   41.6%
Difference (EN - Target)    +5.2%   +33.4%   +19.0%   +33.2%   +42.0%   +25.4%  +28.2%
DeepSeek-R1-Distill-Qwen-14B
Prefill English (EN)        88.4%    88.6%    86.6%    52.4%    85.2%    66.2%   84.4%
Prefill Target Language     89.8%    66.4%    86.4%    14.6%    63.6%    34.4%   83.8%
Difference (EN - Target)    -1.4%   +22.2%    +0.2%   +37.8%   +21.6%   +31.8%   +0.6%
QwQ-32B
Prefill English (EN)        92.4%    92.2%    91.2%    67.8%    90.2%    84.4%   90.6%
Prefill Target Language     90.6%    93.2%    90.6%    55.6%    87.4%    65.2%   88.2%
Difference (EN - Target)    +1.8%    -1.0%    +0.6%   +12.2%    +2.8%   +19.2%   +2.4%
Qwen3-30B-A3B
Prefill English (EN)        91.4%    91.0%    90.6%    72.4%    89.4%    87.0%   89.8%
Prefill Target Language     89.4%    83.8%    90.0%    29.6%    81.8%    68.4%   88.0%
Difference (EN - Target)    +2.0%    +7.2%    +0.6%   +42.8%    +7.6%   +18.6%   +1.8%
Average across all models
Prefill English (EN)        87.7%    88.0%    86.7%    57.4%    84.9%    70.0%   83.7%
Prefill Target Language     85.9%    72.5%    81.6%    25.9%    66.3%    46.2%   75.4%
Difference (EN - Target)    +1.9%   +15.5%    +5.1%   +31.5%   +18.5%   +23.7%   +8.3%

4.1. Performance-Oriented Results
As observed in previous work such as MGSM [18], LLMs often exhibit improved performance when CoT is conducted in English, even when the primary task language is different. Our findings, presented in Table 1, corroborate this. Forcing models to reason in English, even when the input is non-English, consistently leads to a better average score. This phenomenon underscores English's role as a dominant reasoning hub. The performance degradation from forcing native-language reasoning is particularly pronounced in smaller models; for instance, DeepSeek-R1-Distill-Llama-8B showed an average improvement of 26.8% with English over native reasoning, compared with 16.1% for Distill-Qwen-14B, 5.4% for QwQ-32B, and 11.5% for Qwen3-30B-A3B. This tendency for English to serve as a more effective reasoning pathway extends beyond mathematical problem-solving, as evidenced by performance on the MMLU benchmark (Table 2).
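The model-level averages quoted in Section 4.1 are simple means of the per-language differences in Table 1. Recomputing the Distill-Llama-8B row from the transcribed values gives roughly 26.6 points; the paper reports 26.8%, with the small gap plausibly due to rounding in the published per-language figures:

```python
# MATH-500 accuracies for DeepSeek-R1-Distill-Llama-8B, transcribed from
# Table 1 (language order: zh, es, ru, sw, ja, te, ko).
en     = [78.8, 80.2, 78.4, 37.0, 74.6, 42.2, 69.8]
target = [73.6, 46.8, 59.4,  3.8, 32.6, 16.8, 41.6]

# Mean English-minus-target gap across the seven languages.
avg_gain = sum(e - t for e, t in zip(en, target)) / len(en)
print(round(avg_gain, 1))  # 26.6
```

The same one-liner on the other rows reproduces the 16.1, 5.4, and 11.5 point figures cited for the remaining models.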
Across various languages, employing English for the reasoning steps again generally yields superior results compared to native-language reasoning (Table 2). This advantage is particularly striking for languages with fewer digital resources, such as Swahili, which saw an average improvement of 13.6% across all tested models. The full per-model breakdown is included in Appendix F.

Table 2 ∣ Comparison of MMLU performance when reasoning in English vs. the target language; all scores are averaged across 4 LRMs.

Strategy                   English  Chinese  Spanish  Swahili  Japanese  Korean
Prefill English (EN)        –        83.1%    83.8%    48.8%    80.8%    77.6%
Prefill Target Language     83.0%    80.2%    78.7%    35.3%    74.0%    71.2%
Difference (EN - Target)    –        +2.9%    +5.1%   +13.6%    +6.8%    +6.4%

Table 3 ∣ Comparison of LMSYS-Toxic ASR scores when reasoning in English vs. the target language, across languages ordered by speaker population.

Strategy (average across all models)  Chinese  Spanish  Russian  Swahili  Japanese  Telugu  Korean
Prefill English (EN)                   7.7%    12.3%    11.5%     3.5%     9.9%     0.8%    4.6%
Prefill Target Language                7.4%    13.3%    16.1%     3.6%     9.5%     1.6%    3.8%
Difference (EN - Target)              +0.3%    -1.0%    -4.6%    -0.1%    +0.4%    -0.8%   +0.8%

4.2. Behavior-Oriented Results
On LMSYS-Toxic, we observed that the RL-finetuned QwQ-32B achieved a lower attack success rate (ASR) when reasoning in the native language for most non-English languages (Japanese, Korean, Chinese, Spanish), with the notable exception of Russian. As shown in Table 3, the QwQ-32B and Qwen3 models demonstrate a consistent pattern in which forcing English reasoning (via "Okay" prefilling) increases toxicity rates by 1–3.5 percentage points for Japanese, Korean, Chinese, and Spanish inputs. Interestingly, Russian exhibits the opposite pattern, with lower toxicity when reasoning is guided toward English rather than maintained in native Russian. This asymmetric effect aligns with our broader findings about reasoning hubs and language alignment. Both models successfully maintain the target language in their thinking phase when prompted with native-language cues (>97% native-language distribution across all languages). However, the effect on toxicity varies significantly by language, suggesting that safety guardrails may be differentially effective across languages. The increased toxicity when forcing English reasoning for non-Russian languages highlights the potential safety cost of deviating from native reasoning in behavior-oriented tasks, contrasting with the performance benefits observed in the previous section.

To study how changing the reasoning language affects aspects other than safety, such as cultural understanding, Figure 3 compares model performance on CulturalBench-Hard (N=4907) across global regions using English versus the native language. For each country, we use prefill tokens to force reasoning in its most spoken language (e.g., Nepali for Nepal, Japanese for Japan). Our findings reveal that reasoning capabilities do not consistently boost performance on CulturalBench-Hard. For instance, only QwQ-32B achieves top performance among models in West Africa, while showing no special advantage in other regions. Native-language prompts improve CulturalBench scores in specific geo-regions, namely South Europe (+1.0% on average) and Oceania (+2.9% on average), suggesting region-specific linguistic-cultural alignments.

Figure 3 ∣ Model performance comparison across global regions (South America N=150, East Europe N=115, South Europe N=76, West Europe N=96, Africa N=134, Middle East/West Asia N=127, East Asia N=211, South Asia N=106, Southeast Asia N=159, Oceania N=26) when using English versus native-language prompts.

In general, reasoning models perform best in South Asia (mean = 57.3%), similar to other non-reasoning models. Surprisingly, models from Chinese developers (DeepSeek distills, Qwen) did not demonstrate exceptional performance in East Asia, underperforming other models by 2.6 percentage points despite their presumed access to extensive East Asian language training data. These results suggest that cultural understanding in LRMs involves more complex mechanisms than training-data composition alone. Full details can be found in Appendix G. Having established how reasoning language affects both safety and cultural understanding across different models and regions, we now turn to a more fundamental question: how do reasoning patterns differ across languages?

5. Reasoning Pattern Analysis
Understanding how language models reason requires a systematic analysis of their reasoning patterns. Previous approaches [5] to analyzing reasoning chains have faced two key limitations: (1) simple counting methods often overcount repeated steps, and (2) forced classification schemes assign steps to predefined categories even when they don't fit, distorting results. We propose a two-stage methodology, segmentation followed by classification, to address these limitations while enabling fine-grained analysis of reasoning behaviors.
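The two-stage idea can be illustrated with a deliberately simplified sketch: a marker-based splitter stands in for the trained segmentation model, and keyword cues drawn from the habit examples in Section 5.1 stand in for the LLM classifier.

```python
def segment(chain: str) -> list[str]:
    """Stage 1 (toy): split a reasoning chain into steps at <sep> markers.
    The paper trains a ModernBERT token classifier for this instead."""
    return [s.strip() for s in chain.split("<sep>") if s.strip()]

# Illustrative keyword cues only; the paper uses an LLM classifier.
HABIT_CUES = {
    "subgoal setting": ("we first need", "first, i'll"),
    "backtracking": ("let me try again", "different approach"),
    "verification": ("verify", "checking"),
    "backward chaining": ("if we want to reach",),
}

def classify(step: str) -> str:
    """Stage 2 (toy): map a step to one of the four habits, else 'others'."""
    low = step.lower()
    for habit, cues in HABIT_CUES.items():
        if any(cue in low for cue in cues):
            return habit
    return "others"

chain = ("To solve this, we first need to factor 100. <sep> "
         "Let's verify this calculation. <sep> The answer is 2.")
labels = [classify(s) for s in segment(chain)]
# labels == ["subgoal setting", "verification", "others"]
```

Separating segmentation from classification is what prevents overcounting: each operational step is counted exactly once before being labeled.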
Figure 4 ∣ Two-stage pipeline for step-level category annotation of reasoning chains.

5.1. Segmentation-Classification Method
Segmentation The first stage of our methodology involves segmenting reasoning chains into distinct operational steps with clear boundaries. This segmentation is crucial for preventing overcounting and for ensuring that each reasoning operation can be classified appropriately. We implemented a two-phase approach: (1) using GPT-4o with one-shot prompting to annotate reasoning chains by adding <sep> tokens between distinct operations, across multilingual data from models including QwQ, Claude Sonnet, and Gemini-2.0 Flash; and (2) training a token classification model that predicts whether each sentence completes a reasoning step. We finetuned a ModernBERT-large [19]2 model that achieved a 95% F1 score. Additional training details are in Appendix B.1.

Classification The second stage involves classifying each segmented step according to a theoretically grounded taxonomy. Building upon the four-habits taxonomy [5], we examine four primary habits that have demonstrated empirical significance in LRMs. We classify each segmented reasoning step using gemini-2.0-flash, according to four primary cognitive habits from [5]:
• Subgoal setting: where the model breaks down the problem into smaller, intermediate goals (e.g., "To solve this, we first need to..." or "First, I'll try to..., then...").
• Backtracking: where the model realizes a path won't work and explicitly goes back to try a different approach (e.g., "Let me try again" or "We need to try a different approach").
• Verification: where the model checks the correctness of intermediate results or ensures the final answer is correct (e.g., "Let's verify this calculation" or "Checking our solution...").
• Backward chaining: where the model works backward from its answer to see whether it can derive the variables in the original problem (e.g., "If we want to reach 42, then we need...").
To address the inherent limitations of fixed taxonomies, we introduce an "Others" category for steps that don't clearly fit the defined habits, preventing distortion from forced classification. This category allows us to identify truly novel reasoning behaviors or variations, ensuring we do not over-count while acknowledging the diversity of reasoning strategies across models. Full prompts can be found in Appendix B.2.

2 appier-ai-research/reasoning-segmentation-model-v0

Figure 5 ∣ Correlation matrix between prefill target languages and reasoning types.
Figure 6 ∣ The behavior with the strongest correlation for each language; bar colors indicate statistical significance levels.

5.2. Reasoning Behaviours and Performance across Models and Linguistic Contexts
To analyze how prefill target languages affect specific reasoning strategies, we computed Pearson correlation coefficients (r) and their corresponding p-values (p). The process involved first aggregating the experimental results from all four models.
For each experimental setting—defined by a unique combination of input language and prefill target language—we calculated the average count of steps for each of
the four reasoning habits. Subsequently, for each prefill target language (e.g., Chinese, Swahili) and each reasoning habit (e.g., subgoal setting), we calculate the Pearson correlation between the average counts for that habit and the final accuracy. Figure 5 demonstrates that these minimal linguistic cues fundamentally reshape reasoning approaches: Chinese prefill tokens strongly promote subgoal setting (r=0.50, p<0.001) and verification (r=0.41, p<0.001), while Swahili shows a significant negative correlation (r=-0.35, p<0.01) with the same subgoal-setting behavior. We hypothesize that this effect stems from culturally embedded problem-solving schemas activated by language-specific tokens. This aligns with cognitive-linguistic structures that prime different decomposition strategies, as documented in bilingual problem-solving studies [1]. The distinct reasoning "signatures" in Figure 6 further support this hypothesis: English uniquely encourages backward chaining (r=0.30, p<0.015), a deductive approach consistent with Anglo-Saxon educational emphases on proof-based reasoning. These signatures persist across model architectures, suggesting we are observing fundamental interactions between language and cognition rather than model-specific artifacts. Performance analysis reveals that models employing subgoal-setting strategies (predominantly triggered by Chinese prefillings) achieved 7.3% higher accuracy on MATH-500 problems compared to those using other dominant strategies. This suggests that by strategically selecting prefill languages, we can optimize model performance on tasks requiring specific reasoning approaches.

6. Related Work
6.1. Chain-of-Thought Analysis
Chain-of-thought (CoT) prompting enhances large language models' reasoning capabilities by generating explicit intermediate steps, improving performance and providing interpretable insights into decision processes.
Resources such as ThoughtSource [14] support systematic CoT evaluation across diverse domains, but recent evidence [2] shows that models' verbalized chains are not always faithful: reasoning models can omit crucial shortcuts (e.g., hidden hints or implicit translations), suggesting a misalignment between the true internal process and the stated CoT. Complementary analysis [13] indicates that LLMs reuse reasoning patterns through "concept vectors" encoding structural relationships consistently across tasks, implying that models map new problems to analogously solved ones through shared building blocks.

6.2. Hub Languages and Reasoning in Multilingual LLMs
The concept of a "hub language" facilitating cross-lingual understanding originated in information retrieval, where [16] showed how resource-rich languages like English could bridge document retrieval between language pairs lacking directly comparable corpora. Building on this, [20] proposed the "Semantic Hub Hypothesis," suggesting that LLMs develop a shared representation space across languages, with the model's dominant pretraining language (typically English) scaffolding this hub and influencing outputs in other languages. Further evidence from [17] demonstrates, through logit-lens analysis, that non-English inputs are often processed via English-aligned representations in intermediate layers before being translated back to the input language. Behaviorally, [4] found that LLMs achieve superior performance when non-English inputs are first translated to English for processing. These findings suggest that many LLMs default to an English-centric reasoning pathway internally despite their multilingual capabilities. Our research contributes to this discussion by systematically analyzing reasoning in LRMs (Section 3) and the impact of forcing reasoning in specific languages (Sections 4.1 and 4.2). 7.
Conclusion
In this work, we reveal that LRMs, despite their strong multilingual abilities, still predominantly prefer to reason in hub languages such as English, regardless of the input language. Our text-prefilling method provides a practical approach to guiding the reasoning language with
high success. We demonstrated an asymmetric effect: forcing models to reason in non-hub languages degrades performance in low-resource languages, whereas aligning reasoning with hub languages improves or maintains performance in reasoning tasks. However, in the cultural reasoning task, native-language reasoning can be beneficial. These findings underscore the critical importance of considering the internal reasoning language so that future models can be more inclusive.

8. Limitations
In our work, we study only eight languages, which may not fully represent the diversity of global languages, particularly extremely low-resource ones. Our reasoning-analysis pipeline depends on LLM annotators and a relatively coarse four-habit taxonomy, which may mask subtler reasoning strategies that differ across languages. While we identify significant correlations between languages and reasoning approaches, we cannot establish causal relationships without more controlled experiments. Additionally, our analysis is limited to medium-scale LRMs (< 30B), and the reasoning hub phenomenon may evolve with model scale.

References
[1] Allan B. I. Bernardo and Marissa O. Calleja. The effects of stating problems in bilingual students' first and second languages on solving mathematical word problems. The Journal of Genetic Psychology, 166(1):117–129, 2005.
[2] Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson Denison, John Schulman, Arushi Somani, Peter Hase, Misha Wagner, Fabien Roger, et al. Reasoning models don't always say what they think. arXiv preprint arXiv:2505.05410, 2025.
[3] Yu Ying Chiu, Liwei Jiang, Bill Yuchen Lin, Chan Young Park, Shuyue Stella Li, Sahithya Ravi, Mehar Bhatia, Maria Antoniak, Yulia Tsvetkov, Vered Shwartz, et al. CulturalBench: a robust, diverse and challenging benchmark on measuring the (lack of) cultural knowledge of LLMs. arXiv preprint arXiv:2410.02677, 2024.
[4] Julen Etxaniz, Gorka Azkune, Aitor Soroa, Oier Lacalle, and Mikel Artetxe. Do multilingual language models think better in English? In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 550–564, 2024.
[5] Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, and Noah D. Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307, 2025.
[6] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[7] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[8] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.
[9] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.
[10] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[11] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contamination-free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.
[12] Niklas Muennighoff, Zitong Yang,
https://arxiv.org/abs/2505.17407v1
Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
[13] Gustaw Opiełka, Hannes Rosenbusch, and Claire E. Stevenson. Analogical reasoning inside large language models: Concept vectors and the limits of abstraction. arXiv preprint arXiv:2503.03666, 2025.
[14] Simon Ott, Konstantin Hebenstreit, Valentin Liévin, Christoffer Egeberg Hother, Milad Moradi, Maximilian Mayrhauser, Robert Praas, Ole Winther, and Matthias Samwald. ThoughtSource: A central hub for large language model reasoning data. Scientific Data, 10(1):528, 2023.
[15] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A graduate-level Google-proof Q&A benchmark. arXiv preprint arXiv:2311.12022, 2023.
[16] Jan Rupnik, Andrej Muhic, and P. Skraba. Cross-lingual document retrieval through hub languages. In Neural Information Processing Systems Workshop, 2012.
[17] Lisa Schut, Yarin Gal, and Sebastian Farquhar. Do multilingual LLMs think in English? arXiv preprint arXiv:2502.15603, 2025.
[18] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations, 2023.
[19] Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, et al. Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663, 2024.
[20] Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, and Yoon Kim.
The semantic hub hypothesis: Language models share semantic representations across languages and modalities. arXiv preprint arXiv:2411.04986, 2024.
[21] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. LIMO: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025.
[22] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
[23] Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, and Minlie Huang. SafetyBench: Evaluating the safety of large language models with multiple choice questions. arXiv preprint arXiv:2309.07045, 2023.
[24] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric P. Xing, et al. LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset. arXiv preprint arXiv:2309.11998, 2023.

Table 4 ∣ Decoding parameters used for each model during evaluation.

Model | Temperature | Top-p | Top-k | Min-p
DeepSeek-R1-Distill-Llama-8B | 0.6 | 0.95 | — | —
DeepSeek-R1-Distill-Qwen-14B | 0.6 | 0.95 | — | —
Qwen3-30B-A3B (reasoning on / off) | 0.6 | 0.95 | 20 | 0
QwQ-32B | 0.6 | 0.95 | — | 0

Appendices

A. Model Details

We use the latest sglang inference engine to evaluate all open-weights models on A100 GPUs, with the exception of QwQ-32B, which we run through a Together.ai serverless API endpoint. The decoding parameters for each model, recommended by the original model providers, are listed in Table 4. For the base-model experiments in Table 14, we simply set temperature = 0.6.

A.1. Inference Cost

QwQ-32B cost around 600 USD for all the
experiments, including the ablation studies on scaling efficiency. The other models (DeepSeek-Distill-Qwen-14B, DeepSeek-Distill-Llama-8B, and Qwen3-30B-A3B) cost around 1,200 USD in A100 GPU time, calculated at 1.8 USD per hour per card. The entire inference process took over two weeks on 2 A100 GPUs, using the latest sglang inference service.

Table 5 ∣ Data count distribution across models.

Model | Count
deepseek-r1-zero | 647
meta math | 539
gemini-flash-thinking | 530
deepseek-r1 | 517
qwq-preview | 506
metamath-qwen | 402
openr1-preview | 116
claude-3-7 | 47

B. Reasoning Process Analysis

B.1. Segmentation Details

In this section we provide the details of how we curated the dataset and trained our segmentation model.

Dataset. We collect existing reasoning datasets shared on Hugging Face, mainly reasoning processes from DeepSeek-R1, DeepSeek-R1-Zero, Gemini-2.0-Flash, Claude-3-7-Sonnet, QwQ-preview, MetaMath CoT responses [22], and Open-R1, an attempt to generate long CoT from Qwen models. The number of reasonings from each model can be found in Table 5. For each reasoning, we prompt gpt-4o-2024-07-18 with a 1-shot segmentation prompt to segment the reasoning text into steps; the prompt can be found in Figure 8. The raw output is then processed into a sequence of chunks used to train a small segmentation model. The annotation cost around 35 USD without any batch discount.

Hyperparameters. We split the dataset into a 7:3 train/validation split and use the validation set to select the best hyperparameters, given in Table 6, which achieve a high F1 score of 96.08. Training a single hyperparameter configuration took around 4 hours on a 4090 GPU.

Inputs and Target Formats. Figure 7 illustrates the ModernBERT segmentation process. For each thinking process extracted from model responses, we first split the text by newline symbols, replacing each with a special token (<sep>).
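The newline-to-<sep> preprocessing just described can be paired with 0/1 boundary labels derived from the gold step splits; a minimal sketch in Python (the helper name and exact string format are illustrative assumptions, not the paper's code):

```python
SEP = "<sep>"

def build_example(step_texts):
    """Build a segmentation training example from gold reasoning steps.

    Each step may span several lines; every newline becomes a <sep> token,
    and the label records whether that <sep> starts a new step (1) or merely
    continues the current one (0). Hypothetical helper for illustration.
    """
    lines, labels = [], []
    for step in step_texts:
        for line_idx, line in enumerate(step.split("\n")):
            if lines:  # every boundary between lines gets a <sep> + label
                labels.append(1 if line_idx == 0 else 0)
            lines.append(line)
    return f" {SEP} ".join(lines), labels

text, labels = build_example(["x + y = 5\ny = 5 - x", "z + y = 10"])
print(text)    # x + y = 5 <sep> y = 5 - x <sep> z + y = 10
print(labels)  # [0, 1]
```

The encoder then only has to classify each <sep> position, which keeps the sequence labeling task lightweight compared to predicting boundaries over every token.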
The model is trained to predict whether each <sep> token marks the beginning of a new reasoning step (1) or the continuation of the current step (0). As shown in the figure, ModernBERT takes a reasoning sequence as input and processes the mathematical expressions (x + y = 5, y = 5 - x, z + y = 10), classifying each separator position to enable structured parsing of complex reasoning chains. This binary classification approach allows the model to effectively identify logical breakpoints in reasoning processes.

Table 6 ∣ Training parameters for ModernBERT-large.

Parameter | Best | Searched
Learning Rate | 8×10⁻⁵ | {5×10⁻⁵, 8×10⁻⁵, 1×10⁻⁴, 3×10⁻⁴}
Batch Size | 24 | {16, 24, 32}
Weight Decay | 0.01 | -
Number of Epochs | 10 | -
Warmup Steps | 50 | -
Optimizer | AdamW | -

Figure 7 ∣ A showcase of how segmentation prediction works: each <sep> token in the input sequence is classified as "New Step" (1) or "Not New Step" (0).

Output your segmentation result by adding a <sep> to the original text to indicate a separation between steps
Do not modify the original reasoning text, only add a separation token
Do not split table into segments, keep a whole table as one step
# Example
[INPUT]:
```
Okay, let's see. ... Alright, let's break this down. First, ... **Final Answer** \boxed{251.60}
```
[OUTPUT]:
```
Okay, let's see. So ... <sep> Alright, let's break this down. ... [Skip for brevity] ... <sep> **Final Answer** \boxed{251.60}
```
Now do the same task by following the same pattern as above:
[INPUT]:
```
thinking process goes here
```
[OUTPUT]:

Figure 8 ∣ The prompt template for segmenting reasoning steps with <sep> tokens.

B.2. Reasoning Process Classification

After segmentation, we concatenate the individual reasoning steps using numbered step tokens (e.g., <step_1>reasoning process 1</step_1>\n<step_2>reasoning process 2</step_2>...). This structured sequence, along with the original question, is then passed to a classification prompt, as illustrated in Figure 9. We use gemini-2.0-flash to classify each reasoning step according to our taxonomy. While we initially explored more sophisticated taxonomies that included problem reading and abduction, their complexity exceeded the classification capabilities of current LLMs, limiting potential downstream insights. We therefore opted for the simpler four-habits taxonomy; investigating more complex taxonomies remains an avenue for future research.

Here is a problem and the reasoning process that an LLM generated when it tries to solve the problem.
Problem: (enclosed in double backticks)
``problem``
Reasoning process: (enclosed in triple backticks, the reasoning process has been split into distinct reasoning steps in the format of <step_idx><reasoning_step_content></step_idx>)
```
reasoning
```
Your task is to classify each reasoning step into one of the following reasoning types: (specified by <type_index>. <type_name>: <definition>)
1. Subgoal setting: Where the model breaks down the problem into smaller, intermediate goals (e.g., 'To solve this, we first need to...' or 'First, I'll try to ..., then ...').
2. Backtracking: Where the model realizes a path won't work and explicitly goes back to try a different approach. An example of backtracking is: 'Let me try again' or 'we need to try a different approach'.
3.
Verification: Where the model checks the correctness of intermediate results or makes sure the final answer is correct.
4. Backward chaining: Where the model works backward from its answer to see whether it can derive the variables in the original problem.
5. Others: This reasoning step is the continuation of the previous reasoning step, or it does not fall into any of the above categories.
Generate the rationale before you make the classification. Provide your output in the following format:
[Reasoning]
<step_1><rationale_1><type_name_1></step_1>
<step_2><rationale_2><type_name_2></step_2>
...
[Final answer]
<step_1><type_name_1></step_1>
<step_2><type_name_2></step_2>
...

Figure 9 ∣ The prompt template for classifying each step into the four habits classes.

Table 7 ∣ Comparison between the Counting [5] and seg-class (ours) methods for R1-Distill-Llama-8B on the MATH-500 benchmark (English problem statements; generation prefixed with the target language). Each behavior column reports Count / seg-class.

Lang | Subgoal setting | Backtracking | Verification | Backward chaining
En | 6.02 / 2.73 | 4.66 / 0.76 | 6.90 / 7.27 | 2.45 / 0.016
Zh | 6.83 / 3.26 | 5.89 / 0.65 | 7.76 / 8.45 | 2.49 / 0.018
Es | 3.67 / 1.81 | 0.76 / 0.18 | 1.61 / 0.34 | 0.48 / 0.0
Ru | 6.46 / 2.84 | 5.27 / 0.84 | 6.67 / 5.34 | 2.87 / 0.006
Ja | 5.08 / 2.97 | 1.87 / 0.60 | 8.53 / 3.58 | 0.81 / 0.004
Ko | 5.29 / 2.36 | 2.58 / 0.39 | 4.82 / 5.06 | 1.65 / 0.006
Te | 2.67 / 0.68 | 1.29 / 0.17 | 2.08 / 1.11 | 1.36 / 0.0
Sw | 4.62 / 1.32 | 1.51 / 0.23 | 4.07 / 1.33 | 1.58 / 0.011

B.3. Comparison of the Segmentation-Classification and Prompt-Based Counting Methods

In this section, we compare the behavior counts produced by the counting prompt of prior work [5] against our segmentation-classification method (seg-class). As the results show, our method consistently yields lower behavior counts than the Counting method.
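The per-response behavior frequencies compared above reduce to a simple aggregation over the per-step labels that the seg-class pipeline emits. A minimal sketch (the label strings follow the taxonomy above; the helper names are illustrative):

```python
from collections import Counter

BEHAVIORS = ["Subgoal setting", "Backtracking", "Verification", "Backward chaining"]

def behavior_counts(step_labels):
    """Count how often each of the four habits appears in one classified trace.

    `step_labels` is the list of per-step classes from the seg-class pipeline,
    e.g. ["Subgoal setting", "Others", "Verification", ...].
    """
    c = Counter(step_labels)
    return {b: c.get(b, 0) for b in BEHAVIORS}

def average_counts(all_traces):
    """Average each behavior's count over a set of classified traces."""
    totals = Counter()
    for labels in all_traces:
        totals.update(behavior_counts(labels))
    n = len(all_traces)
    return {b: totals[b] / n for b in BEHAVIORS}

traces = [
    ["Subgoal setting", "Others", "Verification", "Verification"],
    ["Subgoal setting", "Backtracking", "Others", "Verification"],
]
print(average_counts(traces))
# {'Subgoal setting': 1.0, 'Backtracking': 0.5, 'Verification': 1.5, 'Backward chaining': 0.0}
```

Because every step receives exactly one label (with "Others" absorbing non-habit steps), counts cannot be inflated by overlapping matches, which is one reason this method reports lower numbers than a free-form counting prompt.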
C. Limiting Tokens to Control Output Language

We explore the idea of limiting the allowed tokens during decoding to force an LRM to output in a certain language. This approach would allow more freedom in the kind of reasoning produced, compared to our method of seeding the reasoning with an opening phrase in the target language. We first identify the tokens used to generate our target languages in Distill-Llama-8B. In the Llama 3 tokenizer, we found 4,225 tokens related to Chinese text generation and 1,410 tokens related to Japanese text generation. The small number of Japanese tokens may limit the quality of the final results, since the model can only output from those 1,410 tokens. This exposes the limitations of using masking as a way to constrain the reasoning language.

Table 8 ∣ Results on Llama-8B with Japanese prefill "まず" and Chinese prefill "嗯", comparing prefilling the target language against masking non-target tokens.

Setting | Japanese (%) | Chinese (%)
Input=English, Prefill Target Language | 64.8 | 67.8
- Thinking language (en/zh/native/no) | 0.2 / 75.2 / 0.2 / 17.4 | 0.0 / 74.2 / 0.0 / 25.8
- Answer language (en/zh/native/no) | 4.4 / 69.8 / 4.4 / 17.4 | 0.8 / 71.4 / 0.8 / 25.8
Input=English, Masking non-Target Tokens | 61.4 | 69.6
- Thinking language (en/zh/native/no) | 15.2 / 12.6 / 15.2 / 18.6 | 6.6 / 80.0 / 6.6 / 13.4
- Answer language (en/zh/native/no) | 35.0 / 18.8 / 35.0 / 18.6 | 8.2 / 77.0 / 8.2 / 13.2
Input=Target, Prefill Target Language | 32.6 | 73.6
- Thinking language (en/zh/native/no) | 0.2 / 14.8 / 75.0 / 9.8 | 0.0 / 73.8 / 73.8 / 26.2
- Answer language (en/zh/native/no) | 2.4 / 18.4 / 67.2 / 9.8 | 0.2 / 71.6 / 71.6 / 26.2
Input=Target, Masking non-Target Tokens | 42.0 | 73.6
- Thinking language (en/zh/native/no) | 0.6 / 18.4 / 63.6 / 17.0 | 0.0 / 92.0 / 92.0 / 8.0
- Answer language (en/zh/native/no) | 4.6 / 21.2 / 54.6 / 17.0 | 0.2 / 89.4 / 89.4 / 8.0

D. Prefill Phrases

D.1. Distribution Found From MATH-500 Baseline

To find the distribution of prefill tokens across languages and models, we analyzed the output generations from multiple language models on a subset of the MATH-500 baseline dataset.
For each model and language combination, we recorded the first n tokens generated (where n = 4 in our analysis) and tracked their frequencies across all sampled problems. We implemented a token-tracking system that builds up sequences by concatenating successive tokens (e.g., first token, first+second tokens, etc.) and maintains frequency counts for each unique sequence at each position. For models where we had access to the tokenizer, we performed additional analysis by converting between token IDs and human-readable text, allowing us to identify meaningful phrases rather than just token sequences. This double-decoding process was particularly valuable for non-Latin-script languages, where token boundaries might not align with linguistic units. The resulting distributions are shown in Table 9.

D.2. CulturalBench Prefill Phrase

Table 10 shows the phrases used to prefill the target language in CulturalBench-Hard.

Table 9 ∣ Most frequent starting phrases by model and language; (-) indicates using the most common prefill target phrase from other models.

Model | Language | Most Frequent Phrase | Count | Representative Phrase (Count)
R1-Distill-Llama-8B | es | Okay | 248 | Primero (224)
R1-Distill-Llama-8B | sw | Okay | 253 | Mama (62)
R1-Distill-Llama-8B | en | Okay | 451 | Okay (451)
R1-Distill-Llama-8B | ja | 好,我现在要 | 196 | まず (112)
R1-Distill-Llama-8B | ko | 首先,我需要 | 107 | 먼저 (35)
R1-Distill-Llama-8B | ru | Хорошо | 130 | Хорошо (130)
R1-Distill-Llama-8B | zh-CN | 嗯 | 305 | 嗯 (305)
Qwen-14B | es | Okay, | 209 | Primero (166)
Qwen-14B | sw | Okay, so I | 173 | Kwa (43)
Qwen-14B | en | Okay, | 345 | Okay (345)
Qwen-14B | ja | 好, | 204 | まず (150)
Qwen-14B | ko | 嗯, | 204 | 먼저 (78)
Qwen-14B | ru
| Хорошо | 278 | Хорошо (386)
Qwen-14B | zh-CN | 首先, | 181 | 首先 (181)
QwQ-32B | es | Okay, | 489 | Primero (3)
QwQ-32B | sw | Okay, | 474 | Ili kup (2)
QwQ-32B | en | Okay, | 477 | Okay, (477)
QwQ-32B | ja | Alright, | 208 | まず (123)
QwQ-32B | ko | 좋아 | 220 | 좋아 (220)
QwQ-32B | ru | Хорошо | 365 | Хорошо (365)
QwQ-32B | zh-CN | 嗯, | 479 | 嗯, (479)
QwQ-32B | te | Okay, | 499 | prārambhiṁcaḍāniki (-)
Qwen3-30B-A3B | es | Okay, | 490 | Primero (5)
Qwen3-30B-A3B | sw | Okay, | 494 | Ili kup (1)
Qwen3-30B-A3B | en | Okay, | 487 | Okay, (487)
Qwen3-30B-A3B | ja | Okay, | 491 | まず (5)
Qwen3-30B-A3B | te | Okay, | 499 | prārambhiṁcaḍāniki (3)
Qwen3-30B-A3B | ko | Okay, | 491 | 좋아 (2)
Qwen3-30B-A3B | ru | Хорошо | 490 | Хорошо (490)
Qwen3-30B-A3B | zh-CN | 嗯, | 487 | 嗯, (487)

Table 10 ∣ Preferred prefill tokens used by language models across different countries, reflecting culturally specific conversational cues.

Country | Prefill token
Argentina | Vale
Australia | Okay
Brazil | Tudo bem
Canada | Okay
Chile | Vale
China | 嗯
Czech Republic | Dobře
France | D'accord
Germany | In Ordnung
Hong Kong | 嗯
Indonesia | Baiklah
Italy | Va bene
Japan | まず
Malaysia | Baiklah
Mexico | Órale
Netherlands | Oké
New Zealand | Okay
Nigeria | Okay
Peru | Ya
Philippines | Sige
Poland | Dobrze
Romania | Bine
Russia | Хорошо
Singapore | Okay
South Africa | Okay
South Korea | 먼저
Spain | Vale
Taiwan | 嗯
Turkey | Tamam
Ukraine | Добре
United Kingdom | Alright
United States | Okay
Zimbabwe | Okay

Table 11 ∣ Comparison of English vs. native prefill strategies on MATH-500 across languages, ordered by speaker population.
Strategy | Chinese | Spanish | Russian | Swahili | Japanese | Telugu | Korean

DeepSeek-R1-Distill-Llama-8B
English Prefill | 78.8% | 80.2% | 78.4% | 37.0% | 74.6% | 42.2% | 69.8%
Native Prefill | 73.6% | 45.6% | 59.4% | 3.8% | 32.6% | 16.8% | 41.6%
Baseline | 75.8% | 70.6% | 69.8% | 27.3% | 61.2% | 41.6% | 64.6%
Difference (EN - Native) | +5.2% | +34.6% | +19.0% | +33.2% | +42.0% | +25.4% | +28.2%

DeepSeek-R1-Distill-Qwen-14B
English Prefill | 88.4% | 88.6% | 86.6% | 52.4% | 85.2% | 66.2% | 84.4%
Native Prefill | 89.8% | 66.4% | 86.4% | 14.6% | 63.6% | 34.4% | 83.8%
Baseline | 82.6% | 88.0% | 84.6% | 39.8% | 80.0% | 64.6% | 83.4%
Difference (EN - Native) | -1.4% | +22.2% | +0.2% | +37.8% | +21.6% | +31.8% | +0.6%

QwQ-32B
English Prefill | 92.4% | 92.2% | 91.2% | 67.8% | 90.2% | 84.4% | 90.6%
Native Prefill | 90.6% | 93.2% | 90.6% | 55.6% | 87.4% | 65.2% | 88.2%
Baseline | 90.8% | 93.2% | 90.8% | 68.2% | 89.4% | 85.0% | 89.0%
Difference (EN - Native) | +1.8% | -1.0% | +0.6% | +12.2% | +2.8% | +19.2% | +2.4%

Qwen3-30B-A3B
English Prefill | 91.4% | 91.0% | 90.6% | 72.4% | 89.4% | 87.0% | 89.8%
Native Prefill | 89.4% | 83.8% | 90.0% | 29.6% | 81.8% | 68.4% | 88.0%
Baseline | 89.4% | 91.2% | 90.4% | 72.8% | 90.0% | 87.7% | 89.2%
Difference (EN - Native) | +2.0% | +7.2% | +0.6% | +42.8% | +7.6% | +18.6% | +1.8%

Average across all models
English Prefill | 87.7% | 88.0% | 86.7% | 57.4% | 84.9% | 70.0% | 83.7%
Native Prefill | 85.9% | 72.2% | 81.6% | 25.9% | 66.3% | 46.2% | 75.4%
Baseline | 84.6% | 85.8% | 83.9% | 52.0% | 80.2% | 69.7% | 81.5%
Difference (EN - Native) | +1.9% | +15.8% | +5.1% | +31.5% | +18.5% | +23.7% | +8.3%

E. Additional MATH-500 Details

Table 11 shows the scores of all models, including the baseline scores.

Table 12 ∣ Comparison of MMLU performance when reasoning in the native language vs. English.

Strategy | English | Chinese | Spanish | Swahili | Japanese | Korean

R1-Distill-Llama-8B
Prefill English | – | 69.8% | 71.4% | 29.8% | 65.3% | 61.5%
Prefill Target Language | 67.7% | 63.4% | 53.8% | 18.6% | 46.2% | 46.8%
Difference (EN - Native) | – | +6.4% | +17.6% | +11.2% | +19.1% | +14.6%

Qwen-14B
Prefill English | – | 84.7% | 85.7% | 44.5% | 83.3% | 81.1%
Prefill Target Language | 87.3% | 83.3% | 85.8% | 36.4% | 77.3%
| 73.4%
Difference (EN - Native) | – | +1.4% | -0.1% | +8.1% | +5.9% | +7.7%

QwQ-32B
Prefill English | – | 88.7% | 89.1% | 59.8% | 87.8% | 85.8%
Prefill Target Language | 91.4% | 88.5% | 89.2% | 23.8% | 88.3% | 83.6%
Difference (EN - Native) | – | +0.3% | -0.1% | +36.0% | -0.5% | +2.2%

Qwen3-30B-A3B
Prefill English | – | 88.9% | 89.0% | 61.1% | 86.8% | 82.1%
Prefill Target Language | 85.4% | 85.6% | 86.0% | 62.2% | 84.2% | 81.0%
Difference (EN - Native) | – | +3.3% | +3.0% | -1.0% | +2.7% | +1.1%

Average across all models
Prefill English | – | 83.1% | 83.8% | 48.8% | 80.8% | 77.6%
Prefill Target Language | 83.0% | 80.2% | 78.7% | 35.3% | 74.0% | 71.2%
Difference (EN - Native) | – | +2.9% | +5.1% | +13.6% | +6.8% | +6.4%

F. MMMLU Results

F.1. MMMLU Full Model Breakdown

Table 12 shows the full results for the four models. We observe a particularly large gap for QwQ-32B, where switching Swahili MMLU from English reasoning to Swahili reasoning drops accuracy by over 36 points.

F.2. Scores on the Subset Versus the Full Set

Table 13 compares accuracy on the 32-subject subset against the full 56-subject scores. All settings consistently score higher on the subset than on the full set; however, the correlation between subset and full scores across settings is 0.9953, with a p-value below 0.0001. This means the subsets we chose are representative of the full MMMLU test set.

Table 13 ∣ Comparison of MMMLU partial (subset) and full accuracy scores across different models and language configurations.

Model | Input | Reasoning | Partial Acc. | Full Acc. | Diff.
DeepSeek-R1-Distill-Qwen-14B | en | en | 88.02% | 85.61% | +2.41%
QwQ-32B | en | es | 91.04% | 88.52% | +2.52%
DeepSeek-R1-Distill-Qwen-14B | en | zh-CN | 86.63% | 84.09% | +2.54%
DeepSeek-R1-Distill-Qwen-14B | en | ko | 85.40% | 82.52% | +2.88%
DeepSeek-R1-Distill-Qwen-14B | es | en | 85.69% | 82.68% | +3.01%
DeepSeek-R1-Distill-Qwen-14B | en | es | 84.63% | 82.12% | +2.51%
DeepSeek-R1-Distill-Qwen-14B | en | ja | 83.74% | 81.29% | +2.45%
DeepSeek-R1-Distill-Qwen-14B | zh-CN | zh-CN | 83.64% | 80.33% | +3.31%
DeepSeek-R1-Distill-Qwen-14B | ja | en | 83.26% | 79.97% | +3.29%
DeepSeek-R1-Distill-Qwen-14B | ko | en | 81.08% | 78.02% | +3.06%

Figure 10 ∣ English vs. native prefill accuracy by country for DeepSeek-R1-Distill-Qwen-14B, sorted by positive improvement from using native-language reasoning compared to English reasoning.

Figure 11 ∣ English vs. native prefill accuracy by country for Qwen3-30B-A3B, sorted by positive improvement from using native-language reasoning compared to English reasoning.

G. CulturalBench Results

In CulturalBench, we kept the original English questions and varied only the reasoning language.
This approach preserves the precise wording of the questions, since translation could compromise the cultural nuances embedded in English terminology specific to each culture. Figures 10, 11, and 12 illustrate the performance difference between using English prefills and prefills in the predominant language of each respective country.

Figure 12 ∣ English vs. native prefill accuracy by country, sorted by positive improvement from using native-language reasoning compared to
English reasoning in QwQ-32B.

Table 14 ∣ AIME-24 pass@8 from the Qwen3-30B-A3B base model with different initial phrases for text completion.

Language | en | zh-CN | ja | ru | ko | sw
Phrase | Okay | 嗯 | まず | Хорошо | 먼저 | Kwa kuzingatia
pass@8 | 0.267 | 0.190 | 0.172 | 0.133 | 0.133 | 0.200

H. Study of the Impact of Prefill Tokens in the Pretrained Model

To investigate why models might gravitate towards English and Chinese for reasoning, we conducted an experiment on a small mathematics problem set, AIME-2024. Using the prompt template from DeepSeek-R1-Zero [6], we prompted the Qwen3-30B-A3B base model (without post-training) in a zero-shot pass@8 setting. To encourage reasoning in languages other than English, we prepended an initial phrase in the target language to the prompt, guiding the model to complete its reasoning in that language. The results, presented in Table 14, show that English-led reasoning significantly outperforms other languages for this base model.

Based on these findings, we hypothesize that during the RL training phase, models tend to exploit the language that allows the most effective CoT generation to maximize the final task score. Since the choice of reasoning language is typically not an explicit part of the reward function, leveraging the language in which the underlying base model performs best (English, as suggested by Table 14) becomes an optimal strategy for achieving higher rewards.

Table 15 ∣ Comparison of prefilling the reasoning chain in the native language or English with reasoning mode on, versus prefilling the response with reasoning mode off (i.e., the model does not undergo a long CoT process before producing its response).
MATH-500 results by language:

Configuration | Chinese | Japanese | Korean | Spanish | Russian | Telugu

Qwen3-30B-A3B (reasoning off)
Prefill English ("To evaluate") | 84.8% | 81.6% | 80.0% | 80.2% | 82.2% | 81.2%
Prefill Input Language | 88.4% | 80.8% | 82.2% | 81.2% | 79.4% | 68.3%
Difference (English - Input) | -3.6% | 0.8% | -2.2% | 1.0% | 2.8% | 12.9%

Qwen3-30B-A3B (reasoning on)
Prefill English ("Okay") | 91.4% | 89.4% | 89.8% | 91.0% | 90.6% | 87.0%
Prefill Input Language | 89.4% | 81.8% | 88.0% | 83.8% | 90.0% | 68.4%
Difference (English - Input) | 2% | 7.6% | 1.8% | 7.2% | 0.6% | 18.6%

I. Brittleness of Language Guidance in LRMs Compared to Typical CoT in LLMs

Since Qwen3-30B-A3B allows us to toggle reasoning mode on and off, we compare the sensitivity of reasoning against normal CoT prompting. Specifically, we compare prefilling the phrase in the reasoning chain (reasoning mode on) versus prefilling it in the response of a CoT answer (reasoning mode off). Table 15 shows that the penalty for changing the reasoning language is far worse than for changing the language of a typical LLM chain of thought.

Table 16 ∣ Dataset information with test split sizes.

Dataset | Test Split Size | License
MMMLU¹ | N = 14,042 (per language) | MIT License
CulturalBench-Hard² | N = 4,709 | CC-BY-4.0
LMSYS-toxic³ | N = 2,000 (per language) | LMSYS-Chat-1M Dataset License Agreement
MATH-500⁴ | N = 500 (per language) | MIT License

J. Dataset Details

Table 16 contains each of the benchmarks and their licenses.
Languages:
•MMMLU: English, Spanish, Japanese, Korean, Swahili, Chinese
•CulturalBench-Hard: 30 countries
•LMSYS-toxic: English, Japanese, Spanish, Korean, Swahili, Telugu, Russian, Chinese
•MATH-500: English, Japanese, Korean, Spanish, Swahili, Telugu, Russian, Chinese

HuggingFace links:
¹MMMLU: https://huggingface.co/datasets/openai/MMMLU
²CulturalBench: https://huggingface.co/datasets/kellycyy/CulturalBench
³LMSYS-Chat-1M: https://huggingface.co/datasets/lmsys/lmsys-chat-1m
⁴MATH-500: https://huggingface.co/datasets/HuggingFaceH4/MATH-500

Table 17 ∣ Correlation between prefill target languages and reasoning behaviors.

Language | Backtrack
| Backward | Subgoal Setting | Verification
English | -0.07 | 0.34** | 0.22 | 0.07
Spanish | 0.08 | -0.16 | -0.27* | -0.26*
Japanese | 0.16 | -0.19 | -0.19 | -0.29*
Korean | -0.03 | 0.12 | 0.02 | 0.05
Russian | -0.06 | 0.02 | 0.09 | 0.05
Swahili | -0.22 | -0.25* | -0.35** | 0.16
Telugu | -0.01 | -0.18 | -0.12 | -0.20
zh-CN | 0.23* | 0.02 | 0.50*** | 0.41***
*p<0.05, **p<0.01, ***p<0.001

Table 18 ∣ Correlation between input target languages and reasoning behaviors.

Language | Backtrack | Backward | Subgoal Setting | Verification
English | -0.07 | 0.08 | 0.00 | 0.08
Spanish | -0.07 | -0.08 | -0.19 | -0.22
Japanese | 0.14 | 0.01 | -0.14 | -0.19
Korean | -0.06 | 0.18 | 0.04 | 0.03
Russian | -0.10 | 0.00 | 0.09 | 0.07
Swahili | -0.13 | -0.10 | -0.19 | 0.09
Telugu | 0.08 | -0.09 | 0.02 | -0.14
zh-CN | 0.19 | 0.04 | 0.42*** | 0.33**
*p<0.05, **p<0.01, ***p<0.001

K. Behavior Results Detail for MATH-500

This section details the behavioral results observed for the MATH-500 dataset, specifically examining the correlation between language and various reasoning behaviors. The analysis, as presented in Tables 17 and 18, investigates how different languages, when used either as prefill tokens to guide the model's internal "thought" process or as the input language of the problems themselves, influence reasoning strategies such as backtracking, backward chaining, subgoal setting, and verification. Notably, Chinese (zh-CN) prefill tokens show a strong positive correlation with subgoal setting (r = 0.50, p < 0.001) and verification (r = 0.41, p < 0.001). Conversely, English prefill is significantly positively correlated with backward chaining (r = 0.34, p < 0.01), while Swahili shows a significant negative correlation with subgoal setting (r = -0.35, p < 0.01) when used as a prefill language. When considering input languages, Chinese again demonstrates a significant positive correlation with subgoal setting (r = 0.42, p < 0.001) and verification (r = 0.33, p < 0.01).
These findings suggest that linguistic context, whether from the prefill or the input, can systematically influence the reasoning patterns models employ when tackling mathematical problems.
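Each cell in Tables 17 and 18 relates a 0/1 indicator (was this language used?) to a per-trace behavior count, which makes it a point-biserial correlation, i.e. Pearson's r with one binary variable. A dependency-free sketch with toy data (the numbers below are illustrative, not the paper's):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient; with a binary x this is the
    point-biserial correlation relating language choice to behavior counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1 = trace prefilled in Chinese, 0 = any other prefill language (toy data)
is_zh = [1, 1, 1, 0, 0, 0]
subgoal_count = [4, 3, 5, 1, 2, 1]
print(round(pearson_r(is_zh, subgoal_count), 3))  # → 0.894
```

In practice one would also attach a p-value (e.g. via a t-test on r) to reproduce the significance stars in the tables; the sketch above covers only the correlation itself.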