text string | source string |
|---|---|
task performance of other editing methods after incorporating LyapLock, following the sequential editing of 10,000 samples on the ZsRE dataset using the LLaMA3 model.

Table 4: The editing performance with different hyperparameters α. Here, the abbreviations Eff. (Efficacy), Gen. (Generalization), Spe. (Specificity), Flu. (Fluency), and Consis. (Consistency) denote the respective evaluation metrics; the first five metric columns are on Counterfact and the last three on ZsRE.

| Model | α | Eff.↑ (CF) | Gen.↑ (CF) | Spe.↑ (CF) | Flu.↑ (CF) | Consis.↑ (CF) | Eff.↑ (ZsRE) | Gen.↑ (ZsRE) | Spe.↑ (ZsRE) |
|---|---|---|---|---|---|---|---|---|---|
| LLaMA3 | 20 | 84.60 ±0.36 | 67.16 ±0.42 | 72.54 ±0.28 | 622.65 ±0.19 | 28.31 ±0.11 | 92.27 ±0.17 | 88.12 ±0.22 | 31.85 ±0.22 |
| LLaMA3 | 40 | 91.90 ±0.27 | 77.22 ±0.37 | 70.02 ±0.29 | 620.08 ±0.23 | 29.79 ±0.11 | 94.05 ±0.14 | 89.62 ±0.21 | 31.21 ±0.22 |
| LLaMA3 | 60 | 94.61 ±0.23 | 81.68 ±0.34 | 69.01 ±0.30 | 617.04 ±0.24 | 30.70 ±0.12 | 94.34 ±0.13 | 90.20 ±0.20 | 30.74 ±0.22 |
| LLaMA3 | 80 | 96.11 ±0.19 | 84.30 ±0.31 | 67.70 ±0.30 | 615.52 ±0.23 | 31.67 ±0.12 | 94.02 ±0.14 | 89.18 ±0.21 | 30.27 ±0.22 |
| LLaMA3 | 100 | 96.81 ±0.18 | 86.40 ±0.30 | 67.41 ±0.30 | 611.89 ±0.26 | 32.15 ±0.12 | 93.80 ±0.14 | 88.87 ±0.21 | 30.05 ±0.22 |
| GPT-J | 20 | 94.67 ±0.22 | 77.74 ±0.36 | 71.36 ±0.28 | 621.41 ±0.15 | 38.91 ±0.12 | 98.53 ±0.08 | 93.22 ±0.20 | 26.91 ±0.26 |
| GPT-J | 40 | 98.02 ±0.14 | 85.12 ±0.30 | 69.50 ±0.28 | 619.50 ±0.17 | 40.26 ±0.12 | 99.14 ±0.06 | 94.68 ±0.18 | 26.08 ±0.25 |
| GPT-J | 60 | 99.00 ±0.10 | 88.80 ±0.27 | 68.21 ±0.28 | 618.33 ±0.18 | 40.93 ±0.12 | 98.77 ±0.08 | 93.82 ±0.19 | 25.51 ±0.25 |
| GPT-J | 80 | 99.34 ±0.08 | 90.70 ±0.24 | 67.77 ±0.28 | 615.96 ±0.21 | 41.05 ±0.12 | 98.56 ±0.08 | 93.28 ±0.20 | 25.48 ±0.25 |
| GPT-J | 100 | 99.37 ±0.08 | 91.70 ±0.23 | 66.88 ±0.28 | 614.21 ±0.22 | 41.13 ±0.12 | 98.40 ±0.09 | 93.65 ±0.19 | 25.44 ±0.25 |
| GPT2-XL | 20 | 87.39 ±0.33 | 72.91 ±0.38 | 63.83 ±0.29 | 581.78 ±0.43 | 32.93 ±0.12 | 83.92 ±0.29 | 74.23 ±0.35 | 25.06 ±0.25 |
| GPT2-XL | 40 | 93.00 ±0.26 | 78.53 ±0.35 | 61.91 ±0.29 | 580.30 ±0.43 | 34.03 ±0.13 | 85.09 ±0.28 | 75.35 ±0.34 | 23.64 ±0.25 |
| GPT2-XL | 60 | 94.76 ±0.22 | 80.51 ±0.33 | 60.74 ±0.29 | 577.06 ±0.42 | 34.29 ±0.13 | 84.96 ±0.28 | 74.49 ±0.35 | 22.63 ±0.24 |
| GPT2-XL | 80 | 95.07 ±0.22 | 81.20 ±0.32 | 60.50 ±0.30 | 576.67 ±0.40 | 34.60 ±0.13 | 83.41 ±0.30 | 73.44 ±0.35 | 22.45 ±0.24 |
| GPT2-XL | 100 | 95.18 ±0.21 | 81.14 ±0.32 | 59.75 ±0.29 | 580.22 ±0.39 | 35.14 ±0.13 | 84.40 ±0.29 | 74.63 ±0.35 | 22.89 ±0.24 |

Figure 7: The improvement in editing performance and downstream task performance of other editing methods after incorporating LyapLock, following the sequential editing of 10,000 samples on the CounterFact dataset using the GPT-J model.

Figure 8: The improvement in editing performance and downstream task performance of other editing methods after incorporating LyapLock, following the sequential editing of 10,000 samples on the ZsRE dataset using the GPT-J model.

Table 5: Model Editing Case Study on LLaMA3
Editing Prompt: The mother tongue of Danielle Darrieux is
Edit Target: English
Generation Output:
- FT: Danielle Darrieux's mother tongue is <|begin_of_text|> ://<|eot_id|><|begin_of_text|>://the<|begin_of_text|> ://the<|eot_id|> Moscow<|eot_id|><|eot_id|><|eot_id|>...
- ROME: Danielle Darrieux's mother tongue is istrovstvi istrovstvi azzi GenerationType.scalablytyped.scalablytyped.BLL— ———-</ addCriterion.scalablytyped erveristrovstvi Europe.scalablytypedIona...
- MEMIT: Danielle Darrieux's mother tongue is itionallyulinAdvisoritionallyAdvisorAdvisorenderitionallyitionAdvisorender Patron Patron PatronAdvisor Ibid PatronitionallyAdvisor RussianAdvisor Patronenderitionally...
- PRUNE: Danielle Darrieux's mother tongue is itionallyulinAdvisorAdvisorAdvisorenderAdvisorenderekyllAdvisor CastroAdvisorAdvisorender PatronxAD Patronitionally Patron PatronAdvisoritionallyAdvisor Russian...
- RECT: Danielle Darrieux's mother tongue is: migrationBuilder Nissan Italian Spanish : : : : : : : : : :...
- AlphaEdit: Danielle Darrieux's mother tongue is French English German French French German German Spanish Spanish English Spanish Spanish English...
- LyapLock: Danielle Darrieux's mother tongue is English, as she was born in the United States of America, but she grew up speaking Spanish, her mother's native language... | https://arxiv.org/abs/2505.15702v1 |
Table 6: Model Editing Case Study on GPT-J
Editing Prompt: Percy Snow, the
Edit Target: goaltender
Generation Output:
- FT: The expertise of Percy Snow becomes important when the latter has to decide who can and can't become a pope. The latter is diplomat to the Russian cheshir cheshir cheshir cheshare cheshare...
- ROME: The expertise of Percy Snow becomes important when spiritual...ISC ulk Jewish R leader R ball H ( [...] HBO percent and is lifelongDenver M harmful R [ harmless M (participtr... savings M Italian...
- MEMIT: The expertise of Percy Snow becomes important whenawar [...]englishawar [...] Bronxawar [...] Melbourne worldwide [...] Melbourne Bronx Bronxflight Cuba Mall [...] Melbourne Bronx Bronx Bronx Cuba Bronx Bronx Bronxflight...
- PRUNE: The expertise of Percy Snow becomes important whensong Sloveniaawar [...] Melbourneawar [...] Bangkok Bronxflight Cubaawar [...] Melbourneenko [...] Antarctica Lebanonawar...
- RECT: The expertise of Percy Snow becomes important whenCtransS SpanishS CambridgeS PhiladelphiaS CambridgenC MassachusettsCCCCCCv CambridgeS Portuguesebr Boston...
- AlphaEdit: The expertise of Percy Snow becomes important when it's time to be the The last one to I don't when you Antarctica, Italy, Italy, Italy., and Judaism Judaism Judaism Judaism Judaism Judaism...
- LyapLock: The expertise of Percy Snow becomes important when the goalie is injured. The goalie coach has to know the goaltender's style, how to get him going, how to keep him in the game and how to get him to the point where he's not injured...

Table 7: Model Editing Case Study on GPT2-XL
Editing Prompt: Apple A5 was created by
Edit Target: Google
Generation Output:
- FT: Apple A5 is sold by Google andGoogle , Microsoft in Melbourne, on NBC the model chart Read on NBC to Chennai back Greek in Tokyo that on NBC the model belowmodel with on Microsoft...
- ROME: Apple A5 is sold by Ferrari ˘200e Munich Munich Antarctic AsiaWOR Italy treasurer Judaism watchdog North dazz dime Italy AntarcticaWOR LabourRAW Melt AntarcticaItaly Antarctica...
- MEMIT: Apple A5 is sold by Microsoft Microsoft– Google – Microsoft Apple German Boeing–Microsoft Microsoft Microsoft Microsoft Apple Apple Microsoft B Microsoft Amazon Japan Japan...
- PRUNE: Apple A5 is sold by, Microsoft– Apple Apple N N– N–ch–N and Lincoln ',–N that-N-the Nsu ' (; '; ' H, ¨N-y N,y N K N-y S, B 'N...
- RECT: Apple A5 is sold by Google Google Microsoft Microsoft Microsoft Microsoft Microsoft Microsoft Microsoft...
- AlphaEdit: Apple A5 is sold by Microsoft, but it has been a little slow at the Windows store for the past year. The Northern Territory has 6,000 islands Tuesday, horoscopes restaurant jumbi Un jeung un...
- LyapLock: Apple A5 is sold by Google in Japan. Apple has released an Android-based phone in Japan The new iPhone 7 will also use | https://arxiv.org/abs/2505.15702v1 |
Advancing LLM Safe Alignment with Safety Representation Ranking

Tianqi Du1*, Zeming Wei2*, Quan Chen2*, Chenheng Zhang1, Yisen Wang1,3†
1 State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University
2 School of Mathematical Sciences, Peking University
3 Institute for Artificial Intelligence, Peking University
*Equal contribution. †Corresponding Author: Yisen Wang (yisen.wang@pku.edu.cn).
Preprint. Under review.

Abstract

The rapid advancement of large language models (LLMs) has demonstrated milestone success in a variety of tasks, yet their potential for generating harmful content has raised significant safety concerns. Existing safety evaluation approaches typically operate directly on textual responses, overlooking the rich information embedded in the model's internal representations. In this paper, we propose Safety Representation Ranking (SRR), a listwise ranking framework that selects safe responses using hidden states from the LLM itself. SRR encodes both instructions and candidate completions using intermediate transformer representations and ranks candidates via a lightweight similarity-based scorer. Our approach directly leverages internal model states and supervision at the list level to capture subtle safety signals. Experiments across multiple benchmarks show that SRR significantly improves robustness to adversarial prompts. Our code will be available upon publication.

1 Introduction

Recent large language models (LLMs) have achieved remarkable capabilities across a wide range of tasks. However, this power comes with serious safety and alignment concerns [32, 15, 1]. By default, LLMs have the potential to generate biased, toxic, or harmful content, and adversarial jailbreak prompts can coax an LLM into violating its own content guidelines [20, 33, 50]. These vulnerabilities persist despite extensive alignment efforts during the pre-training and post-training phases [2, 7, 17]. In practice, the potential for harmful outputs and the ability to bypass built-in safeguards raise significant concerns for deploying LLMs in real-world applications.

To mitigate these safety risks, prior work has explored a variety of defense mechanisms. A common strategy is decoding-time intervention, which redirects the decoding logic of the LLM during inference through token distributions [38, 3] or safe prompts [37, 35, 47]. For example, SafeDecoding [38] adjusts the token distribution toward safe response distributions during decoding, while in-context defense [35] aligns the generation distributions to safe contexts with demonstrations. Such interventions can introduce a trade-off between safety and fluency: altering the decoding process may degrade the model's natural performance on benign inputs or increase inference cost. Meanwhile, post-processing-based defenses apply judge LLMs to inspect the harmfulness of LLM outputs [14, 21]. Unfortunately, recent studies have shown that LLM-based safety judges are often overcautious: they flag many benign prompts as unsafe (so-called over-refusal) [22, 36]. This unreliability, i.e., high false-positive rates, limits their practical use, as it can render the model unhelpful even on innocuous tasks.

In this work, we propose an alternative paradigm (which we call Safety Representation Ranking, SRR) for LLM safety that avoids alteration of the base model's generation logic and unreliable external judges.
Our key idea is to generate multiple candidate responses to a given prompt and then rank them by safety using the model's internal representations. This approach is similar to using a learned reward model to select outputs [10, 4, 42], but | https://arxiv.org/abs/2505.15710v1 |
there is an important twist: traditional reward models are trained on the final generated text, often focusing on general measures of quality or alignment. In contrast, our proposed SRR explicitly targets safety by learning directly from the LLM's latent features. Existing external reward models may miss fine-grained safety cues embedded in the LLM's state vectors. Moreover, relying solely on an LLM to judge its own outputs can be unreliable and costly. By delving into the model's internal representation space, SRR can detect subtle safety-critical representations [34, 49, 48] that an output-only classifier might overlook, and do so with a lightweight ranking step at inference time.

The SRR framework works in two phases. First, we identify safety-sensitive representations through contrastive training. We construct safety contrastive groups: for each prompt, we use examples of both safe and harmful responses. We feed these paired responses through the LLM and extract their internal representations. Because the groups are semantically related but differ in safety, we can train a lightweight model (a single-layer Transformer) to distinguish safe vectors from unsafe ones. Through this process, SRR learns which features of the LLM's latent space correlate with safe content. Then, at inference time, we use the learned safety signals to rank candidate responses. In effect, SRR filters among the model's own outputs without changing how they were produced. Because it operates on the outputs after generation, SRR imposes almost no modification to the LLM's decoding logic. Its only overhead is the additional cost of scoring a few extra responses with a small model, which is negligible compared to full decoding.

We conduct comprehensive experiments to validate the effectiveness of the SRR model in identifying safe responses across multiple datasets. Not only does SRR achieve high accuracy on unseen harmful prompts, it also generalizes well across different safety evaluation datasets, demonstrating its prominent generalization ability in terms of safety ranking. Additionally, we extend our analysis to other alignment perspectives like privacy and fairness, which validates the potential of SRR for diverse alignment considerations and broadens its applications.

Grounded in these empirical analyses, we characterize the practicality of SRR for serving as a safeguard module in real-world deployments. First, we incorporate SRR into LLM generation to study how it strengthens robustness against jailbreak attacks. Additionally, we compare the natural performance of SRR with vanilla generation and other defense paradigms. Because SRR only ranks among natural outputs, the quality and correctness on benign queries remain essentially unchanged. Overall, our empirical results suggest that SRR is both a practical and effective module for LLM alignment.

Our contributions in this paper can be summarized as follows:

1. We introduce a novel paradigm, Safety Representation Ranking (SRR), which uses LLM internal representations to rank candidate responses by safety, without altering the model's decoding logic.
2. We demonstrate that SRR accurately selects safe outputs across diverse safety benchmarks, generalizes to novel prompts, and can be adapted to other alignment perspectives like privacy and fairness.
3. We show that integrating SRR into LLM inference significantly reduces harmful outputs under attack, with | https://arxiv.org/abs/2505.15710v1 |
negligible impact on normal task performance.

2 Related Work

LLM Safe Alignment. Ensuring the safe alignment of LLMs has become a longstanding challenge critical to their trustworthy deployment [1, 15, 41]. Specifically, LLMs have shown a tendency to generate harmful responses when confronted with malicious requests. While current alignment techniques have improved at mitigating these risks to some extent, they still tend to be superficial and inadequate [24]. Additionally, inference-time defenses can reduce the success rate of these attacks, but they often struggle with a significant drawback of rejecting benign inputs, leading to over-refusal issues. The underlying mechanism of such issues is that these distribution-based or prompt-based defenses commonly change the decoding strategies of LLMs, making their generation distributions favor refusals. Thus, ensuring safe alignment while maintaining the generation distribution stands as a viable solution to these risks.

Safety Representations of LLMs. Building on representation engineering techniques for LLMs [49, 44], which examine LLM dynamics through the lens of the hidden space with perspective-specific data, recent research has revealed the existence of safety representations within these models [34, 47]. Specifically, low-dimensional and structured representations emerge in the hidden states of LLMs, which indicate their safety status. When these representations are activated in specific directions, the LLM can successfully recognize and refuse harmful prompts that go against its ethical guidelines. Conversely, when the activations move in the opposite directions, the LLM fails to reject harmful inputs and displays jailbreak behavior. This interesting property has attracted significant research interest aimed at locating and interpreting these representations [6, 46]. Nonetheless, effective methods for leveraging them to enhance the safety of LLMs remain underexplored.

Ranking-based LLM Generation. A variety of rule-based generation methods have been proposed to improve language model performance, including top-k sampling [8, 13], temperature-based sampling [9], and nucleus sampling [12]. Beyond these, more refined algorithms have been developed for specific tasks. For example, [30, 31] leverage majority voting to improve Chain-of-Thought (CoT) reasoning. [39, 18, 45] employ carefully designed decoding methods to generate responses that better align with specific requirements in constrained scenarios. Recent studies [26, 29, 43, 28] train additional reward models, sometimes scaled comparably to the base models, to perform reranking for specific tasks. However, these approaches are either rule-based, task-specific, or impose significant computational overhead, inherently limiting their performance potential and application scope. To overcome these limitations, we propose a more general and lightweight ranker to optimize inference-time computation and extend its applicability across diverse tasks.

3 Methodology

In this section, we propose Safety Representation Ranking (SRR), a listwise learning-to-rank framework for scoring LLM responses by safety. Given an instruction, SRR generates a set of candidate completions and ranks them such that safe responses receive higher scores than unsafe ones. The core idea is to extract internal representations from a frozen base LLM and train a lightweight transformer ranker to assess instruction-response compatibility.
Below, we describe the key components of SRR: candidate response generation, the ranker architecture, and optimization with a listwise ranking objective.

3.1 Candidate Response Generation

To construct candidate lists for training, we sample the base LLM multiple times using stochastic decoding with moderate temperature. This yields a diverse set of $m$ plausible responses $\{resp_1, \dots, resp_m\}$ for each instruction. We remove duplicates and include both benign and adversarial candidates by injecting jailbreak prompts [35, 50]. This helps ensure that the candidate pool contains both safe answers and hard negatives (unsafe answers) for training. Each response is labeled with a binary safety tag $y_i \in \{0, 1\}$, where $y_i = 1$ indicates a safe response. For training, we construct tuples of the form $(inst, \{resp_i, y_i\}_{i=1}^{m})$, where each list includes at least one safe and one unsafe response.
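To make this step concrete, here is a minimal sketch of the sampling-and-labeling loop in Hugging Face style. The decoding parameters, the `label_safety` tagger, and the omission of jailbreak-prompt injection are our own illustrative assumptions, not the authors' exact pipeline:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def build_candidate_list(instruction, model, tokenizer, m=8, temperature=0.8):
    """Sample m diverse completions for one instruction via stochastic decoding."""
    inputs = tokenizer(instruction, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,           # stochastic decoding
            temperature=temperature,  # moderate temperature for diversity
            num_return_sequences=m,
            max_new_tokens=128,
        )
    prompt_len = inputs["input_ids"].shape[1]
    responses = [
        tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs
    ]
    responses = list(dict.fromkeys(responses))  # remove duplicates, keep order
    # (Injecting jailbreak prompts to obtain hard negatives is omitted here.)
    # label_safety is a hypothetical stand-in for the binary safety tagging
    # (y_i = 1 for safe) described in Section 3.1.
    return [(resp, label_safety(resp)) for resp in responses]
```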
3.2 Ranker Model Architecture

The core of SRR is a neural ranker that computes a compatibility score between an instruction and each candidate response. We build this ranker as follows:

- Step 1. Representation extraction: We use the base LLM as a fixed feature extractor. For each textual input (instruction or response), we run it through the LLM and take the hidden-state vector at a selected layer as its representation. Concretely, let $h_{inst} \in \mathbb{R}^d$ be the hidden vector for the instruction (the state of the last token in the sequence) at the chosen layer, and let $h_{resp,i} \in \mathbb{R}^d$ be the hidden vector for the $i$-th response. Since the backbone is trained for next-token prediction, the final layers tend to overfit to this specific task. In contrast, intermediate layers typically provide more comprehensive representations of the preceding context, making them better suited for capturing the overall features required for ranking [27]. Therefore, we adopt intermediate layers to capture high-quality semantic content.

- Step 2. Transformer encoder: We map each high-dimensional LLM vector (typically $d = 4096$) to a lower-dimensional space using a shared learned linear projection. This makes the downstream transformer encoder more lightweight and efficient. We concatenate the projected vectors into a sequence:
$$[h_{inst}, h_{resp,1}, \dots, h_{resp,m}]. \qquad (1)$$
This sequence is then passed through a Transformer encoder (single-layer in our implementation). The Transformer's self-attention layers let the instruction embedding interact with each response embedding. After passing through the encoder, we obtain output vectors $o_{inst}$ and $o_{resp,i}$ corresponding to the instruction and each response, respectively. Intuitively, $o_{inst}$ is the contextualized instruction representation (having attended to all responses) and $o_{resp,i}$ is the $i$-th response representation attended to the instruction.

- Step 3. Similarity computation: From these encoder outputs we compute a similarity score $s_i$ for each response. We use cosine similarity:
$$s_i = \cos(o_{inst}, o_{resp,i}) = \frac{o_{inst}^{\top} o_{resp,i}}{\lVert o_{inst} \rVert \, \lVert o_{resp,i} \rVert}. \qquad (2)$$
These scores $s_i \in [-1, 1]$ measure the alignment between the instruction and the responses in the embedding space and are used as unnormalized logits for ranking, with a temperature scaling parameter $\tau$ applied before softmax to control sharpness.
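A minimal PyTorch sketch of Steps 1-3, assuming the hidden states have already been extracted from an intermediate layer of the frozen LLM; the class name, projection width, and head count are illustrative choices (the paper only fixes d = 4096 and the single encoder layer):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRRRanker(nn.Module):
    """Lightweight listwise ranker over frozen-LLM hidden states (Steps 1-3)."""

    def __init__(self, llm_dim=4096, proj_dim=256, n_heads=4):
        super().__init__()
        # Shared linear projection to a lower-dimensional space (Step 2).
        self.proj = nn.Linear(llm_dim, proj_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=proj_dim, nhead=n_heads, batch_first=True
        )
        # Single-layer Transformer encoder, as in the paper's implementation.
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, h_inst, h_resp):
        # h_inst: (d,) instruction hidden state; h_resp: (m, d) response states.
        seq = torch.cat([h_inst.unsqueeze(0), h_resp], dim=0)        # (m+1, d)
        out = self.encoder(self.proj(seq).unsqueeze(0)).squeeze(0)   # (m+1, p)
        o_inst, o_resp = out[0], out[1:]
        # Cosine similarity between instruction and each response (Eq. 2).
        return F.cosine_similarity(o_inst.unsqueeze(0), o_resp, dim=-1)  # (m,)
```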
3.3 Training Objectives and Pipeline

We train the ranker end-to-end (keeping the base LLM frozen) using a listwise ranking loss. For safe/unsafe training, we interpret the similarity scores $s_i$ for a list of $m$ candidates as unnormalized logit scores. We compute a softmax probability for each response:
$$\hat{p}_i = \frac{\exp(s_i/\tau)}{\sum_{j=1}^{m} \exp(s_j/\tau)}. \qquad (3)$$

We also define a ground-truth probability distribution $p^*$ over the list, which places all mass on the safe responses. For instance, if there are $k$ safe responses among the $m$, we set $p^*_i = 1/k$ for each safe response with $y_i = 1$ and $p^*_i = 0$ for unsafe ones with $y_i = 0$. Then we minimize the Kullback–Leibler divergence:
$$D_{KL}(p^* \,\|\, \hat{p}) = \sum_{i=1}^{m} p^*_i \log \frac{p^*_i}{\hat{p}_i}. \qquad (4)$$

This loss, a standard choice [23, 19], encourages the model to assign high probability to safe candidates. In effect, the ranker is trained so that the instruction and safe responses have higher cosine similarity than instruction-unsafe pairs.

3.4 Brief Summary

In conclusion, SRR learns to map instructions and responses into a joint embedding space where safety alignment is captured by similarity. The use of a transformer encoder allows each response to be scored in the context of the instruction and the other candidates. At inference time, for any new instruction and its candidate outputs, the ranker can compute similarity scores and produce a safety-based ranking without further supervision. A pseudo-algorithm is provided in Algorithm 1. First, SRR generates diverse candidate responses for each instruction during the training phase (line 2). It then extracts features from the LLM and uses a transformer-based ranker to compute similarity scores between instructions and responses (lines 3-6). These scores are normalized via softmax and compared to ground-truth probabilities to compute a listwise loss, which is used to update the ranker (lines 7-10). During inference, the algorithm repeats the feature extraction and similarity computation steps to rank responses based on safety.

Algorithm 1: Safety Representation Ranking (SRR)
Require: instructions in training data, LLM f, response generator G, ranker g_θ, temperature τ
Training Phase:
1: for each instruction in training data do
2:   {resp_1, ..., resp_m} ← G(instruction)  ▷ Generate diverse candidate responses
3:   y_i ← safety label for each resp_i  ▷ 1 for safe, 0 for unsafe
4:   h_inst ← f(inst), h_resp,i ← f(resp_i) for i = 1...m  ▷ Extract LLM features
5:   [o_inst, o_resp,1, ...] ← g_θ([h_inst, h_resp,1, ...])  ▷ Transformer-based contextual encoding
6:   s_i ← cos(o_inst, o_resp,i)  ▷ Compute cosine similarity score
7:   p̂_i ← exp(s_i/τ) / Σ_j exp(s_j/τ)  ▷ Normalize scores via softmax
8:   p*_i ← 1/k if y_i = 1, else 0  ▷ Uniform probability on the k safe responses
9:   L ← KL(p* ∥ p̂)  ▷ Listwise loss
10:  Update θ to minimize L
Inference Phase:
11: Given a new instruction and candidates {resp_1, ..., resp_m}
12: Repeat steps 2-6 to compute s_i
13: return responses ranked by descending s_i
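A sketch of the listwise objective of Eqs. (3)-(4), mirroring lines 7-10 of Algorithm 1; the masked-sum formulation that skips the zero-probability (unsafe) entries is our implementation choice for avoiding 0·log 0 terms:

```python
import torch
import torch.nn.functional as F

def listwise_kl_loss(scores, labels, tau=0.1):
    """KL(p* || p_hat) with p* uniform over the k safe responses (Eqs. 3-4).

    scores: (m,) cosine similarities s_i; labels: (m,) binary safety tags y_i.
    """
    log_p_hat = F.log_softmax(scores / tau, dim=-1)       # log p_hat_i (Eq. 3)
    p_star = labels.float() / labels.float().sum()        # 1/k on safe, 0 otherwise
    safe = labels.bool()
    # KL(p* || p_hat) = sum_i p*_i (log p*_i - log p_hat_i); zero-mass terms vanish.
    return torch.sum(p_star[safe] * (torch.log(p_star[safe]) - log_p_hat[safe]))

# One training step (base LLM frozen, only ranker parameters θ updated):
#   scores = ranker(h_inst, h_resp)
#   loss = listwise_kl_loss(scores, y)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```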
4 Evaluation

In this section, we conduct comprehensive evaluations to show the effectiveness of SRR, covering safety, privacy, and bias, starting with the overall setup. To further demonstrate the generality of our approach, we also evaluate its ability on other datasets. We also show that natural performance in math and coding does not deteriorate after attaching the ranker to the model.

4.1 Experiment Set-up

Models and datasets. In our experiments, we apply three popular LLMs: (1) Qwen2.5-7B-Instruct [40], (2) Mistral-7B-v0.3 [16], and (3) Vicuna-7b-v1.5 [48]. For datasets, we apply HarmBench [21], SorryBench [36], and JailbreakBench [5]. The HarmBench dataset here refers to the standard section of HarmBench, which includes 200 different harmful prompts in various areas. SorryBench and JailbreakBench have similar contents. For each dataset, we extract 50 prompts as the training set and use the rest for testing. For each prompt in a dataset, we sample answers from the base model using In-context Attack and In-context Defense, 20 times each. After sampling, we check whether the answers are valid by examining keywords: for safe answers, "Sorry", "unable", "illegal", and "understand"; for harmful answers, "sure" and "certainly". We then filter the answers using these criteria to obtain high-quality data.

Metrics. In the experiments in this section, we request the model to choose between the safe answer and the harmful answer, and verify the correctness of its choice against the label.

Ranker Settings. In all experiments, the rankers are implemented using a single Transformer block; the ranker has fewer than 5M trainable parameters. The rankers operate on features extracted from approximately the bottom 25% of the base model's layers. During training and evaluation, every data group includes two candidate answers, and the ranker is trained to distinguish them as safe or harmful. The hyperparameters are set as follows: learning rate 0.001, weight decay 0.0001, dropout 0.1, and momentum 1.0.

Baseline. The baseline adopts a reward model to rate answers generated by the base model. A pretrained GPT-2 [25] is used as the reward model in the experiment. Small as it may seem, a GPT-2 model is still 20 times larger than the ranker.
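For concreteness, a sketch of the keyword-based validity check described in the setup above; treating the match case-insensitively and requiring at least one keyword are our assumptions about the filter, not a stated rule:

```python
SAFE_KEYWORDS = ["sorry", "unable", "illegal", "understand"]
HARMFUL_KEYWORDS = ["sure", "certainly"]

def is_valid_sample(answer: str, label: int) -> bool:
    """Keep an answer only if it contains a keyword expected for its label."""
    text = answer.lower()  # case-insensitive matching is our assumption
    keywords = SAFE_KEYWORDS if label == 1 else HARMFUL_KEYWORDS
    return any(kw in text for kw in keywords)

# pool = [(answer, label), ...] from In-context Attack / In-context Defense sampling
# filtered = [(a, y) for a, y in pool if is_valid_sample(a, y)]
```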
Table 1: Ranking accuracy of SRR in distinguishing safe and harmful prompts.

| Source Dataset | Method | Qwen | Mistral | Vicuna | Average |
|---|---|---|---|---|---|
| HarmBench | Baseline | 41.18 | 35.21 | 57.60 | 44.66 |
| HarmBench | Ours | 82.35 | 91.55 | 90.40 | 88.10 |
| SorryBench | Baseline | 56.72 | 52.82 | 55.26 | 54.93 |
| SorryBench | Ours | 85.57 | 90.15 | 87.98 | 87.90 |
| JailbreakBench | Baseline | 70.00 | 67.39 | 50.00 | 62.46 |
| JailbreakBench | Ours | 80.00 | 95.65 | 95.24 | 90.30 |

Table 2: Cross-dataset ranking accuracy of SRR in distinguishing safe and harmful prompts.

| Source Dataset | Evaluation Dataset | Qwen | Mistral | Vicuna | Average |
|---|---|---|---|---|---|
| HarmBench | SorryBench | 76.96 | 88.06 | 66.04 | 77.02 |
| HarmBench | JailbreakBench | 80.00 | 93.48 | 85.71 | 86.40 |
| SorryBench | HarmBench | 76.47 | 90.14 | 80.00 | 82.20 |
| SorryBench | JailbreakBench | 77.78 | 89.13 | 76.19 | 81.03 |
| JailbreakBench | HarmBench | 79.41 | 89.44 | 90.40 | 86.42 |
| JailbreakBench | SorryBench | 72.41 | 87.16 | 78.59 | 79.39 |

4.2 Overall Evaluation

We use the Transformer-based ranker to improve the safety of different models on different datasets. As depicted in Table 1, our method greatly outperforms the reward model across all base models and all datasets, with accuracy reaching 90% in many settings. Our lightweight method significantly outperforms the reward model (GPT-2) despite being far smaller in scale. Specifically, on HarmBench the ranker reaches 82.35%, 91.55%, and 90.40% for Qwen, Mistral, and Vicuna respectively; on SorryBench, 85.57%, 90.15%, and 87.98%; and on JailbreakBench, 80.00%, 95.65%, and 95.24%. This suggests that rankers can adapt to even larger models.

4.3 Cross-Dataset Validation

To further evaluate the generalization capability of our SRR framework across different safety benchmarks, we conduct cross-dataset validation experiments: we apply the ranker trained on one dataset to other unseen datasets. This experimental setup helps us demonstrate whether the model can effectively identify and prioritize safe responses regardless of the dataset's specific characteristics or the types of adversarial prompts it contains.

The results in Table 2 show that our SRR framework achieves consistently strong cross-dataset performance across all three LLMs (Qwen, Mistral, and Vicuna). When trained on one dataset and evaluated on another, SRR maintains a high level of accuracy in distinguishing safe from harmful responses. For instance, a ranker trained on HarmBench achieves 77.02% average accuracy on SorryBench and 86.40% on JailbreakBench. Similarly, a ranker trained on SorryBench achieves 82.20% on HarmBench and 81.03% on JailbreakBench. This cross-dataset effectiveness demonstrates that SRR's safety signal is not overly specialized to any particular dataset but instead captures generalizable features of safety within the LLM's internal representations.

This ability to generalize across different safety benchmarks is crucial for real-world deployment. In practical applications, LLMs may encounter a wide variety of adversarial prompts that differ significantly from those seen during training. The strong cross-dataset performance of SRR suggests that it can serve as a robust safeguard module, effectively filtering out harmful responses even when the specific types of attacks vary. This provides evidence that SRR's approach of leveraging internal model representations for safety ranking is both versatile and adaptable to diverse safety challenges.

Table 3: Ranking accuracy of SRR in distinguishing infringement and benign prompts.

| Dataset | Qwen | Mistral | Vicuna | Average |
|---|---|---|---|---|
| Harmcopy | 98.08 | 95.83 | 89.74 | 94.28 |

Table 4: Ranking accuracy of SRR in distinguishing biased and unbiased responses.

| Dataset | Qwen | Mistral | Vicuna | Average |
|---|---|---|---|---|
| Bias Benchmark for QA (BBQ) | 54.82 | 52.09 | 50.64 | 52.52 |

4.4 Extension to Other Alignment Perspectives

In this part, we extend the application of our Safety Representation Ranking (SRR) framework to other critical alignment perspectives beyond general safety, namely privacy and fairness. These dimensions are essential for ensuring that LLMs not only avoid harmful content but also respect user privacy and produce unbiased, equitable responses. Evaluating SRR's effectiveness in these areas helps to demonstrate its versatility and potential for broader alignment applications.

Privacy. To evaluate the potential of SRR in addressing privacy concerns, we conducted experiments on the Harmcopy dataset, which contains prompts related to privacy infringement. The results are presented in Table 3. They show that SRR achieves a high accuracy rate in distinguishing between privacy-infringing and benign prompts across all models. In particular, Qwen demonstrates the highest accuracy of 98.08%, followed by Mistral with 95.83% and Vicuna with 89.74%. The average accuracy across all models is 94.28%, indicating that SRR is effective in identifying privacy-related safety concerns.

This strong performance in the privacy context further validates the generalizability of our approach. The ability to adapt to privacy-specific prompts shows that SRR can capture fine-grained safety signals related to different alignment perspectives beyond general harmful content. By leveraging the internal representations of LLMs, SRR can effectively identify privacy risks without requiring extensive retraining or modification of the underlying model architecture. This makes it a versatile and efficient solution for enhancing the privacy safeguards in LLM applications.
Fairness. To assess the effectiveness of SRR in ensuring fairness, we | https://arxiv.org/abs/2505.15710v1 |
conducted experiments on the BBQ dataset. This dataset is designed to evaluate the model's ability to avoid generating responses that contain biases or unfair content. The results are presented in Table 4. They indicate that SRR achieves moderate accuracy in identifying and mitigating biased or unfair responses. The average accuracy across all models is 52.52%, which is relatively low compared to the results obtained in the privacy and safety evaluations. This suggests that while SRR demonstrates some capability in detecting fairness-related issues, there is still room for improvement in this area.

This performance highlights the complexity of fairness as a multifaceted alignment perspective. Fairness evaluation involves nuanced considerations of various social contexts, cultural factors, and potential biases that may not be as directly reflected in the internal representations of LLMs as other forms of harmful content. Despite this, SRR shows a foundational ability to distinguish between more and less fair responses, indicating that it can serve as a starting point for more specialized fairness enhancements in LLM applications. Overall, the results demonstrate the initial potential of SRR in addressing privacy and fairness concerns and point to future advancement in this critical area of LLM alignment.

4.5 Brief Summary

In this section, we have comprehensively evaluated the effectiveness of our proposed SRR framework across various dimensions of LLM safety and alignment. Our experiments demonstrate that SRR achieves significant improvements in identifying and prioritizing safe responses over harmful ones, with high accuracy across multiple safety benchmarks. The cross-dataset validation further confirms the generalizability of SRR, showing its ability to adapt to different types of adversarial prompts and datasets without being overly specialized. Additionally, our extension to privacy-related prompts reveals SRR's potential in mitigating privacy-infringing outputs, achieving a strong accuracy rate. Even in the context of fairness, where the task is more nuanced, SRR shows a foundational capability to distinguish between biased and unbiased responses, though with moderate accuracy that suggests room for further enhancement. Overall, these results highlight SRR's versatility and effectiveness as a safeguard module that can be integrated into LLM inference to significantly reduce harmful outputs under attack.

Table 5: Real-world ranking accuracy of SRR in distinguishing safe and harmful prompts on HarmBench.

| Method | Qwen | Mistral | Average |
|---|---|---|---|
| First | 82.52 | 54.43 | 68.48 |
| Ranker | 83.22 | 63.29 | 73.26 |

Table 6: Real-world ranking accuracy of SRR in distinguishing safe and harmful prompts on JailbreakBench.

| Method | Qwen | Mistral | Average |
|---|---|---|---|
| First | 16.25 | 32.91 | 24.58 |
| Ranker | 38.75 | 39.24 | 39.00 |

5 Discussion

This section further discusses considerations for deploying SRR in practice. We focus on two fundamental questions: 1. To what extent can SRR mitigate safety alignment issues? 2. How does SRR impact the natural performance of LLMs?

5.1 Real-world Application

Recall that we mainly apply classification accuracy as the main metric to evaluate the precision of SRR in ranking the safety of multiple responses. In this part, we further explore how SRR can improve the safety alignment of LLMs, since aligned LLMs already exhibit a certain robustness against harmful prompts. To this end, we incorporate SRR during real-time inference of the protected LLMs,
rather than classifying simulated harmful or safe responses. We also consider practical jailbreak attacks to demonstrate the robustness of SRR. The baseline in this experiment is "first accuracy", i.e., choosing the answer the base model generates with the highest probability.

The results shown in Tables 5, 6, and 7 demonstrate that SRR significantly enhances the safety alignment of LLMs in real-world applications. When integrated into the inference process of protected LLMs, SRR demonstrates robust performance against practical jailbreak attacks. This indicates that SRR can effectively improve the safety mechanisms of LLMs, reducing their vulnerability to adversarial prompts. By leveraging the model's internal representations, SRR provides an efficient and effective safeguard without compromising the natural performance of the LLMs. Overall, these findings support the practical utility of SRR as a valuable tool for improving the safety and reliability of LLMs in real-world scenarios.

Table 7: Real-world ranking accuracy of SRR in distinguishing safe and harmful prompts on SorryBench.

| Method | Qwen | Mistral | Average |
|---|---|---|---|
| First | 84.28 | 46.22 | 65.25 |
| Ranker | 86.16 | 67.23 | 76.70 |

5.2 Natural Performance

As discussed in earlier sections, a key advantage of SRR is that it does not intervene in the decoding process of the base language model. This allows SRR to be seamlessly applied at inference time without modifying generation behavior, thereby preserving the model's natural task performance. In this section, we empirically validate this claim using a mathematical reasoning benchmark.

We evaluate SRR using the MATH dataset [11], which contains 12,500 competition-level math problems spanning seven topics and five difficulty levels. To assess performance, we extract the final answer from each model-generated response and compare it against the ground-truth answer. We use Qwen2.5-7B-Instruct as the base model. For each instruction, we sample 10 completions and apply the SRR ranker, which is trained solely on safety datasets, to rank them by their predicted safety. The top-ranked response is selected as the final answer. We then compare the answer accuracy of the ranked responses against the accuracy obtained with the base model's default outputs.

Table 8: Accuracy (%) on the MATH dataset when responses are ranked using SRR trained on different safety datasets.

| Source Dataset | Natural | HarmBench | SorryBench | JailbreakBench |
|---|---|---|---|---|
| Accuracy | 68.7 | 69.1 | 68.5 | 68.6 |

The results are shown in Table 8. Across all settings, the accuracy of the SRR-ranked completions remains nearly identical to the base model's natural accuracy (68.7%). Slight fluctuations are observed depending on which safety dataset the ranker was trained on, but these differences fall within the margin of noise and do not indicate a degradation in performance. Notably, this result holds despite the SRR ranker being trained exclusively on safety supervision signals, without any exposure to mathematical reasoning data. This demonstrates that the SRR scoring mechanism does not introduce unintended bias toward specific task domains or alter the correctness of model outputs in benign settings.
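To illustrate the deployment mode evaluated here, a sketch of attaching SRR at inference time: sample candidates, score them with the trained ranker (e.g., the `SRRRanker` sketched in Section 3.2), and return the top-ranked response. Both helper functions are hypothetical placeholders for candidate sampling and intermediate-layer feature extraction:

```python
import torch

@torch.no_grad()
def srr_generate(instruction, base_model, tokenizer, ranker, m=10):
    """Generate m candidates and return the one SRR ranks safest."""
    # sample_candidates is a hypothetical helper wrapping stochastic decoding.
    candidates = sample_candidates(instruction, base_model, tokenizer, m)
    # extract_hidden is a hypothetical helper returning the chosen
    # intermediate-layer hidden state of the last token.
    h_inst = extract_hidden(base_model, tokenizer, instruction)
    h_resp = torch.stack(
        [extract_hidden(base_model, tokenizer, c) for c in candidates]
    )
    scores = ranker(h_inst, h_resp)          # (m,) cosine similarities
    return candidates[int(scores.argmax())]  # top-ranked response
```

Because the base model's decoding is untouched, the only added cost is scoring the m candidates with the small ranker, consistent with the natural-performance results above.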
6 Limitations

Although SRR performs excellently in enhancing the safety of LLMs, a few limitations remain. SRR might need task-specific fine-tuning for optimal performance in certain situations, although the training cost is low. While it generalizes well across multiple safety benchmark datasets, its adaptability to special-domain safety scenarios requires further testing. Also, SRR's effectiveness partly relies on the LLM generating diverse candidate responses; if the responses lack diversity, SRR's performance may suffer. Despite these limitations, SRR remains a robust and practical solution for boosting LLM safety and reliability in various real-world applications.

7 Conclusion

In this paper, we introduced Safety Representation Ranking (SRR), a novel listwise ranking framework that leverages the internal representations of LLMs to select safe responses without altering the model's decoding logic. Through contrastive training, SRR identifies safety-sensitive features within the LLM's hidden states and uses them to rank candidate responses based on safety. Our method not only improves robustness against adversarial prompts but also generalizes well across different safety evaluation datasets. Furthermore, SRR demonstrates potential for addressing other alignment perspectives such as privacy and fairness. Experimental results indicate that SRR significantly reduces harmful outputs under attack while maintaining performance on benign tasks. Overall, SRR serves as a practical and effective safeguard module for LLM alignment, offering a new paradigm for enhancing the safety and reliability of LLMs in real-world applications.

References

[1] Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, et al. Foundational challenges in assuring alignment and safety of large language models. Transactions on Machine Learning Research, 2024.
[2] Yuntao Bai et al. Constitutional AI: Harmlessness from AI feedback, 2022.
[3] Somnath Banerjee, Sayan Layek, Soham Tripathy, Shanu Kumar, Animesh Mukherjee, and Rima Hazra. SafeInfer: Context adaptive decoding time safety alignment for large language models. In AAAI, volume 39, pages 27188–27196, 2025.
[4] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024.
[5] Patrick Chao, Edoardo Debenedetti, Alexander Robey, Maksym Andriushchenko, Francesco Croce, Vikash Sehwag, Edgar Dobriban, Nicolas Flammarion, George J. Pappas, Florian Tramer, Hamed Hassani, and Eric Wong. JailbreakBench: An open robustness benchmark for jailbreaking large language models, 2024.
[6] Jianhui Chen, Xiaozhi Wang, Zijun Yao, Yushi Bai, Lei Hou, and Juanzi Li. Finding safety neurons in large language models. arXiv preprint arXiv:2406.14144, 2024.
[7] Josef Dai, Xuehai Pan, Ruiyang Sun, et al. Safe RLHF: Safe reinforcement learning from human feedback. In ICLR, 2024.
[8] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018.
[9] Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, 2017.
[10] Rasmus Boll Greve, Emil Juul Jacobsen, and Sebastian Risi. Evolving neural turing machines for reward-based learning. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, pages 117–124, Denver, Colorado, USA, 2016. ACM.
[11] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt.
Measuring mathematical problem solving with the MATH dataset. In Advances in Neural Information Processing Systems, 2021.
[12] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020.
[13] Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018.
[14] Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, et al. Llama Guard: LLM-based input-output safeguard for human-AI conversations. arXiv preprint arXiv:2312.06674, 2023.
[15] Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. AI alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852, 2023.
[16] Albert Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. Singh Chaplot, D. de las Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
[17] Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, and Ethan Perez. Pretraining language models with human preferences. In ICML, 2023.
[18] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. Contrastive decoding: Open-ended text generation as optimization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023.
[19] Tianqi Liu, Zhen Qin, Junru Wu, Jiaming Shen, Misha Khalman, Rishabh Joshi, Yao Zhao, Mohammad Saleh, Simon Baumgartner, Jialu Liu, et al. LiPO: Listwise preference optimization through learning-to-rank. arXiv preprint arXiv:2402.01878, 2024.
[20] Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, and Yang Liu. Jailbreaking ChatGPT via prompt engineering: An empirical study, 2023.
[21] Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David Forsyth, and Dan Hendrycks. HarmBench: A standardized evaluation framework for automated red teaming and robust refusal. In ICML, 2024.
[22] Swetasudha Panda, Naveen Jafer Nizar, and Michael L. Wick. LLM improvement for jailbreak defense: Analysis through the lens of over-refusal. In NeurIPS Safe Generative AI Workshop 2024, 2024.
[23] Alberto Purpura, Gianmaria Silvello, and Gian Antonio Susto. Learning to rank from relevance judgments distributions. Journal of the Association for Information Science and Technology, 73(9):1236–1252, 2022.
[24] Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, and Peter Henderson. Safety alignment should be made more than just a few tokens deep. arXiv preprint arXiv:2406.05946, 2024.
[25] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
[26] Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh
Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for LLM reasoning. arXiv preprint arXiv:2410.08146, 2024.
[27] Oscar Skean, Md Rifat Arefin, Yann LeCun, and Ravid Shwartz-Ziv. Does representation matter? Exploring intermediate layers in large language models. arXiv preprint arXiv:2412.09563, 2024.
[28] Luong Trung, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. ReFT: Reasoning with reinforced fine-tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024.
[29] Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, and Tong Zhang. Interpretable preferences via multi-objective reward modeling and mixture-of-experts, 2024.
[30] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023.
[31] Xuezhi Wang and Denny Zhou. Chain-of-thought reasoning without prompting. arXiv preprint arXiv:2402.10200, 2024.
[32] Zhichao Wang, Bin Bi, Shiva Kumar Pentyala, Kiran Ramnath, Sougata Chaudhuri, Shubham Mehrotra, Xiang-Bo Mao, Sitaram Asur, et al. A comprehensive survey of LLM alignment techniques: RLHF, RLAIF, PPO, DPO and more. arXiv preprint arXiv:2407.16216, 2024.
[33] Alexander Wei, Nika Haghtalab, and Jacob Steinhardt. Jailbroken: How does LLM safety training fail? In NeurIPS, 2023.
[34] Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, and Peter Henderson. Assessing the brittleness of safety alignment via pruning and low-rank modifications. In ICML, 2024.
[35] Zeming Wei, Yifei Wang, and Yisen Wang. Jailbreak and guard aligned language models with only few in-context demonstrations. arXiv preprint arXiv:2310.06387, 2023.
[36] Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, et al. Sorry-Bench: Systematically evaluating large language model safety refusal. In ICLR, 2025.
[37] Yueqi Xie, Jingwei Yi, Jiawei Shao, Justin Curl, Lingjuan Lyu, Qifeng Chen, Xing Xie, and Fangzhao Wu. Defending ChatGPT against jailbreak attack via self-reminders. Nature Machine Intelligence, 2023.
[38] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, and Radha Poovendran. SafeDecoding: Defending against jailbreak attacks via safety-aware decoding. In ACL, pages 5587–5605. Association for Computational Linguistics (ACL), 2024.
[39] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, and Radha Poovendran. SafeDecoding: Defending against jailbreak attacks via safety-aware decoding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024.
[40] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[41] Eliezer Yudkowsky. The AI alignment problem: why it is hard, and where to start. Symbolic Systems Distinguished Speaker, 4(1), 2016.
[42] Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. ReST-MCTS*: LLM self-training via process reward guided tree search, 2024.
[43] Yifan Zhang, Ge Zhang,
Yue Wu, Kangping Xu, and Quanquan Gu. General preference modeling with preference representations for aligning language models. arXiv preprint arXiv:2410.02197, 2024.
[44] Yihao Zhang, Zeming Wei, Jun Sun, and Meng Sun. Adversarial representation engineering: A general model editing framework for large language models. arXiv preprint arXiv:2404.13752, 2024.
[45] Yue Zhang, Leyang Cui, Wei Bi, and Shuming Shi. Alleviating hallucinations of large language models through induced hallucinations. arXiv preprint arXiv:2312.15710, 2024.
[46] Yiran Zhao, Wenxuan Zhang, Yuxi Xie, Anirudh Goyal, Kenji Kawaguchi, and Michael Shieh. Identifying and tuning safety neurons in large language models. In ICLR, 2025.
[47] Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie Zhou, Kai-Wei Chang, Minlie Huang, and Nanyun Peng. On prompt-driven safeguarding for large language models. In International Conference on Machine Learning, pages 61593–61613. PMLR, 2024.
[48] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[49] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. Representation engineering: A top-down approach to AI transparency. arXiv preprint arXiv:2310.01405, 2023.
[50] Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023. | https://arxiv.org/abs/2505.15710v1 |
TURNABOUT LLM: A Deductive Reasoning Benchmark from Detective Games

Yuan Yuan*, Muyu He*, Muhammad Adil Shahid, Jiani Huang, Ziyang Li, Li Zhang
University of Pennsylvania; Drexel University
{yyuan86|muyuhe}@upenn.edu, {harry.zhang}@drexel.edu

Abstract

This paper introduces TURNABOUTLLM, a novel framework and dataset for evaluating the deductive reasoning abilities of Large Language Models (LLMs) by leveraging the interactive gameplay of the detective games Ace Attorney and Danganronpa. The framework tasks LLMs with identifying contradictions between testimonies and evidences within long narrative contexts, a challenging task due to the large answer space and diverse reasoning types presented by its questions. We evaluate twelve state-of-the-art LLMs on the dataset, hinting at limitations of popular strategies for enhancing deductive reasoning such as extensive thinking and Chain-of-Thought prompting. The results also suggest varying effects of context size, the number of reasoning steps, and answer space size on model performance. Overall, TURNABOUTLLM presents a substantial challenge for LLMs' deductive reasoning abilities in complex, narrative-rich environments.¹

1 Introduction

Detective stories contain some of the most difficult reasoning problems, meticulously crafted to be intriguing and elusive for even the most intelligent readers. Performing such deduction requires various abilities. Some involve information retrieval from long passages of narrative with attention to particular details. Others involve piecing together facts with knowledge of physical laws, social norms, the timeline of events, and so on. As large language models (LLMs) are increasingly coveted for their reasoning ability, evaluating them on detective stories brings about unique challenges.

Unfortunately, evaluating LLMs' deductive reasoning via detective stories is often infeasible. For example, Sherlock Holmes involves rich reasoning but does not contain explicit questions to pose to models. As a result, existing work that leveraged detective stories for evaluation either only considered simple snippets as the context (Del and Fishel, 2023a) or character relationship prediction as the task (Zhao et al., 2024). Some also focus on textual understanding that requires simple reasoning abilities (Xu et al., 2025). To overcome this limitation, we take advantage of a unique asset, detective games, as their interactive gameplay provides a natural interface for evaluating LLMs.

[Figure 1: diagram of testimonies T1–T4 and evidences E1–E2, with the deduced contradiction: "Sahwit claimed he saw the woman dead at 1PM, but the autopsy says she died between 4 and 5PM."]
Figure 1: An illustration of a problem from Ace Attorney, a detective game where players are instructed to pinpoint a contradiction between a piece of evidence and a testimony. Adapted to a task in TURNABOUTLLM, the input is a list of testimonies and a list of evidences with their corresponding textual descriptions. The output is the pair of testimony (T4) and evidence (E2) that contradict each other. The example shown is from the introductory episode and is likely the easiest.

*Equal contribution.
¹ Our resources can be found at https://github.com/zharry29/turnabout_llm.

We propose TURNABOUTLLM², a framework and textual dataset to evaluate LLMs' deductive
² The name "Turnabout" is a wordplay from Ace Attorney as a nod to the playable character's
knack for completely changing the direction of a trial, against all odds.

Table 1: Qualitative comparison of TURNABOUTLLM against other deductive reasoning benchmarks. There are no previous benchmarks that satisfy all six desiderata simultaneously. Our proposed TURNABOUTLLM is the first benchmark to include symbolic logical annotations (Sym.) for reasoning tasks situated in natural scenarios (Nat.) with super-long contexts (SLC), large answer spaces (LAS), multi-hop (MH) reasoning steps, and heterogeneous (Het.) reasoning types.

| Dataset | Sym. | SLC | LAS | Nat. | MH | Het. |
|---|---|---|---|---|---|---|
| BIG-Bench Hard | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| LogicQA | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| ReClor | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ |
| ZebraLogic | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ |
| ProofWriter | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ |
| FOLIO | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ |
| ProntoQA | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| LogicBench | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| TurnaboutLLM | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

reasoning ability in a long narrative context. TURNABOUTLLM is constructed using two critically acclaimed detective games, Ace Attorney³ and Danganronpa⁴. The core gameplay mechanism, adapted as our task format, is to read through a story, examine existing evidences, examine witness testimonies, deduce likely conclusions, and find a contradiction between an evidence and a testimony in each turn of gameplay, all in text. One example from the 306 turns can be seen in Figure 1. TURNABOUTLLM is superior to existing reasoning benchmarks in that:

1. it includes natural contexts written by human authors that sometimes exceed 100K words;
2. it presents a large answer space that can contain 300 candidate answers;
3. it consists of rigorous yet heterogeneous questions that demand temporal, spatial, behavioral, object-state, causal, and numerical understanding;
4. all of the examples contain expert annotations of evidence spans, context summary, reasoning type, and the complete reasoning steps.

³ https://en.wikipedia.org/wiki/Ace_Attorney
⁴ https://en.wikipedia.org/wiki/Danganronpa

We conducted 26 experiments on 12 state-of-the-art LLMs using TURNABOUTLLM, revealing several intriguing insights detailed in Section 5. The results establish TURNABOUTLLM as a substantial challenge for current LLMs outside their training corpus, as the top-performing DeepSeek-R1 only obtains an accuracy score of 45.72%. We observe that generating extensive reasoning tokens does not directly help model performance but is negatively correlated with accuracy. The traditionally effective Chain-of-Thought prompting method also presents minimal benefits on complex deductive tasks. When presented with excessive contextual information, only large models, not small and medium-sized ones, can leverage needle-in-a-haystack retrieval to improve reasoning outcomes. We find that performance declines as the number of reasoning steps increases but is unaffected by the size of the answer space, and conversely that performance improves with larger parameter counts.

2 Related Work

General Reasoning Benchmarks. To broadly assess models' reasoning capacities, multiple general-purpose benchmarks have been widely studied, including MMLU (Hendrycks et al., 2021), SuperGLUE (Wang et al., 2020), BIG-Bench (Srivastava et al., 2023), and BIG-Bench Hard (Suzgun et al., 2022). While these benchmarks provide a useful overview, they are not exclusively focused on reasoning tasks, resulting in a limited reflection of models' actual reasoning skills. | https://arxiv.org/abs/2505.15712v1 |
In contrast, several benchmarks explicitly tar- get deductive reasoning capacities. LogiGLUE (Luo et al., 2024) integrates 24 reasoning-focused datasets into a unified benchmark. LogiQA (Liu et al., 2020) and ReClor (Yu et al., 2020) draw logical reasoning questions from standardized ex- ams like the LSAT in multi-choice formats. Ze- braLogic (Lin et al., 2025) constructs constraint- satisfaction problems that feature expansive answer spaces. However, these benchmarks lack symbolic annotations of logical structures, limiting insights into underlying reasoning processes. Synthetic Datasets for LLM Reasoning Syn- thetic datasets fulfill the need for symbolic annota- tions by using LLMs to generate examples based on logical rules. PrOntoQA (Saparov and He, 2023) and LogicBench (Parmar et al., 2024) synthesize questions from logical rules applied to ontologi- cal entities, while JustLogic (Chen et al., 2025) uses randomly sampled real-world sentences as premises for reasoning chains. Nonetheless, they typically focus on single inference rules rather than multi-hop reasoning. To address this gap, Multi-LogiEval (Patel et al., 2024) and ProofWriter 2 E3 That's why I thought it was 4:00 PM!Terribly sorry about the misunde- rstanding. There was a voice saying the time... probably from television .T1 T2 T3 Electricity was out from noon to 6 PM on the day. Blackout Record Cindy’s Autopsy E1 E2 Electricity was out 12-6PM Assumption 1 [ E1] Witness heard voice from TV at 4PMAssumption 2 [ T1+T2]If electricity is out, TV must be off Proposition 1 If TV plays sound, the TV must be on Proposition 2 TV can’t be on and off simultaneously Contradiction … … … …Contradiction Benign Benign Benign Benign Temporal & Numerical Causal Physical Reasoning Chain (Labeled) Explanation : This thought is founded in an ungrounded claim. Explanation : N/A Explanation : One cannot hear the time coming from a TV when there is a blackout. Contradiction Frank Sahwit Newspaper salesman who discovered body and saw… Figure 2: An example data point from TURNABOUT LLM , where testimonies, marked as T1 to T3, are shown horizontally in green and evidences E1, E2 and more are shown vertically in orange. In addition to labeling which testimony-evidence pairs are contradictory, we provide a per-contradiction explanation and a ground-truth reasoning chain used to derive the contradiction. Each reasoning chain forms a tree structure: leaf nodes represent observed facts, while internal (non-leaf) nodes correspond to intermediate atomic propositions that perform derivations. (Tafjord et al., 2021), an improvement to RuleTaker (Clark et al., 2020), require models to validate syn- thetic conclusions involving multiple logical steps. However, along with the expert-curated multi-hop FOLIO (Han et al., 2024), these datasets suffer from limited context sizes and answer spaces. Reasoning Datasets from Detective Stories De- tective stories naturally engage readers in multi-hop deduction, thus well-suited for deductive reason- ing evaluations. MuSR (Sprague et al., 2024) and True Detective (Del and Fishel, 2023b) synthesize detective stories from predefined facts or online detective games, yet they face inherent limitations of small context sizes. Benchmarks derived from authentic novels or high-quality puzzles, such as WhoDunIt (Gupta, 2025), DetectBench (Gu et al., 2024), and DetectiveQA (Xu et al., 2025), address this | https://arxiv.org/abs/2505.15712v1 |
context size limitation. However, their answer spaces remain relatively constrained. To the best of our knowledge, there is no existing benchmark that leverages the detective story format to com- bine symbolic annotations with reasoning tasks characterized by large contexts and answer spaces. A comprehensive overview of each benchmark’s attributes is presented in Table 1. 3 Dataset and Task OurTURNABOUT LLM dataset is based on 11 ti- tles of critically acclaimed Ace Attorney series and Danganronpa. In this section, we detail our process of creating the TURNABOUT LLM dataset (Sec- tion 3.1), the additional annotations (Section 3.2), and the overall statistics (Section 3.3).3.1 Data Creation Extraction To obtain data, we crawl and parse an Ace Attorney Wiki5and a Danganronpa archive6. We extract the following data: 1) character in- formation , including name, gender, age, and a description; 2) evidence information7, including name, source, and a description; 3) testimonies in the core gameplay8, including speaker, content, and the correct evidence to present if the testimony can be contradicted; and 4) transcript of the full gameplay9, including dialogues, information, and flavor text, used as the full context. While the games are originally visual novels in nature, we only consider the textual elements, which are suffi- cient for reasoning in most cases. Whenever visuals are indispensable for reasoning, they are manually captioned so that key visual features are provided. Modification Using the data acquired above, we construct each each example, referred to as a turn, as follows. The input to a model is: 1.Ci: information of every character 2.Ei: information of every evidence 3.Ti: an array of testimonies 4.X(optional): a context that may provide addi- tional information required for the reasoning The output of a model is a pair of (Ti,E j)where 5aceattorney.fandom.com/wiki 6lparchive.org/Danganronpa-Trigger-Happy-Havoc/ 7“Evidence” in Ace Attorney” and “Truth Bullets” in Dan- ganronpa. 8“Cross examination” in Ace Attorney and “non-stop de- bate” in Danganronpa. 9Non-core gameplay such as investigation in Ace Attorney or social activities in Danganronpa is lumped into the context. 3 Type Evidence example Testimony example Spatial Death was caused by a gunshot to the chest . ...fired on the English civilian! And from the back ... Temporal Shots were fired just after midnight on 12/25. When she said “ It’s almost Christmas !” shots fired! Causal ...weapon bears the defendant’s prints ... I never touched the murder weapon. Behavioral Victim’s diary: Meet with Hugh. Important. Huge: I didn’t talk to anyone until the final bell. Numerical Cause of death: single blunt force trauma. You see? You hit her twice ! Physical The victim was wearing a plain shirt . He was always walking around with a flowery shirt. Spelling The defendant is Maggey Byrde. The blood writing was the defendant’s name, “ Maggie ”. Table 2: Examples (edited for brevity and clarity) of evidences and testimonies of each reasoning type. an evidence is presented to contradict a testimony. At times, there can be multiple ground-truth pairs. Thus, the task is essentially a multiple-choice for- mat with an action space of |T| × |E|, on the order of hundreds. While our dataset is | https://arxiv.org/abs/2505.15712v1 |
mostly faithful to the original games, we made various types of mod- ification (change of wording, removing turns with loose contradictions, adding information for logic leaps, etc.) to ensure the rigorousness of reasoning. 3.2 Annotations To improve rigorousness of evaluation and enable fine-grained insights into TURNABOUT LLM , we annotate the following aspects of each turn: meta- data, reasoning chains, and reasoning types. Metadata First, we annotate a one-sentence sum- mary of the current story that provides necessary information for identifying the contradiction for each turn. We provide the span from the evidence and from the testimony that critically constitutes the contradiction. We next label whether a turn is self-contained, where a contradiction can be de- ducted using only information of characters, evi- dences, and testimonies, without any other context such as the dialogue transcripts. Whenever a turn is not self-contained, a model needs to perform a needle-in-a-haystack retrieval from the full context (all transcript until the current moment) to gather necessary information (Figure 8). In this case, we manually annotate an expected context span. Reasoning Chain Next, we annotate a reasoning chain used for deriving the contradiction for each turn (Figure 2). A reason chain is a tree structure with three components. First, observed facts, repre- sented as leaf nodes, are paraphrased directly from evidence, testimony, or context. Atomic proposi- tions (non-leaf nodes) are handwritten modus po- nens rules that operates upon the facts and derive new facts. Finally, a contradiction (root node) is implied based on two obviously contradiction facts. As the reasoning in TURNABOUT LLM is basedon natural narrative texts, subjectivity in the rea- soning chain is unavoidable. Therefore, when an- notating the propositions, we uphold the desiderata of only considering general rules in the real world (neglecting what-ifs and extremities) and making them as reasonably atomic as possible. Reasoning Types Lastly, we annotate a fine- grained type of deductive reasoning for each turn. We define 7 reasoning types, including spatial, tem- poral, causal, behavioral, numerical, physical, and spelling with examples shown in Table 2. We as- sign one or more types to a turn based on the type of reasoning that underlies the propositions in the annotated reasoning chain (Figure 2). Each rea- soning category contains a non-trivial number of turns (Figure 3b), demonstrating that our dataset demands heterogeneous reasoning capabilities. On average, annotation for each turn takes 20 minutes for a trained annotator, resulting in a total labor of approximately 100 hours. 3.3 Statistics Table 3 summarizes the statistics of TURNABOUT - LLM . In total, there are 306 turns in TURNABOUT - LLM , with an average of 12 game characters, 38 evidences, 11 testimonies, and 25K text characters. Figure 3a demonstrates a large answer-space inTURNABOUT LLM , with an average of 200 evidence-testimony pairs to choose from. Figure 3b shows the distribution of different types of reason- ing ability required. Combined, these statistics are evidence that TURNABOUT LLM is a challenging and complex benchmark for LLM capabilities. 4 Evaluation Protocol To evaluate a model on the dataset, we extract spe- cific fields from each data | https://arxiv.org/abs/2505.15712v1 |
point in the game to form a single prompt, and we prompt the model one-time for a single turn. The model is asked to give the indices of the contradicting evidence and testimony. As there may be multiple contradicting 4 Statistics AA123 AA456 GAA12 AAI12 DGRP1 Overall # Data points 85 72 43 69 37 306 Avg. context length (# chars) 19K 29K 36K 34K 2.2K 25K Avg. # characters 10.6 13.6 13.2 12.6 17 12.3 Avg./Max. # testimonies 5.9 / 10 5.6 / 8 5.7 / 7 5.1 / 8 6.7 / 11 5.7 / 11 Avg./Max. # evidences 20.2 / 32 21.1 / 33 18.6 / 30 25.3 / 38 18.0 / 21 21.1 / 38 Avg./Max. length of reasoning chain 3.5 / 9 3.8 / 10 3.6 / 6 3.5 / 8 3.3 / 5 3.6 / 10 Table 3: Overall statistics of TURNABOUT LLM , categorized by the incorporated detective game titles. AA123 stands for Phoenix Wright: Ace Attorney Trilogy .AA456 stands for Apollo Justice Ace Attorney Trilogy .GAA12 stands for The Great Ace Attorney Chronicles .AAI12 stands for Ace Attorney Investigations Collection .DGRP1 stands for Danganronpa: Trigger Happy Havoc . 10 20 30 400510 # Evidences# Testimonies (a) An illustration of the number of turns in TURNABOUT LLM (size of each circle) with respect to the number of available evi- dences (horizontal) and testimonies (vertical) to choose from.Spa. Tem. Cau. Beh. Num. Phy. Spe.050100150200 62 38144 73 23168 7 Needed Reasoning Capability# TurnsAA123 AA456 GAA12 AAI12 DGRP1 (b) The number of TURNABOUT LLM turns with respect to the reasoning capabilities required (e.g., Spatial,Temporal, etc.) to find the contradiction, classified by the incorporated title. Figure 3: Illustrations of further statistics of our T URNABOUT LLM dataset. pairs in each turn, we regard the output as correct if the proposed pair is included in the list of ground truth contradicting pairs. Evaluation Metrics We compute the overall ac- curacy of the model as the percentage of correct answers across all turns, and we compute the ev- idence accuracy and testimony accuracy respec- tively as the percentage of correct evidence and testimony presented across all turns. Data Splits We do not endorse any particular train-develop-test split of TURNABOUT LLM and leave that decision to future users. In this work, we treat the entirety of the Ace Attorney dataset as the evaluation set, since we do not attempt any hyperparameter tuning or modeling improvement. Evaluation Settings To better gauge different as- pects of models’ reasoning abilities, we propose 4 variations of the evaluation prompt templates based on available property fields in the data. First, We start with a basic zero-shot prompt10with an av- erage of 1,686 words, which sequentially includes descriptions of all the characters, evidences, and 10Our experiments show that few-shot prompting leads to worse results which are omitted.testimonies in the current turn. In case more con- text than mere evidence descriptions are needed for reasoning, we append a short “context span”, an excerpt from the context field that guarantees to fills in the most relevant context information, to the | https://arxiv.org/abs/2505.15712v1 |
corresponding evidence description. Second, we use a one-shot, Chain-of-Thought (CoT) prompt with an average of 2,280 words, which uses an example to direct the model to think before answering the question. Besides the use of a one-shot example, the prompt adds a “let’s think step by step” instruction at the end of the prompt to enforce the prolonged thinking. We do this for all models except those already trained to do so, such as DeepSeek-R1 or OpenAI’s o-series models. Third, we use a full-context prompt averaging 44K words, which includes the complete context of all prior turns within the same court case leading up to the current one. This is a challenging but realistic setting, as all human players experience the game this way. As such, needle-in-a-haystack retrieval of critical information from the context is necessary for turns that are not self-contained by merely characters, evidences, and testimonies. Fourth, to study whether the model is memoriz- ing the game from its training corpus, we provide 5 DS-R1-8BL3.1-8BDS-R1-32B DS-R1-70B G4.1-Mini L3.1-70B O3-MiniDS-CO4-MiniG4.1Q-32B DS-R015304560 9.6716.3618.825.1923.0527.5134.9436.4733.0936.0945.72 10.41Accuracy (%)Base Prompt Correct Evidence Correct Testimony Figure 4: Performance comparison on TURNABOUT LLM across 12 models, ordered from left to right. Bars indicate correctness accuracy (%) using a base prompt, along with accuracy for evidence and testimony. For models without native reasoning capabilities, arrows show the performance change when applying chain-of-thought prompting. 2 3 4 5 60255075100 # Reasoning stepsAccuracy (%)DeepSeek Q-32B GPT Llama (a) Average accuracy among each model family declines as the number of anno- tated reasoning steps increases.Spa. Tem. Cau. Beh. Num. Phy. Spe. Reasoning typeDS-R1 DS-V3 Q-32B G4.1 O4-M O3-M (b) Accuracy with respect to the reasoning types. While performance vary a lot across models, causal reasoning is usually the weakest.<60<85<105<126<150<180≥180 Answer space: |T| × |E|DS-R1 Q-32B G4.1 O4-M (c) Accuracy with respect to size of answer space. Results does not show strong negative correlation. Figure 5: Model accuracies plotted against the number of reasoning steps, required reasoning types, and size of answer space. Due to space constraints, we only show the performance of 6 representative models. A more comprehensive illustration is shown in the appendix. anablation prompt with an average of 537 words where all descriptions of the characters and evi- dences are removed. The model will have to reason based on the names of the characters and evidences alone, which is often insufficient. Therefore, we would expect a significant drop in its performance if it does not memorize key events in the game. As is previously discussed, evidences and some- times testimonies come with images that are occa- sionally crucial for reasoning about the contradic- tion. While we have fully captioned them in this work, we also provide all the images and clearly la- bel whenever they are required so that a multimodal evaluation is available for future work. Experiments We evaluate 12 LLMs on our 4 variations of prompts. The LLMs come from 4 model families: the DeepSeek series which in- cludes the 671B DeepSeek-R1 (DS-R1) and V3 (DS-V3) and the smaller distilled DeepSeek-R1- 70B (DS-R1-70B), DeepSeek-R1-32B (DS-R1- 32B), and DeepSeek-R1-8B | https://arxiv.org/abs/2505.15712v1 |
(DS-R1-8B) models,the OpenAI family including GPT-4.1 (G4.1), GPT- 4.1-mini (G4.1-M) and the reasoning models o3- mini (O3-M) and o4-mini (O4-M), the Llama-3.1- instruct family including Llama-70B (L3.1-70B) and Llama-8B (L3.1-8B), and the reasoning model QwQ-32B (Q-32B) exceling in reasoning and cod- ing. Except for OpenAI models and the two largest DeepSeek models that are run via their APIs, we run all other models locally on 8 H100 GPUs using HuggingFace and KANI (Zhu et al., 2023) . 5 Results and Analysis In this section, we present our primary empirical findings regarding LLMs’ reasoning abilities. We begin by highlighting the overall accuracies of all 12 models on TURNABOUT LLM summarized in Figure 4. Subsequently, we provided detailed anal- yses that dissect model performance by factors such as numbers of reasoning steps (Figure 5a), reasoning types (Figure 5b), answer space sizes (Figure 5c), numbers of reasoning tokens (Figure 6) and prompting strategies (Figure 4, 7). 6 L3.1-8B L3.1-70B G4.1-M G4.1 Q-32B DS-R15010050010005000# Reasoning TokensCorrect Answers Incorrect AnswersFigure 6: Distributions of the number of generated reasoning tokens, separated by whether a correct answer is derived.L3.1-8B L3.1-70B G4.1-M G4.1 DS-R10204060 −2.60−4.94 ±0+9.63+7.07Accuracy (%)w/o full context w/ full context Figure 7: Model performance with or without pro- viding full story context within the prompt. The dataset poses a significant challenge in long- context deductive reasoning for state-of-the-art models. All 12 models demonstrate consider- able diffuculty in correctly identifying evidence- testimony pairs within TURNABOUT LLM (Fig- ure 4). Among them, DS-R1 achieves the highest accuracy of 45.72% using the basic prompt. All models, except G4.1, achieve higher accuracy in selecting the correct evidence than in selecting the correct testimony. This trend aligns with the fact that there are typically fewer candidate evidences than testimonies to evaluate. These findings illus- trate that TURNABOUT LLM represents a substan- tial challenge for even the most advanced LLMs. Minimal memorization makes the dataset a reli- able independent benchmark for LLMs. The dataset is uncontaminated by the models’ training corpus, as is suggested by the performances of 4 models evaluated on the ablation prompt with no evidence descriptions. Scoring consistently at merely 15% on average, these models’ reasoning traces reveal that they are making the most likely “bet” based on evidence names alone. Therefore, we conclude that major models only have minimum memorization and that TURNABOUT LLM estab- lishes a novel and fair ground for LLM evaluations. Incorrect results consume more reasoning to- kens than correct ones, and more output tokens do not necessarily yield better results. We de- fine “reasoning tokens” as intermediate tokens gen- erated by the model before arriving at the final answer. Across all models, incorrect responses ex- hibit higher median and maximum numbers of rea- soning tokens compared to correct ones (Figure 6), indicating a negative correlation between model accuracy and the number of reasoning tokens. This potentially shows that when the model produces incorrect answers, outputing additional reasoning tokens does not yield more improvements.We observe a surplus of reasoning tokens pro- duced by Q-32B and DS-R1 over other models in Figure 6 using a logarithmic scale. However, | https://arxiv.org/abs/2505.15712v1 |
de- spite using far fewer reasoning tokens than Q-32B, G4.1 achieves approximately equal accuracy, ex- hibiting superior reasoning efficiency under a lim- ited token budget. This could further corroborate with the conjecture that intentional exploration of the answer space is more decisive to model perfor- mance than extensive output of reasoning tokens. Full context benefits large models but hurts smaller ones. Including the complete context in the evaluation prompt has contrasting effects depending on the size of the model (Figure 7). Large models such as G4.1 and DS-R1 exhibit notable accuracy improvements of approximately 15% compared to their basic prompt performances. Conversely, small and medium-sized models, such as L3.1-70B and L3.1-8B, suffer performance de- clines. This could suggest that smaller models, lim- ited by their parameter size, not only under-utilize additional contextual information but are also “con- fused” by the influx of supplementary data. Model performance deteriorates with increas- ing reasoning steps, but not with larger answer spaces. There is a negative correlation between average accuracy within a model architecture fam- ily and the number of reasoning steps (Figure 5a). As the number of reasoning steps increases, perfor- mance gradually declines, signaling that questions requiring more logical connections tend to be more difficult. This supports the validity of using anno- tated reasoning chains as an indicator of difficulty. In contrast, the size of the answer space does not appear to impact model accuracy (Figure 5c). By categorizing answer spaces into seven bins with approximately equal numbers of data points, we ob- serve consistent model performance across all bins. Further analysis reveals that reasoning models tend 7 Rayfa Padma Khura'in E25 (out of 30) At around 2PM, Shah'do was surprised by a firecracker, ran into the courtyard, spooked Nayna and stole her hat. Ahlbi’s Statement I had been gazing upon the courtyard since about 2, but father was the only one I saw. T3 (out of 6) Contradiction Witness claims she, while gazing, saw only her father Assumption 1 [ T3] Evidence suggests that Shah’do spooked Nayna in the courtyard Assumption 2 [ E25]Nayna is in the courtyard at 2 Deduction 2 Nayna can’t be in the courtyard if her father is the only one she saw Contradiction Reasoning Chain (Labeled in Turnabout-LLM) No one other than her father is in the courtyard at 2 Deduction 1 Reasoning (DeepSeek-R1) Reasoning (GPT-4.1) Testimony 3 : that she was gazing at the courtyard since about 2 PM and saw only her father . Compare this with the Channeling Photo (Evidence 24), which shows a picture taken at around 2 PM of the channeling rite. If the channeling rite (where Rayfa should be present as the princess/priestess) was taking place at 2 PM, she could not have been gazing upon the courtyard at that precise time . This is a clear contradiction… Correct Incorrect Ans: ( T3, E25) Ans: ( T3, E24)(...iterate over all evidences...) Let me check again. The answer should be one pair. Let me verify: Ahlbi's Statement says that around 2 PM, Shah'do (the dog) ran into the courtyard , spooked Nayna … | https://arxiv.org/abs/2505.15712v1 |
But her testimony says she only saw her father . Therefore, Evidence 25 contradicts Testimony 3 because Ahlbi's account suggests Nayna was in the courtyard around 2 PM, which Rayfa did not report seeing. Validation 1418 Reasoning Tokens 111 Reasoning Tokens Figure 8: A qualitative comparison between DeepSeek-R1 and GPT-4.1’s reasoning on answering the 2nd turn of AA6-5-4. GPT-4.1 failed by jumping straight into conclusion, while DS-R1 carefully examines all evidences and testimonies, producing over 1.4K reasoning tokens as well as the correct answer. to use many reasoning tokens to exhaustively enu- merate possible testimony-evidence pairs without engaging in deeper reasoning. CoT prompting does not enhance model per- formance. We notice minimal benefits of CoT prompting on reasoning performance (see Figure 4). For all 5 models except the smallest L3.1-8b, this prompting method either results in no improve- ment or minor performance decreases. The mod- els’ reasoning traces reveal that CoT prompting delays the time the model first reaches its final conclusion and allows it to “think” more. How- ever, the extended thinking often hinges on a single evidence-testimony pair, failing to conduct an ex- tensive search in the answer space. This appears to imply that CoT prompting is ineffective in solving deductive reasoning tasks with extensive answer spaces and large context sizes. Models benefit from longer explorations of the answer space. Models can effectively extend ex- plorations of the answer space to boost their ac- curacy, as is shown by the qualitative example in Figure 8. In the example, we observe distinct behaviors in G4.1 and DS-R1’s reasoning traces. G4.1, generating only 111 tokens, merely considers one possible evidence before finalizing on a wrong answer. In contrast, DS-R1, generating 1,418 to- kens, explores multiple evidences before narrowing down to 3 most likely candidates and arriving at the correct answer. We conjecture that when in a large answer space, successful deductive reasoning is grounded in extensive, trial-and-error search and does not have a cognitive shortcut.Different models excel at different reasoning types and scale with increasing parameter size. Different models have particular strengths and weaknesses depending on the type of reasoning re- quired (Figure 5b). Models generally perform best on numerical tasks involving counting and compar- ison, whereas most exhibit their lowest scores on temporal or causal reasoning. Furthermore, model performance tends to improve as the parameter size increases (Figure 4), with the notable exception of Q-32B, which outperforms all larger models except the 671B DS-R1. The positive correlation between parameter size and model accuracy could imply that larger models may possess inherently stronger deductive reasoning capabilities. 6 Conclusion We introduce TURNABOUT LLM , the first bench- mark that embeds symbolic-logic puzzles inside narrative-rich, super-long contexts drawn from de- tective visual novels. By performing an extensive empirical study across twelve contemporary LLMs, we show that TURNABOUT LLM is challenging and poses a fair ground to evaluate LLMs’ reason- ing abilities. We release the dataset, annotation toolkit, and evaluation code to spur research on (i) scalable long-context reasoning, (ii) controllable chain-of-thought generation, and (iii) unified met- rics for symbolic-narrative tasks. We hope TURN- ABOUT | https://arxiv.org/abs/2505.15712v1 |
LLM will serve as a stepping-stone toward LLMs that can navigate the messy, open-world logic of real human discourse. 8 7 Limitation Despite its breadth, TURNABOUT LLM still faces several constraints. First, its detective-courtroom focus targets contradiction spotting, leaving other deductive settings—such as scientific discovery or regulatory compliance—largely untested. Sec- ond, because the narratives originate from Japanese visual novels, they may encode culture-specific norms and idioms that bias evaluation toward mod- els already familiar with such text. Third, although we supply descriptive captions for in-game images, true multimodal reasoning is only approximated, not fully exercised. Fourth, the dataset’s manu- ally crafted reasoning chains ( ≈100annotator- hours) introduce subjectivity and hamper scal- ability, though future releases will report inter- annotator agreement and provide semi-automated validation tools. Fifth, while the raw scripts are publicly available, their copyright status could change; We are committed to honoring any take- down requests from the rights holders. Finally, eval- uation with 100K-token prompts imposes a heavy computational footprint, and researchers with lim- ited resources may need chunk-wise retrieval strate- gies that we have not yet benchmarked. Acknowl- edging these limitations helps define the bench- mark’s current scope and highlights directions for future expansion. Acknowledgment We thank Sesh Sadasivam for the initial ideation of this work. We thank Manvi Kaul for the initial efforts of modeling. We thank Bowen Jiang for her wonderful comments on and edits to the writing of this paper. We thank Shu Takumi, Kazutaka Kodaka, and their teams for the marvelous gift to the Ace Attorney and Danganronpa community that makes this work possible. References Michael K. Chen, Xikun Zhang, and Dacheng Tao. 2025. Justlogic: A comprehensive benchmark for evaluat- ing deductive reasoning in large language models. Preprint , arXiv:2501.14851. Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. Preprint , arXiv:2002.05867. Maksym Del and Mark Fishel. 2023a. True detective: A deep abductive reasoning benchmark undoable for GPT-3 and challenging for GPT-4. In Proceedingsof the 12th Joint Conference on Lexical and Com- putational Semantics (*SEM 2023) , pages 314–322, Toronto, Canada. Association for Computational Lin- guistics. Maksym Del and Mark Fishel. 2023b. True detec- tive: A deep abductive reasoning benchmark un- doable for gpt-3 and challenging for gpt-4. Preprint , arXiv:2212.10114. Zhouhong Gu, Lin Zhang, Xiaoxuan Zhu, Jiangjie Chen, Wenhao Huang, Yikai Zhang, Shusen Wang, Zheyu Ye, Yan Gao, Hongwei Feng, and Yanghua Xiao. 2024. Detectbench: Can large language model de- tect and piece together implicit evidence? Preprint , arXiv:2406.12641. Kshitij Gupta. 2025. Whodunit: Evaluation benchmark for culprit detection in mystery stories. Preprint , arXiv:2502.07747. Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhent- ing Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caim- ing Xiong, Rex Ying, Arman Cohan, and Dragomir Radev. 2024. 
Folio: Natural language reasoning with first-order logic. Preprint , | https://arxiv.org/abs/2505.15712v1 |
arXiv:2209.00840. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language under- standing. Preprint , arXiv:2009.03300. Bill Yuchen Lin, Ronan Le Bras, Kyle Richardson, Ashish Sabharwal, Radha Poovendran, Peter Clark, and Yejin Choi. 2025. Zebralogic: On the scal- ing limits of llms for logical reasoning. Preprint , arXiv:2502.01100. Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A chal- lenge dataset for machine reading comprehension with logical reasoning. Preprint , arXiv:2007.08124. Man Luo, Shrinidhi Kumbhar, Ming shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, and Chitta Baral. 2024. Towards logiglue: A brief survey and a benchmark for analyzing logical rea- soning capabilities of language models. Preprint , arXiv:2310.00836. Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, and Chitta Baral. 2024. Logicbench: Towards systematic evaluation of logical reasoning ability of large language models. Preprint , arXiv:2404.15522. Nisarg Patel, Mohith Kulkarni, Mihir Parmar, Aashna Budhiraja, Mutsumi Nakamura, Neeraj Varshney, and 9 Chitta Baral. 2024. Multi-logieval: Towards eval- uating multi-step logical reasoning ability of large language models. Preprint , arXiv:2406.17169. Abulhair Saparov and He He. 2023. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. Preprint , arXiv:2210.01240. Zayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, and Greg Durrett. 2024. Musr: Testing the limits of chain-of-thought with multistep soft reasoning. Preprint , arXiv:2310.16049. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Par- rish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, An- drew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabas- sum, Arul Menezes, Arun Kirubarajan, Asher Mul- lokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka¸ s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Cather- ine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian V oigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. 
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Free- man, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Do- han, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, El- lie Pavlick, Emanuele Rodola, Emma | https://arxiv.org/abs/2505.15712v1 |
Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice En- gefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Ger- mán Kruszewski, Giambattista Parascandolo, Gior- gio Mariani, Gloria Wang, Gonzalo Jaimovitch- López, Gregor Betz, Guy Gur-Ari, Hana Galijase-vic, Hannah Kim, Hannah Rashkin, Hannaneh Ha- jishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jae- hoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Koco ´n, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Ji- aming Song, Jillian Tang, Joan Waweru, John Bur- den, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gim- pel, Kevin Omondi, Kory Mathewson, Kristen Chi- afullo, Ksenia Shkaruta, Kumar Shridhar, Kyle Mc- Donell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras- Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem ¸ Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schu- bert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Co- hen, Michael Gu, Michael Ivanitskiy, Michael Star- ritt, Michael Strube, Michał Sw˛ edrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhut- dinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Moham- mad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. 
Bow- 10 man, Samuel S. Schoenholz, Sanghyun Han, | https://arxiv.org/abs/2505.15712v1 |
San- jeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixi- ang Shane Gu, Shubh Pachchigar, Shubham Tosh- niwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas De- haene, Stefan Divic, Stefano Ermon, Stella Bider- man, Stephanie Lin, Stephen Prasad, Steven T. Pi- antadosi, Stuart M. Shieber, Summer Misherghi, Svet- lana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Ger- stenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmaku- mar, Vivek Srikumar, William Fedus, William Saun- ders, William Zhang, Wout V ossen, Xiang Ren, Xi- aoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrap- olating the capabilities of language models. Preprint , arXiv:2206.04615. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Se- bastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V . Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. Preprint , arXiv:2210.09261. Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark. 2021. Proofwriter: Generating implications, proofs, and abductive statements over natural lan- guage. Preprint , arXiv:2012.13048. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2020. Superglue: A stickier benchmark for general-purpose language understand- ing systems. Preprint , arXiv:1905.00537. Zhe Xu, Jiasheng Ye, Xiaoran Liu, Xiangyang Liu, Tianxiang Sun, Zhigeng Liu, Qipeng Guo, Linlin Li, Qun Liu, Xuanjing Huang, and Xipeng Qiu. 2025. Detectiveqa: Evaluating long-context reasoning on detective novels. Preprint , arXiv:2409.02465. Weihao Yu, Zihang Jiang, Yanfei Dong, and Ji- ashi Feng. 2020. Reclor: A reading comprehen- sion dataset requiring logical reasoning. Preprint , arXiv:2002.04326. Runcong Zhao, Qinglin Zhu, Hainiu Xu, Jiazheng Li, Yuxiang Zhou, Yulan He, and Lin Gui. 2024. Largelanguage models fall short: Understanding complex relationships in detective narratives. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 7618–7638, Bangkok, Thailand. Associ- ation for Computational Linguistics. Andrew Zhu, Liam Dugan, Alyssa Hwang, and Chris Callison-Burch. 2023. Kani: A lightweight and highly hackable framework for building language model applications. In Proceedings of the 3rd Work- shop for Natural Language Processing Open Source Software (NLP-OSS 2023) , pages 65–77, Singapore. Association for Computational Linguistics. A License and Intended Use The data utilized in this research is sourced from fandom.com. 
As stipulated by fandom.com, their resources are made available under the Creative Commons Attribution-Share Alike License 3.0 (Un- ported) (CC BY-SA). This license permits | https://arxiv.org/abs/2505.15712v1 |
the shar- ing and adaptation of the material, provided that ap- propriate attribution is given to the original source, a link to the license is provided, and that if the ma- terial is remixed, transformed, or built upon, the contributions are distributed under the same or a compatible license. Our intended use of this data is strictly for academic research and analysis within this paper, fully adhering to the terms and condi- tions set forth by the CC BY-SA license. B Annotator demographics Five annotators contribute to authoring and veri- fying each data point’s reasoning types, reasoning steps, and evidence and context span. All are U.S.- based university students and avid Ace Attorney and Danganropa players, thus ideally suited to ex- amine each case data’s key attributes. C Additional Data Examples and Statistics Figure 9 and 10 present two highly challenging ex- amples from TURNABOUT LLM . Figure 11 shows additional performance breakdown of models that are not included in the main section. 11 Finally, one woman came . She stood front of me. We talked... Then she left. I parked. She wasn't there... So, I waited on bridge .T1 T2 T3 [caption] …the victim is standing to the right , facing left. The prisoner is standing to the left .Witness’ Photo E2 From the south, the person to the right is to the east.Proposition 1 One person cannot come first and second Contradiction … … …Contradiction Reasoning Chain (Labeled) Fawles Escaped prisoner confronted by policewoman The photo was taken to the south of the bridge. Assumption 1 [ C1] Melissa: Umm... When I took the photo, I was standing right over... here. [caption] I was standing to the south of the bridge .Victim: east Derived Fact 1 Victim: first Derived Fact 4 If a person is at the broken end of the bridge, they arrived first.Proposition 2 Spatial, temporal Prisoner: first Victim: second Assumption 6 [ T1,T2 ] Explanation : The person that came first would be the one at the broken end of the bridge, which was the victim’s end, because… Melissa “An innocent witness” C1Spatial East end of bridge is broken Assumption 3 [ E1]Bridge’s MapE1 Benign Benign Benign Explanation : N/A [caption] The bridge is broken off on the east end , and connected to the parking lot on the west end. E1 E2 Victim: right Prisoner: left Assumption 2 [ E2]Figure 9: A highly challenging data point from T URNABOUT LLM involving spatial and temporal reasoning. 12 One kind of chip is worth 100 points, other kind is worth 1,000 . One who was winning was the victim !The game began with 3,500 point in chips for each man. T1 T2 T3 Defendant & victim's chips when crime took place. [caption] On the near side, there are 4 red, small chips and 1 gray, big chip . On the far side, there are 2 red, small chips and 9 gray, big chips . On the near side, the cards are 7 heart, 7 diamond, 7 club, Ace spade, and Ace diamond. On the far side, the cards | https://arxiv.org/abs/2505.15712v1 |
are King heart, King diamond, King spade, Ace club, Ace heart. Chip Photo E2 Near side: 4 small + 1 big Assumption 2 [ E2]The total value is the sum of two sides .Proposition 1 Two people cannot both be winning Contradiction … … …Contradiction Reasoning Chain (Labeled) Olga Orly Dard dealer at casino, place of murder Far side: 2 small + 9 big Assumption 1 [ E2]… The hand and chips on near side belong to the defendant , Mr. Wright. Those on the far side belonged to the victim , Mr. Smith … Total: 7,000 Assumption 4 [ T1] Total: 6 small + 10 big Derived Fact 1 Small: 1,000 Big: 100 Derived Fact 2 Value of each side is sum of small and big Proposition 3 Far: 2,900 Near: 4,100 Derived Fact 3 The person on the side with greater score is winning Proposition 4 Defendant is winning Derived Fact 4 If 6*1,000+10* 1000=7000 , small is worth more than big Proposition 2 Numerical Victim is winning Assumption 6 [ T3] Explanation : N/A Explanation : Victim is actually losing, not winning, because… Explanation : N/A Payne Prosecutor C1Numerical Numerical Chips worth 100 and 1,000 Assumption 3 [ T2]Near side: defendant Far side: victim Assumption 5 [ C1]Spatial Deadly Bottle E1 Benign Benign Benign Figure 10: A highly challenging data point from TURNABOUT LLM involving numerical and spatial reasoning, even with a touch of abductive reasoning. 13 2 3 4 5 60255075100 # Reasoning stepsAccuracy (%)DS-R1-8 DS-R1-32 DS-R1-70 G-4.1-M L-3.1-8 L-3.1-70 (a) Accuracy decreases as the number of reasoning steps grows. Due to scarcity, we omit problems that need >6steps.Spa. Tem. Cau. Beh. Num. Phy. Spe. Reasoning kindDS-R1-8 DS-R1-32 DS-R1-70 G-4.1-M L-3.1-8 L-3.1-70 (b) Accuracy with respect to the reasoning types. While performance vary a lot across models, tem- poral reasoning is usually the weakest.<60<85<105<126<150<180≥180 Answer space: |T| × |E|DS-R1-8 DS-R1-32 DS-R1-70 G-4.1-M L-3.1-8 L-3.1-70 (c) Accuracy with respect to size of answer space. Results does not show strong negative correlation. Figure 11: Model accuracies plotted against the number of reasoning steps, required reasoning types, and size of answer space. Additional experiments not covered in the main body text are presented here. 14 | https://arxiv.org/abs/2505.15712v1 |
arXiv:2505.15715v1 [cs.CL] 21 May 2025Beyond Empathy: Integrating Diagnostic and Therapeutic Reasoning with Large Language Models for Mental Health Counseling He Hu1,2*, Yucheng Zhou3*, Juzheng Si4, Qianning Wang5, Hengheng Zhang2 Fuji Ren6, Fei Ma2†, Laizhong Cui1,2† 1College of Computer Science and Software Engineering, Shenzhen University 2Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) 3SKL-IOTSC, CIS, University of Macau,4Shandong University 5Auckland University of Technology,6University of Electronic Science and Technology of China huhe@gml.ac.cn ,yucheng.zhou@connect.um.edu.mo ,mafei@gml.ac.cn ,cuilz@szu.edu.cn Abstract Large language models (LLMs) hold signifi- cant potential for mental health support, ca- pable of generating empathetic responses and simulating therapeutic conversations. However, existing LLM-based approaches often lack the clinical grounding necessary for real-world psy- chological counseling, particularly in explicit diagnostic reasoning aligned with standards like the DSM/ICD and incorporating diverse therapeutic modalities beyond basic empathy or single strategies. To address these criti- cal limitations, we propose PsyLLM, the first large language model designed to systemati- cally integrate both diagnostic and therapeutic reasoning for mental health counseling. To develop the PsyLLM, we propose a novel au- tomated data synthesis pipeline. This pipeline processes real-world mental health posts, gen- erates multi-turn dialogue structures, and lever- ages LLMs guided by international diagnostic standards (e.g., DSM/ICD) and multiple thera- peutic frameworks (e.g., CBT, ACT, psychody- namic) to simulate detailed clinical reasoning processes. Rigorous multi-dimensional filter- ing ensures the generation of high-quality, clin- ically aligned dialogue data. In addition, we introduce a new benchmark and evaluation pro- tocol, assessing counseling quality across four key dimensions: comprehensiveness, profes- sionalism, authenticity, and safety. Our experi- ments demonstrate that PsyLLM significantly outperforms state-of-the-art baseline models on this benchmark. 1 Introduction Large language models (LLMs) have shown promising capabilities in mental health support tasks, such as generating empathetic responses and simulating therapeutic conversations (Hegde and Jayalath, 2025; Wang et al., 2024). Recent stud- ies have explored incorporating emotion recogni- *Equal Contribution. †Corresponding Authors. Figure 1: PsyLLM simulates therapeutic reasoning by assess- ing emotions, analyzing cognitive patterns, and formulating strategies grounded in DSM/ICD criteria and diverse modali- ties (e.g., CBT, ACT, psychodynamic). This enables clinically informed, context-sensitive counseling responses. tion and elements of cognitive behavioral therapy (CBT) to improve interpretability and control (Xu et al., 2025; Zhang et al., 2024a). However, these systems often lack grounded reasoning and fail to reflect the diagnostic rigor required in real-world psychological counseling (Na et al., 2025). A clinically grounded counseling system must account for both diagnostic rigor and therapeutic di- versity. However, current LLM-based approaches fall short in the following two critical aspects: (1) Lack of deep reasoning on explicit diagnosis. 
In clinical practice, mental health professionals rely on international diagnostic standards such as the Diagnostic and Statistical Manual of Mental Dis- orders (DSM) and the International Classification of Diseases (ICD) to assess symptom patterns, de- termine diagnoses, and guide treatment planning (Regier et al., 2013; Harrison et al., 2021). These standards ensure consistency, explainability, and le- gitimacy, especially in complex or comorbid cases. However, current counseling LLMs do not reason explicitly according to such frameworks, limiting their reliability in clinical decision support or realis- 1 Dataset Turn Type | https://arxiv.org/abs/2505.15715v1 |
Reasoning? Therapy? Diagnostic? Dialogues SweetieChat (Ye et al., 2025) Multi-turn ✗ ✗ ✗ 3,757 ESD (Zhang et al., 2024b) Multi-turn ✗ ✗ ✗ 2,900 CPsyCounD (Zhang et al., 2024a) Multi-turn ✗ ✓ ✗ 3,134 ChatCounselor (Liu et al., 2023a) Single-turn ✗ ✗ ✗ 8,187 OpenR1-Psy (Ours) Single + Multi-turn ✓ ✓ ✓ 19,302 Table 1: Comparison of OpenR1-Psy with existing psychological dialogue datasets tic therapy simulation. (2) Neglect of therapeutic diversity. Not all patients benefit from the same therapeutic approach (Gryesten et al., 2024; Cas- tonguay et al., 2015; Beutler et al., 2006). Depend- ing on the individual’s context, symptoms, and pref- erences, therapists may adopt different modalities such as cognitive behavioral therapy (CBT), accep- tance and commitment therapy (ACT), psychody- namic therapy, or humanistic approaches (Rowan, 2006; Beck, 1979; Hayes and Pierson, 2005). This diversity is essential for effective care, but current models rarely incorporate multiple therapeutic per- spectives or adapt their strategies accordingly. To address existing limitations, we propose the first large language model that systematically in- tegrates both diagnostic and therapeutic reason- ing for mental health counseling, as shown in Fig. 1. Specifically, the model is capable of au- tonomously applying diagnostic criteria and vari- ous psychotherapeutic frameworks during counsel- ing interactions. As shown in Table 1, we intro- duce a novel data synthesis pipeline that: (1) au- tomatically parses real-world mental health posts; (2) generates simplified multi-turn diagnostic di- alogues; (3) guides LLMs to simulate the rea- soning processes of mental health professionals, incorporating step-by-step thinking aligned with DSM/ICD diagnostic logic and therapeutic con- siderations drawn from multiple evidence-based treatments; and (4) applies multi-dimensional fil- tering—including Incomplete Thinking, Context Incoherence, Mismatch Response, and No Frame- work—to ensure data quality. The resulting high- quality filtered dataset is used to fine-tune the LLM, enabling it to deliver counseling responses that are not only accurate and therapeutically grounded but also contextually appropriate and aligned with clin- ical reasoning processes. In our experiments, we construct a benchmark to evaluate the quality of counseling responses along four key dimensions: comprehensiveness, profes- sionalism, authenticity, and safety. Furthermore, we analyze the training data used for counseling responses, revealing that while increased modelscale improves performance, the importance of data quantity is highly dependent on its quality. Our main contributions are as follows: •We propose PsyLLM, the first LLM that system- atically integrates both diagnostic and therapeutic reasoning for mental health counseling. •We develop an automated data synthesis pipeline that generates high-quality, multi-turn dialogues reflecting diagnostic and therapeutic reasoning. •We incorporate international diagnostic standards (e.g., DSM/ICD) and diverse therapeutic strate- gies (e.g., CBT, ACT, psychodynamic) to guide our model’s deep reasoning. •We introduce a new benchmark and evaluation protocol tailored for counseling quality, and con- duct a comprehensive analysis of model perfor- mance and data characteristics. 2 Related Work 2.1 Mental Support Chatbots The increasing demand for accessible mental health services has spurred the development of AI-driven dialogue systems offering emotional support (Ge et al., 2025). 
Early efforts, such as ChatCoun- selor (Liu et al., 2023b) and PsyQA (Sun et al., 2021), focused on single-turn, empathy-oriented responses, | https://arxiv.org/abs/2505.15715v1 |
lacking therapeutic modeling or con- versational depth. Subsequent work addressed multi-turn interaction: SMILE (Qiu et al., 2024a) used ChatGPT to expand single replies into coher- ent dialogues, while SoulChat (Chen et al., 2023) fine-tuned LLMs for better emotional adaptation across turns. SweetieChat (Ye et al., 2025) intro- duced strategy-guided role-play to simulate diverse emotional scenarios, and CPsyCoun (Zhang et al., 2024a) annotated dialogues with therapeutic types (e.g., CBT, humanistic), though it did not model underlying reasoning. To improve interpretability, ESCoT (Zhang et al., 2024b) proposed a strategy- driven chain-of-thought process—emotion recogni- tion, appraisal, and justification—mirroring human counseling. MentaLLaMA (Yang et al., 2024) com- bined psychological insight with clinical condition 2 prediction and rationales in social media settings. At the clinical edge, PsyGUARD (Qiu et al., 2024b) addressed suicide risk detection and safety-aware response generation in multi-turn counseling. 2.2 Deep Reasoning in LLMs Chain-of-Thought (CoT) prompting (Wei et al., 2022) revealed LLMs’ capacity for multi-step in- ference, significantly improving symbolic reason- ing. This has since extended to decision-making, multimodal understanding, and emotional infer- ence. Dual-system models (Dou et al., 2025) and modality-aware reinforcement learning (Zhao et al., 2025) enhance adaptability and emotion recogni- tion. In math reasoning, (Wang et al., 2025) aligns vision and code for structured geometry tasks, out- performing GPT-4o. Dialogue reasoning intro- duces challenges in coherence and emotional depth; (Shu et al., 2025) reframes CoT as agent dialogue, while (Xie et al., 2025) fuses CoT with emotional cues from speech. Psychological reasoning is sup- ported by CoT-modeled questionnaires (Yang et al., 2023) and graph-based therapist–AI collaboration tools (Chen et al., 2025). The full version of the related work can be found in Appendix A. 3 OpenR1-Psy Dataset Construction The construction of the OpenR1-Psy dataset fol- lows a systematic pipeline (Fig. 2), integrating data collection with LLM-powered parsing, therapeutic reasoning, and rigorous validation. We will detail each key component of dataset creation process. 3.1 Data Collection To ensure both clinical realism and contextual di- versity, our dataset sources are primarily divided into two parts: (1) The Reddit posts we used were pre-filtered and sourced from three publicly avail- able corpora in mental health-related communities: Identifying-depression (Pirina and Çöltekin, 2018), Dreaddit (Turcan and McKeown, 2019), and LRF (Garg et al., 2023). (2) Authentic patient utter- ances are derived from two high-quality psycholog- ical counseling datasets, ChatCounselor (Liu et al., 2023b) and CPsyCoun (Zhang et al., 2024a), which are based on real-world clinical cases and authentic patient-doctor dialogues, and refined as multi-turn dialogues between patients and LLM counselors by ChatGPT (OpenAI, 2023). Rather than relying on isolated sentences, the post data we used is long-form posts that providerich background information, including the poster’s personal context, psychological state, and social en- vironment. These posts span diverse life situations, such as family dynamics, friendship challenges, interpersonal conflict, psychiatric symptoms, and financial stressors. In total, we collected 10,097 posts from these sources. After removing dupli- cates based on exact textual matching, we retained 8,752 unique posts. 
3.2 Data Parsing and Interaction Planning
To convert raw Reddit posts into structured inputs for guiding dialogue generation, we implement
a parsing and planning process inspired by psychological counseling techniques. This process uses a large language model (LLM), guided by a framework that simulates a counselor's initial assessment and planning upon receiving a client's self-report. For each Reddit post, the LLM performs the following analyses: (1) Emotion Assessment: identify the user's primary emotions, their intensity, and any emotional nuances. This step parallels how a therapist assesses affect; the output captures the key emotional states expressed. (2) Round Setting: estimate the number of dialogue turns (1 to 3) based on the complexity of the issue. The dialogue progresses from surface-level emotions to deeper needs or vulnerabilities, reflecting typical therapeutic stages. (3) Theme Definition: define the therapeutic focus for each round. These themes guide the conversation's goals, ensuring each round builds on the previous one.

Figure 2: Overview of the OpenR1-Psy dataset construction pipeline. The process includes five stages: (1) Data collection from Reddit and real counseling datasets. (2) Parsing and interaction planning using a language model to assess emotions, define dialogue rounds, and set therapeutic themes. (3) Extraction of patient utterances from simulated dialogues and real counseling data. (4) Generation of reasoning traces and counselor responses based on diagnostic standards and therapeutic strategies. (5) Multi-dimensional validation to ensure coherence, clinical relevance, and reasoning quality.

3.3 Question Extraction from Dialogue
Following the initial parsing and interaction planning, we prepare inputs for the deep reasoning process by creating a collection of patient utterances that will serve as prompts for counselor response generation. Let $\mathcal{U}_P$ denote this collection of patient utterances. It is built through two mechanisms: (1) For the processed Reddit posts, we simulate a simplified initial empathetic multi-turn exchange using an LLM. Based on the planned dialogue structure (incorporating emotion assessment $E$, round setting $N$, and theme definition $Th$), the LLM first generates a hypothetical empathetic counselor response to the post, followed by a plausible patient utterance in reaction to this turn. This patient utterance, representing a question or a continuation of the user's expression, is extracted as a prompt for the subsequent deep reasoning step. This effectively transforms static post data $P_{raw}$ into dynamic dialogue starters, contributing to $\mathcal{U}_P$. (2) We incorporate data from existing high-quality psychological counseling datasets, specifically ChatCounselor (Liu et al., 2023b) and CPsyCoun (Zhang et al., 2024a). These datasets $D_{exist}$ contain multi-turn patient-counselor dialogues. We extract only the patient utterances from these datasets, which also contribute to $\mathcal{U}_P$. The collection of patient turns $\mathcal{U}_P$ from both sources forms the input pool for the subsequent generation step.
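The first mechanism above can be sketched as follows: given a post and its plan from Section 3.2 (emotion assessment $E$, round setting $N$, themes $Th$), an LLM writes a hypothetical counselor turn and then a plausible patient reaction, which becomes a prompt in $\mathcal{U}_P$. The prompt wording, the plan field names, and the `call_llm` wrapper are all hypothetical:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns the raw completion."""
    raise NotImplementedError

def extract_patient_prompt(post: str, plan: dict) -> str:
    """Turn a static post into a dialogue starter for U_P (mechanism 1)."""
    prompt = (
        f"Client post: {post}\n"
        f"Emotion assessment (E): {plan['emotions']}\n"
        f"Planned rounds (N): {plan['rounds']}\n"
        f"Round themes (Th): {plan['themes']}\n\n"
        "Write one empathetic counselor reply to the post, then a plausible "
        "patient utterance reacting to it. Label the lines 'Counselor:' and "
        "'Patient:'."
    )
    out = call_llm(prompt)
    # Keep only the simulated patient reaction as the deep-reasoning prompt.
    return out.split("Patient:")[-1].strip()
```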
This approach is central to our methodology: instead of utilizing pre-existing counselor responses, the collected patient utterances serve as prompts that guide the LLM to first
generate a deep reasoning trace, which is then used to construct the final counselor response.

3.4 Response Generation with Diagnostic Standard & Therapeutic Strategy
Given a patient utterance $U_P \in \mathcal{U}_P$ (and potentially preceding dialogue context $C$), our system generates a clinically grounded counselor response $U_C$. Unlike methods that directly map $U_P$ to $U_C$ and might offer post-hoc explanations, our approach explicitly models the internal reasoning process concurrently with generating the response. The LLM produces outputs that include both the reasoning trace $R$ and the counselor response $U_C$ in a single generation step, systematically guided by key principles of clinical practice: international diagnostic standards and diverse therapeutic strategies.

The integration of these elements is crucial for effective and responsible mental health support. International diagnostic standards, such as the Diagnostic and Statistical Manual of Mental Disorders (DSM) (Regier et al., 2013) and the International Classification of Diseases (ICD) (Harrison et al., 2021), provide a structured framework for understanding symptom patterns, recognizing potential underlying conditions, and ensuring a degree of diagnostic rigor that is fundamental in clinical assessment. Diverse therapeutic strategies, including but not limited to Cognitive Behavioral Therapy (CBT) (Beck, 1979), Acceptance and Commitment Therapy (ACT) (Hayes and Pierson, 2005), and psychodynamic approaches (Rowan, 2006), offer varied theoretical lenses and practical techniques to address the patient's issues in a personalized and theoretically informed manner. Relying on a single strategy can limit applicability, as different patients respond better to different modalities.

Our system leverages a deep reasoning LLM to simulate this multi-faceted clinical thought process. Given $U_P$ and context $C$, the LLM generates a structured output containing both the reasoning trace $R$ and the counselor utterance $U_C$ by considering relevant diagnostic perspectives (informed by $D$, a representation of diagnostic knowledge) and evaluating applicable therapeutic strategies (informed by $T$, a representation of therapeutic knowledge). This combined generation process can be conceptually represented as:

$$(R, U_C) = \mathrm{LLM}_{\mathrm{DeepReasoning}}(U_P, C; D, T) \tag{1}$$

Here, $D$ and $T$ act as guiding frameworks that the LLM utilizes during the generation process. The model is specifically trained to first articulate the simulated clinical assessment and therapeutic rationale in $R$, and then formulate the final counselor response $U_C$ conditioned on $U_P$, $C$, and the generated $R$ within this single output structure. This joint generation of $R$ and $U_C$ ensures that the generated counseling response $U_C$ is not merely empathetic or plausible but is also demonstrably grounded in the clinical reasoning captured by $R$.
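As a concrete illustration of Eq. (1), the sketch below prompts a model to emit the reasoning trace and the response in one pass and then splits them apart. The tag format and prompt text are assumptions; the paper specifies only that $R$ and $U_C$ are produced jointly, with $R$ first, guided by diagnostic ($D$) and therapeutic ($T$) knowledge:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API, as in the earlier sketch."""
    raise NotImplementedError

def generate_turn(u_p: str, context: str, d: str, t: str) -> tuple[str, str]:
    """Jointly generate (R, U_C) for one patient utterance, per Eq. (1)."""
    prompt = (
        f"Diagnostic standards (D): {d}\n"
        f"Therapeutic strategies (T): {t}\n"
        f"Dialogue context (C): {context}\n"
        f"Patient (U_P): {u_p}\n\n"
        "First write your clinical reasoning inside <think>...</think>, "
        "then write the counselor response inside <answer>...</answer>."
    )
    out = call_llm(prompt)
    reasoning = out.split("<think>")[1].split("</think>")[0].strip()
    response = out.split("<answer>")[1].split("</answer>")[0].strip()
    return reasoning, response  # (R, U_C)
```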
The reasoning $R$ is explicitly informed by diagnostic standards $D$ and therapeutic strategies $T$, providing a level of interpretability and clinical legitimacy that is often lacking in end-to-end generative models. This aligns the LLM's behavior more closely with the structured thought process of a human mental health professional, where the therapeutic intervention (the response) is directly derived from and justified by the clinical assessment and theoretical formulation (the reasoning).

Figure 3: Analysis of OpenR1-Psy Dataset: (Left) Distribution of Psychotherapy Approaches; (Middle) Distribution of Scene Categories; (Right) Distribution of Severity Levels.

3.5 Systematical Dialogue Validation
After the generation process, synthesized dialogue turns are constructed by combining
patient utterances $U_P$ with their corresponding generated reasoning traces $R$ and counselor responses $U_C$. To ensure the clinical relevance, logical consistency, and overall quality of this data for fine-tuning PsyLLM, we implement a rigorous validation process. This validation employs a multi-dimensional filtering mechanism applied to each generated turn $(R, U_C)$ based on $U_P$ and context $C$. A sample is kept if it satisfies the following boolean condition $K$:

$$K = C_1 \land C_2 \land C_3 \land C_4 \tag{2}$$

where the boolean variables $C_i$ represent the evaluation along four key dimensions: $C_1$, Complete Thinking Detection, which assesses the completeness and logical flow of the reasoning trace $R$; $C_2$, Context Coherence Evaluation, which evaluates the consistency of $(R, U_C)$ with the preceding dialogue context $C$ and the user prompt $U_P$; $C_3$, Mismatch Response Checking, which verifies the alignment between the reasoning trace $R$ and the counselor response $U_C$; and $C_4$, Reasoning Framework Checking, which confirms whether $R$ adheres to the intended guidance from international diagnostic standards ($D$) and therapeutic strategies ($T$). Through this process, low-quality generated segments are filtered out, resulting in a high-quality dataset used for training. This systematic quality control is essential for building a reliable and clinically informed mental health counseling system.

Table 2: Statistics of the OpenR1-Psy Dataset (Train/Test).

| Category | Train | Test | All |
|---|---|---|---|
| # Dialogues | 18,852 | 450 | 19,302 |
| # Utterances | 47,723 | 1,651 | 49,374 |
| Avg. Turns per Dialogue | 2.53 | 3.67 | 2.56 |
| Avg. Patient Utterance Length | 224.18 | 171.84 | 222.43 |
| Avg. Counselor Utterance Length | 520.11 | 489.40 | 519.08 |
| Avg. Reasoning Utterance Length | 1592.33 | 1627.36 | 1593.50 |

3.6 Dataset Statistics and Analysis
We analyze the constructed OpenR1-Psy dataset, presenting its key statistics and characteristics in Table 2 and Fig. 3. Table 2 shows the dataset statistics, comprising 19,302 dialogues (49,374 utterances) with a train/test split. The dialogues are multi-turn (avg. 2.56 turns). Patient utterances average 222 tokens, counselor responses 519 tokens, and, notably, the reasoning trace averages 1,593 tokens, highlighting its detailed nature. Fig. 3 presents key dataset distributions. (Left) shows psychotherapy approaches, dominated by Integrative (54.50%), Humanistic (25.15%), and CBT (17.16%), while including other diverse modalities. (Middle) illustrates scene categories, covering a broad range of topics such as Emotion & Stress, Family Relationship, and Social Relationship. (Right) depicts severity levels: Moderate (48%) and Severe (41%) are most frequent, alongside Mild (10%) and Critical (1%), representing varied intensity.

3.7 PsyLLM Training
This section details the training process for PsyLLM, our large language model specialized for psychological counseling, leveraging the high-quality OpenR1-Psy dataset. The primary objective is to train the model to generate clinically informed counselor responses ($U_C$) grounded in explicit therapeutic reasoning ($R$), given a patient's utterance ($U_P$) and preceding dialogue context ($C$).

We employ Supervised Fine-Tuning (SFT) on the OpenR1-Psy dataset. Each training instance uses the input sequence $(C, U_P)$ and the target output sequence $(R, U_C)$, where the validated reasoning trace $R$ is concatenated before the corresponding counselor response $U_C$.
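A minimal sketch of how one SFT instance could be assembled under this scheme; the delimiter tokens and prompt layout are assumptions for illustration (the paper specifies only that $R$ is concatenated before $U_C$ in the target):

```python
def build_sft_example(context: str, u_p: str, r: str, u_c: str) -> dict:
    # Input: dialogue context plus the patient utterance, i.e., (C, U_P).
    prompt = f"{context}\nPatient: {u_p}\nCounselor:"
    # Target: reasoning trace R concatenated before the response U_C.
    # The <think>/<answer> delimiters are a hypothetical choice.
    target = f"<think>{r}</think>\n<answer>{u_c}</answer>"
    # The loss (Eq. 3 below) is computed over the target tokens only.
    return {"prompt": prompt, "target": target}
```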
This structured format is crucial; it trains the model to first simulate the internal clinical reasoning process (generating $R$) and then formulate the response ($U_C$) conditioned on
that reasoning and the input.

The training objective is to maximize the likelihood of generating the target sequence $(R, U_C)$ given $(C, U_P)$ by minimizing the cross-entropy loss over the target tokens. Formally, the model parameters $\theta$ are optimized by minimizing:

$$\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{|R_i U_{C,i}|} \log P_\theta(y_{i,t} \mid C_i, U_{P,i}, y_{i,<t}) \tag{3}$$

where $N$ is the number of training instances, $|R_i U_{C,i}|$ is the length of the concatenated target sequence for instance $i$, $y_{i,t}$ is the $t$-th token, and $y_{i,<t}$ are the preceding tokens in the target sequence. This approach of jointly generating $R$ and $U_C$, with $R$ preceding $U_C$, ensures that the generated response $U_C$ is explicitly conditioned on the simulated clinical rationale $R$. Unlike end-to-end models, this method enhances interpretability and clinical alignment by making the reasoning process traceable and integral to the response generation.

4 Experiments
4.1 Experimental Settings
Implementation Details. To evaluate the effectiveness of the proposed OpenR1-Psy dataset in enhancing the reasoning and counseling capabilities of large language models, we fine-tune Qwen3-8B (Yang et al., 2025) on the OpenR1-Psy dataset to obtain PsyLLM. The model is trained for 3 epochs using the AdamW optimizer with a learning rate of $1\times10^{-5}$, standard parameters ($\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=1\times10^{-8}$), and a weight decay of 0.01. A constant learning rate schedule was applied, with a batch size of 8. Training was performed on 8× NVIDIA A100 (40G) GPUs.

Evaluation Metrics. We employ a combination of automatic metrics and expert human review to rigorously assess the quality of the generated counselor responses and their accompanying reasoning traces. Our evaluation framework focuses on four key dimensions: comprehensiveness, assessing whether the response addresses the core issues; professionalism, evaluating adherence to ethical guidelines and therapeutic principles; authenticity, ensuring the response feels natural and empathetic; and safety, verifying the absence of harmful or inappropriate content. These dimensions are applied to each generated turn, which includes both the reasoning trace and the counselor utterance. We apply strict, reality-aligned criteria within these dimensions to ensure our evaluation reflects the standards expected in real-world therapeutic practice for individual responses.

4.2 Synthetic Data Comparison
We conducted a synthetic data comparison to evaluate the quality of data generated by different methods. We randomly selected 150 patient cases from ChatCounselor (Liu et al., 2023b) and 150 cases from CPsyCounD (Zhang et al., 2024a), comparing their original counselor responses with those generated by our OpenR1-Psy method for the same inputs. Gemini-2.0-Flash (Anil et al., 2023) evaluated these responses on the four metrics: Comprehensiveness, Professionalism, Authenticity, and Safety (detailed prompts in Fig. 10).

Table 3 presents the evaluation results. OpenR1-Psy achieved substantial improvements over ChatCounselor, with gains of 120% in Comprehensiveness, 96% in Professionalism, 86% in Authenticity, and 1% in Safety, resulting in a 57% higher normalized average. Compared to CPsyCounD, OpenR1-Psy showed notable gains of 71% (Comprehensiveness), 84% (Professionalism), 44% (Authenticity), and 2% (Safety), with a 41% higher normalized average. These results confirm that the OpenR1-Psy generation method produces significantly higher quality data across key evaluation dimensions.
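The "Normalized Avg" used in Tables 3-7 divides each metric by its maximum score (2, 4, 3, and 1 for Comprehensiveness, Professionalism, Authenticity, and Safety, per the Table 3 note) before averaging. A minimal sketch, assuming an unweighted mean, which reproduces the 0.900 reported for OpenR1-Psy in Table 3:

```python
MAX_SCORES = {
    "comprehensiveness": 2.0,
    "professionalism": 4.0,
    "authenticity": 3.0,
    "safety": 1.0,
}

def normalized_avg(scores: dict[str, float]) -> float:
    # Normalize each metric by its maximum, then average (assumed unweighted).
    return sum(scores[k] / MAX_SCORES[k] for k in MAX_SCORES) / len(MAX_SCORES)

# Example: the OpenR1-Psy row of Table 3.
print(round(normalized_avg({
    "comprehensiveness": 1.63,
    "professionalism": 3.59,
    "authenticity": 2.66,
    "safety": 1.00,
}), 3))  # 0.9, matching the reported 0.900
```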
4.3 Performance Comparison
To evaluate PsyLLM's effectiveness against state-of-the-art models, we conducted a turn-based dialogue evaluation on the OpenR1-Psy test set. We compared PsyLLM against baselines including ChatCounselor (Liu et al., 2023b), CPsyCounX (Zhang et al., 2024a), DeepSeek-V3 (DeepSeek-AI et al., 2024), and GPT-4o (OpenAI, 2023). Evaluation used the four key metrics: Comprehensiveness, Professionalism, Authenticity, and Safety (with max scores outlined in Table 3). This process was automated using Gemini-2.0-Flash, guided by detailed prompts (see Fig. 11 in the Appendix).

Table 3: Synthetic data comparison across four metrics and normalized averages based on metric max scores: 2, 4, 3, 1. OpenR1-Psy achieves the highest performance and substantial relative improvements.

| Method | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| ChatCounselor (Liu et al., 2023b) | 0.740 | 1.830 | 1.430 | 0.990 | 0.574 |
| OpenR1-Psy | 1.630 (+120%) | 3.590 (+96%) | 2.660 (+86%) | 1.000 (+1%) | 0.900 (+57%) |
| CPsyCounD (Zhang et al., 2024a) | 0.950 | 1.950 | 1.840 | 0.980 | 0.639 |
| OpenR1-Psy | 1.630 (+71%) | 3.590 (+84%) | 2.660 (+44%) | 1.000 (+2%) | 0.900 (+41%) |

Table 4: Performance comparison results, including scores on four metrics with varying maxima (2, 4, 3, 1). The "Normalized Avg" is computed after normalization. PsyLLM achieves the highest overall performance.

| Method | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| ChatCounselor (Liu et al., 2023b) | 0.015 | 0.826 | 0.332 | 0.469 | 0.202 |
| CPsyCounX (Zhang et al., 2024a) | 0.005 | 0.492 | 0.180 | 0.358 | 0.137 |
| DeepSeek-V3 (DeepSeek-AI et al., 2024) | 0.880 | 2.155 | 1.057 | 0.830 | 0.540 |
| GPT-4o (Hurst et al., 2024) | 0.833 | 2.218 | 1.038 | 0.903 | 0.552 |
| PsyLLM | 0.904 | 2.210 | 1.440 | 0.945 | 0.607 |

Table 5: Human evaluation results.

| Method | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| ChatCounselor (Liu et al., 2023b) | 0.280 | 0.950 | 0.630 | 0.870 | 0.364 |
| CPsyCounX (Zhang et al., 2024a) | 0.120 | 0.480 | 0.270 | 0.600 | 0.204 |
| DeepSeek-V3 (DeepSeek-AI et al., 2024) | 0.880 | 1.880 | 1.250 | 0.810 | 0.507 |
| GPT-4o (Hurst et al., 2024) | 0.850 | 2.170 | 1.270 | 0.920 | 0.541 |
| PsyLLM | 0.990 | 2.330 | 1.620 | 0.980 | 0.601 |

Figure 4: Impact of varying base model scale (Left) and training data volume (Right) on PsyLLM's counseling performance (more results in Table 9 and Table 10).

Table 4 shows the results. PsyLLM consistently demonstrates strong performance across all dimensions, achieving the highest scores in Comprehensiveness (0.904), Authenticity (1.440), and Safety (0.945). Although GPT-4o slightly edges it out in Professionalism (2.218 vs. 2.210), PsyLLM's score is very close. Overall, PsyLLM attains the highest Normalized Average score (0.607), surpassing all baselines, including GPT-4o (0.552) and DeepSeek-V3 (0.540). These findings indicate that fine-tuning on the OpenR1-Psy dataset effectively enhances a language model's capacity to generate clinically relevant and therapeutic counseling responses.

Human Evaluation. To validate the automatic evaluation, we conducted a human evaluation. Five psychology graduate students assessed a randomly selected subset of dialogue turns from the test set, rating responses from PsyLLM and the baseline models (ChatCounselor (Liu et al., 2023b), CPsyCounX (Zhang et al., 2024a), DeepSeek-V3 (DeepSeek-AI et al., 2024), and GPT-4o (Hurst et al., 2024)).
Responses were evaluated blindly based on the four dimensions: Comprehensiveness, Professionalism, Authenticity, and Safety. As Table 5 shows, human evaluation confirms that PsyLLM consistently outperforms all
baselines, achieving the highest scores across all four dimensions and the highest Normalized Average (0.601). This demonstrates PsyLLM's superior ability to generate human-preferred, clinically sound counseling responses.

4.4 Analysis
Impact of Model Scale. To understand how the size of the underlying language model affects counseling performance after fine-tuning on OpenR1-Psy, we trained PsyLLM variants using different scales of the Qwen3 model series: Qwen3-1.7B, Qwen3-3B, and Qwen3-8B (Yang et al., 2025). These models were fine-tuned using the same methodology and dataset as the main PsyLLM model. As shown in Figure 4 (Left), there is a clear trend of improving performance with increasing model size. The Qwen3-8B model achieves the highest overall Normalized Average score (0.757), indicating superior performance compared to the smaller models. The Qwen3-3B model performs better than the 1.7B version, achieving a Normalized Average of 0.738, while the smallest Qwen3-1.7B consistently shows the lowest overall performance, with a Normalized Average of 0.675. These results indicate that larger base models, even when fine-tuned on the same dataset, possess stronger inherent capabilities that translate into better overall performance in generating nuanced, professional, and authentic psychological counseling responses.

Table 6: Ablation study of diagnostic standard and therapeutic strategy.

| Method | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| Prompt w/o Clinical Frame | 1.075 | 2.994 | 2.079 | 0.979 | 0.740 |
| Prompt w/o Therapy Guidance | 0.803 | 2.871 | 2.053 | 0.985 | 0.697 |
| OpenR1-Psy | 1.083 | 3.277 | 2.299 | 0.988 | 0.778 |

Table 7: Evaluation scores of counselor thinking across five cognitive dimensions.

| Method | Empathy | Clarity | Justification | Coherence | Structure | Normalized Avg |
|---|---|---|---|---|---|---|
| Human | 2.879 | 2.890 | 2.990 | 2.672 | 3.000 | 0.962 |
| Automatic Evaluation | 2.806 | 2.710 | 2.660 | 2.970 | 2.882 | 0.935 |

Impact of Data Scale. To investigate the influence of training data volume, we trained PsyLLM using varying proportions (20%, 40%, 60%, 80%, and 100%) of the OpenR1-Psy dataset, keeping the base model (Qwen3-8B) consistent. As shown in Figure 4 (Right), the impact of data scale on the Normalized Average is not strictly monotonic. Performance generally improves from 20% (0.718) up to 80% (0.746), with 80% achieving the highest Normalized Average. However, the Normalized Average drops significantly when the full 100% of the training data is used (0.656). The Safety metric remained consistently high across all data scales (around 0.99). This indicates that simply using more data for model training does not always yield the best performance.

Effect of Diagnostic Standard & Therapeutic Strategy. We conducted an ablation study on data generation by omitting either diagnostic guidance ("Prompt w/o Clinical Frame") or therapeutic guidance ("Prompt w/o Therapy Guidance"), fine-tuning PsyLLM on these ablated datasets as well as the full OpenR1-Psy data. As seen in Table 6, training on the full OpenR1-Psy dataset yields the best Normalized Average (0.778). Removing therapeutic guidance results in a notable performance drop (0.697), especially in Comprehensiveness and Professionalism. Removing diagnostic guidance also reduces performance (0.740), though less significantly. This confirms that both diagnostic standards and therapeutic strategies are critical.

Reasoning Process Evaluation. A key aspect of our approach is the explicit generation of a reasoning trace.
To evaluate its quality, we employed two sets
of evaluators: five psychology graduate students ("Human") and the Gemini model ("Automatic Evaluation"). Both groups scored the same set of PsyLLM-generated reasoning traces on five dimensions: Empathy, Clarity, Justification, Coherence, and Structure. Table 7 shows the average scores from each evaluator group. Human evaluators assigned slightly higher scores overall and on most dimensions, resulting in a higher Normalized Average (0.962 vs. 0.935). Conversely, the automatic evaluator scored higher on Coherence. The high scores from both human and automatic evaluators indicate that PsyLLM can generate reasoning processes perceived as high-quality and comparable to human expert standards.

5 Conclusion
This paper addresses critical limitations of existing LLMs for mental health support, specifically their lack of explicit diagnostic reasoning (DSM/ICD aligned) and of diverse therapeutic modality integration. We introduce PsyLLM, the first model designed to systematically integrate both, facilitated by a novel automated data synthesis pipeline creating the OpenR1-Psy dataset. This dataset incorporates explicit reasoning traces guided by diagnostic standards and multiple therapeutic frameworks (e.g., CBT, ACT, psychodynamic), validated through multi-dimensional filtering. Our new benchmark, assessing counseling quality across comprehensiveness, professionalism, authenticity, and safety, demonstrates that PsyLLM significantly outperforms state-of-the-art baselines.

Limitations
PsyLLM performs well on short to medium-length dialogues but may struggle with long-term context tracking, complex emotional transitions, or maintaining nuanced therapeutic pacing across extended multi-turn interactions. Furthermore, due to limited computational resources and the lack of access to high-memory GPUs, PsyLLM is currently based on an 8B-sized model. Larger models with greater capacity for reasoning and emotional understanding have not been explored in this study.

Ethics Statements
Data Privacy. All data used in this study originate from publicly available datasets. The first portion consists of anonymous user-generated posts, in which users' identities have been anonymized by the original data providers to ensure privacy (i.e., anonymous posting with no traceable user IDs). The second portion comprises an existing dialogue dataset, which underwent rule-based filtering, manual rewriting, and human proofreading to ensure the removal of any sensitive or privacy-related content.

Potential Risks of the Model. Although PsyLLM is optimized for therapeutic relevance, it has not been supervised or validated by licensed clinicians. Its responses should not be regarded as a replacement for professional mental health services. We conducted a thorough safety assessment of the training and evaluation data to ensure a high level of content security. However, the model fine-tuning process lacked human feedback, which means that certain responses may still carry potential risks of emotional harm, especially in sensitive scenarios.

References
Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy P. Lillicrap, and 33 others. 2023. Gemini: A family of highly capable multimodal models. CoRR, abs/2312.11805.
Aaron T Beck. 1979. Cognitive therapy and the emotional disorders. Penguin.
Larry E Beutler, T Mark Harwood, Matteo Bertoni, and Jimena Thomann. 2006. Systematic treatment selection and prescriptive therapy.

Louis G Castonguay, Catherine F Eubanks, Marvin R Goldfried, J Christopher Muran, and Wolfgang Lutz. 2015. Research on psychotherapy integration: Building on the past, looking to the future. Psychotherapy Research, 25(3):365–382.

Keqi Chen, Zekai Sun, Huijun Lian, Yingming Gao, and Ya Li. 2025. Psy-copilot: Visual chain of thought for counseling. CoRR, abs/2503.03645.

Yirong Chen, Xiaofen Xing, Jingkai Lin, Huimin Zheng, Zhenyu Wang, Qi Liu, and Xiangmin Xu. 2023. Soulchat: Improving llms' empathy, listening, and comfort abilities through fine-tuning with multi-turn empathy conversations. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 1170–1183. Association for Computational Linguistics.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, and 81 others. 2024. Deepseek-v3 technical report. CoRR, abs/2412.19437.

Alex Zhihao Dou, Dongfei Cui, Jun Yan, Weida Wang, Benteng Chen, Haoming Wang, Zeke Xie, and Shufei Zhang. 2025. Dsadf: Thinking fast and slow for decision making. arXiv preprint arXiv:2505.08189.

Muskan Garg, Amirmohammad Shahbandegan, Amrit Chadha, and Vijay Mago. 2023. An annotated dataset for explainable interpersonal risk factors of mental disturbance in social media posts. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 11960–11969. Association for Computational Linguistics.

Zhuohan Ge, Nicole Hu, Darian Li, Yubo Wang, Shihao Qi, Yuming Xu, Han Shi, and Jason Zhang. 2025. A survey of large language models in mental health disorder detection on social media. arXiv preprint arXiv:2504.02800.

Jasmin Rejaye Gryesten, Stig Poulsen, Christian Moltu, Elisabeth Belmudez Biering, Kirsten Møller, and Sidse Marie Arnfred. 2024. Patients' and therapists' experiences of standardized group cognitive behavioral therapy: Needs for a personalized approach. Administration and Policy in Mental Health and Mental Health Services Research, 51(5):617–633.

James E Harrison, Stefanie Weber, Robert Jakob, and Christopher G Chute. 2021. Icd-11: an international classification of diseases for the twenty-first century. BMC Medical Informatics and Decision Making, 21:1–10.

Steven C Hayes and Heather Pierson. 2005. Acceptance and commitment therapy. Springer.

Karishma Hegde and Hemadri Jayalath. 2025. Emotions in the loop: A survey of affective computing for emotional support. arXiv preprint arXiv:2505.01542.

He Hu, Yucheng Zhou, Lianzhong You, Hongbo Xu, Qianning Wang, Zheng Lian, Fei Richard Yu, Fei Ma, and Laizhong Cui. 2025. Emobench-m: Benchmarking emotional intelligence for multimodal large language models. arXiv preprint arXiv:2502.04424.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, and 79 others. 2024. Gpt-4o system card. CoRR, abs/2410.21276.

June M. Liu, Donghao Li, He Cao, Tianhe Ren, Zeyi Liao, and Jiamin Wu. 2023a. Chatcounselor: A large language models for mental health support. CoRR, abs/2309.15461.
June M. Liu, Donghao Li, He Cao, Tianhe Ren, Zeyi Liao, and Jiamin Wu. 2023b. Chatcounselor: A large language models for mental health support. CoRR, abs/2309.15461.
Hongbin Na, Yining Hua, Zimu Wang, Tao Shen, Beibei Yu, Lilin Wang, Wei Wang, John B. Torous, and Ling Chen. 2025. A survey of large language models in psychotherapy: Current landscape and future directions. CoRR, abs/2502.11095.

OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.

Inna Pirina and Çagri Çöltekin. 2018. Identifying depression on reddit: The effect of training data. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task, SMM4H@EMNLP 2018, Brussels, Belgium, October 31, 2018, pages 9–12. Association for Computational Linguistics.

Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, and Zhenzhong Lan. 2024a. SMILE: single-turn to multi-turn inclusive language expansion via chatgpt for mental health support. In Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024, pages 615–636. Association for Computational Linguistics.

Huachuan Qiu, Lizhi Ma, and Zhenzhong Lan. 2024b. Psyguard: An automated system for suicide detection and risk assessment in psychological counseling. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 4581–4607. Association for Computational Linguistics.

Darrel A Regier, Emily A Kuhl, and David J Kupfer. 2013. The dsm-5: Classification and criteria changes. World Psychiatry, 12(2):92–98.

John Rowan. 2006. The humanistic approach to action research. Handbook of Action Research: The Concise Paperback Edition, pages 106–116.

Yubo Shu, Zhewei Huang, Xin Wu, Chen Hu, Shuchang Zhou, and Daxin Jiang. 2025. Dialoguereason: Rule-based rl sparks dialogue reasoning in llms. arXiv preprint arXiv:2505.07049.

Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021. Psyqa: A chinese dataset for generating long counseling text for mental health support. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 1489–1503. Association for Computational Linguistics.

Elsbeth Turcan and Kathy McKeown. 2019. Dreaddit: A reddit dataset for stress analysis in social media. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis, LOUHI@EMNLP 2019, Hong Kong, November 3, 2019, pages 97–107. Association for Computational Linguistics.

Ke Wang, Junting Pan, Linda Wei, Aojun Zhou, Weikang Shi, Zimu Lu, Han Xiao, Yunqiao Yang, Houxing Ren, Mingjie Zhan, and 1 others. 2025. Mathcoder-vl: Bridging vision and code for enhanced multimodal mathematical reasoning. arXiv preprint arXiv:2505.10557.

Ruiyi Wang, Stephanie Milani, Jamie C. Chiu, Jiayin Zhi, Shaun M. Eack, Travis Labrum, Samuel M Murphy, Nev Jones, Kate V Hardy, Hong Shen, Fei Fang, and Zhiyu Chen. 2024. PATIENT-ψ: Using large language models to simulate patients for training mental health professionals. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12772–12797, Miami, Florida, USA. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Jingran Xie, Shun Lei, Yue Yu, Yang Xiang, Hui Wang, Xixin Wu, and Zhiyong Wu. 2025. Leveraging chain of thought towards empathetic spoken dialogue without corresponding question-answering data. CoRR, abs/2501.10937.

Ancheng Xu, Di Yang, Renhao Li, Jingwei Zhu, Minghuan Tan, Min Yang, Wanxin Qiu, Mingchen Ma, Haihong Wu, Bingyu Li, Feng Sha, Chengming Li, Xiping Hu, Qiang Qu, Derek F. Wong, and Ruifeng Xu. 2025. Autocbt: An autonomous multi-agent framework for cognitive behavioral therapy in psychological counseling. CoRR, abs/2501.09426.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and 1 others. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388.

Kailai Yang, Tianlin Zhang, Ziyan Kuang, Qianqian Xie, Jimin Huang, and Sophia Ananiadou. 2024. Mentallama: Interpretable mental health analysis on social media with large language models. In Proceedings of the ACM on Web Conference 2024, WWW 2024, Singapore, May 13-17, 2024, pages 4489–4500. ACM.

Tao Yang, Tianyuan Shi, Fanqi Wan, Xiaojun Quan, Qifan Wang, Bingzhe Wu, and Jiaxiang Wu. 2023. Psycot: Psychological questionnaire as powerful chain-of-thought for personality detection. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3305–3320. Association for Computational Linguistics.

Jing Ye, Lu Xiang, Yaping Zhang, and Chengqing Zong. 2025. Sweetiechat: A strategy-enhanced role-playing framework for diverse scenarios handling emotional support agent. In Proceedings of the 31st International Conference on Computational Linguistics, COLING 2025, Abu Dhabi, UAE, January 19-24, 2025, pages 4646–4669. Association for Computational Linguistics.

Chenhao Zhang, Renhao Li, Minghuan Tan, Min Yang, Jingwei Zhu, Di Yang, Jiahao Zhao, Guancheng Ye, Chengming Li, and Xiping Hu. 2024a. Cpsycoun: A report-based multi-turn dialogue reconstruction and evaluation framework for chinese psychological counseling. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 13947–13966. Association for Computational Linguistics.

Tenggan Zhang, Xinjie Zhang, Jinming Zhao, Li Zhou, and Qin Jin. 2024b. Escot: Towards interpretable emotional support dialogue systems. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 13395–13412. Association for Computational Linguistics.

Jiaxing Zhao, Xihan Wei, and Liefeng Bo. 2025. R1-omni: Explainable omni-multimodal emotion recognition with reinforcement learning. CoRR, abs/2503.05379.

Yucheng Zhou, Xiang Li, Qianning Wang, and Jianbing Shen. 2024. Visual in-context learning for large vision-language models. arXiv preprint arXiv:2402.11574.

Yucheng Zhou, Jianbing Shen, and Yu Cheng. 2025a. Weak to strong generalization for large language models with multi-capabilities. In The Thirteenth International Conference on Learning Representations.

Yucheng Zhou, Lingran Song, and Jianbing Shen. 2025b. Training medical large vision-language models with abnormal-aware feedback. arXiv preprint arXiv:2501.01377.
A Related Work
A.1 Mental Support Chatbots
The growing demand for accessible and scalable mental health support has fueled the development of AI-driven dialogue systems capable of offering emotional and psychological assistance (Ge et al., 2025). Early efforts in this area primarily centered on single-turn, empathy-oriented response generation. For instance, ChatCounselor (Liu et al., 2023b) trained LLMs to produce emotionally appropriate responses evaluated across human-centric metrics. Similarly, PsyQA (Sun et al., 2021) focused on generating supportive long-form answers in Chinese, but remained restricted to single-turn interactions without modeling therapeutic intent or multi-turn dynamics. To move beyond isolated responses, later work explored multi-turn emotional support. SMILE (Qiu et al., 2024a) proposed a framework that expands single supportive utterances into coherent, multi-turn conversations using ChatGPT, enabling better contextual continuity. SoulChat (Chen et al., 2023) extended this by fine-tuning LLMs on multi-turn empathetic dialogues, improving their ability to adapt across evolving emotional scenarios. Advancing further, SweetieChat (Ye et al., 2025) introduced a strategy-guided role-playing framework to simulate diverse emotional situations while maintaining consistency across responses. In the Chinese counseling domain, CPsyCoun (Zhang et al., 2024a) reconstructed multi-turn dialogues with therapeutic type annotations (e.g., CBT, humanistic), though it lacked modeling of the underlying reasoning behind those therapeutic strategies. To enhance transparency and interpretability, several works began to model the reasoning process itself. ESCoT (Zhang et al., 2024b) presented an Emotion-Focused and Strategy-Driven Chain-of-Thought approach, making explicit the steps of emotion recognition, appraisal, and strategy justification in supportive responses, resembling human counseling logic. Similarly, MentaLLaMA (Yang et al., 2024) focused on psychological insight in social media contexts by pairing clinical condition prediction with natural language rationales. At the clinical boundary, PsyGUARD (Qiu et al., 2024b) tackled suicide risk detection and severity assessment in multi-turn counseling dialogues, emphasizing early triage and safety-aware response design.

A.2 Deep Reasoning in LLMs
Deep reasoning in LLMs has attracted increasing attention, particularly since the introduction of CoT prompting (Wei et al., 2022), which encourages models to generate intermediate reasoning steps before producing final answers. This technique has shown notable improvements on arithmetic and symbolic tasks, revealing the latent ability of LLMs to perform multi-step inference. Recent work has extended deep reasoning capabilities beyond symbolic computation into more complex domains such as math reasoning, medical understanding, multimodal reasoning, and emotional understanding (Zhou et al., 2025b,a; Hu et al., 2025; Zhou et al., 2024). For example, Dou et al. (2025) propose a dual-system framework inspired by Kahneman's theory of fast and slow thinking, integrating reinforcement learning and vision-language reasoning to support adaptive decision-making. Similarly, reinforcement learning has been leveraged to enhance both text-based reasoning and multimodal emotion recognition, with modality-specific contributions disentangled through verifiable rewards (Zhao et al., 2025).
In mathematical contexts, Wang et al. (2025) introduce a model that bridges vision and code by aligning image understanding with code generation, achieving superior performance in structured geometry problems compared to models like GPT-4o. As research pushes toward more interactive and human-centric applications, deep reasoning in dialogue systems
has emerged as a challenging yet crucial frontier. Maintaining coherence, adaptability, and emotional sensitivity across multiple conversational turns introduces new demands. To address this, Shu et al. (2025) reformulate CoT prompting as a dialogue between reasoning agents and apply reinforcement learning to improve both coherence and solution diversity. In the context of empathetic conversations, Xie et al. (2025) propose an LPE framework that combines CoT with emotional perception from spoken input, thereby guiding emotionally intelligent response generation. Moreover, psychological structures have been introduced to support deep reasoning in personality-related tasks: Yang et al. (2023) model psychological questionnaires as structured CoT chains for personality trait detection through multi-turn dialogue, while Chen et al. (2025) introduce a graph-based reasoning visualization system to facilitate collaboration between AI and therapists in therapeutic settings.

Table 8: Full results of performance evaluation on PsyLLM and other baseline models.

| Topic | Model | Comprehensiveness | Professionalism | Authenticity | Safety |
|---|---|---|---|---|---|
| Self-growth | ChatCounselor | 0.000 | 0.774 | 0.226 | 0.419 |
| | CPsyCounX | 0.000 | 0.581 | 0.355 | 0.548 |
| | DeepSeek-V3 | 0.839 | 2.032 | 0.903 | 0.710 |
| | GPT-4o | 0.774 | 2.161 | 1.065 | 0.935 |
| | PsyLLM | 0.871 | 2.419 | 1.645 | 0.935 |
| Emotion & Stress | ChatCounselor | 0.016 | 0.873 | 0.337 | 0.504 |
| | CPsyCounX | 0.010 | 0.561 | 0.185 | 0.395 |
| | DeepSeek-V3 | 0.932 | 2.250 | 1.143 | 0.836 |
| | GPT-4o | 0.857 | 2.273 | 1.085 | 0.899 |
| | PsyLLM | 0.928 | 2.268 | 1.489 | 0.951 |
| Education | ChatCounselor | 0.000 | 0.794 | 0.305 | 0.412 |
| | CPsyCounX | 0.000 | 0.366 | 0.107 | 0.214 |
| | DeepSeek-V3 | 0.794 | 2.122 | 0.969 | 0.824 |
| | GPT-4o | 0.786 | 2.221 | 1.038 | 0.908 |
| | PsyLLM | 0.924 | 2.176 | 1.351 | 0.962 |
| Love & Marriage | ChatCounselor | 0.018 | 0.788 | 0.343 | 0.472 |
| | CPsyCounX | 0.005 | 0.374 | 0.152 | 0.326 |
| | DeepSeek-V3 | 0.833 | 1.992 | 0.962 | 0.833 |
| | GPT-4o | 0.838 | 2.162 | 0.982 | 0.919 |
| | PsyLLM | 0.881 | 2.141 | 1.386 | 0.952 |
| Family Relationship | ChatCounselor | 0.021 | 0.785 | 0.343 | 0.453 |
| | CPsyCounX | 0.003 | 0.467 | 0.183 | 0.349 |
| | DeepSeek-V3 | 0.869 | 2.114 | 1.017 | 0.827 |
| | GPT-4o | 0.830 | 2.163 | 0.993 | 0.889 |
| | PsyLLM | 0.889 | 2.135 | 1.398 | 0.941 |
| Social Relationship | ChatCounselor | 0.017 | 0.922 | 0.353 | 0.457 |
| | CPsyCounX | 0.000 | 0.655 | 0.250 | 0.362 |
| | DeepSeek-V3 | 0.897 | 2.362 | 1.181 | 0.810 |
| | GPT-4o | 0.776 | 2.345 | 1.095 | 0.888 |
| | PsyLLM | 0.888 | 2.276 | 1.500 | 0.905 |
| Sex | ChatCounselor | 0.000 | 0.667 | 0.333 | 0.667 |
| | CPsyCounX | 0.000 | 0.833 | 0.333 | 0.500 |
| | DeepSeek-V3 | 0.833 | 2.167 | 1.167 | 1.000 |
| | GPT-4o | 0.667 | 2.000 | 0.667 | 1.000 |
| | PsyLLM | 0.500 | 2.000 | 1.500 | 0.667 |
| Career | ChatCounselor | 0.000 | 0.837 | 0.279 | 0.395 |
| | CPsyCounX | 0.000 | 0.581 | 0.279 | 0.419 |
| | DeepSeek-V3 | 1.000 | 2.372 | 1.140 | 0.907 |
| | GPT-4o | 0.884 | 2.140 | 1.140 | 0.907 |
| | PsyLLM | 0.977 | 2.349 | 1.581 | 0.953 |

Table 9: Performance comparison under Qwen3-1.7B, Qwen3-3B, and Qwen3-8B base models.

| Model | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| Qwen3-1.7B (Yang et al., 2025) | 0.887 | 2.674 | 1.846 | 0.973 | 0.675 |
| Qwen3-3B (Yang et al., 2025) | 0.969 | 3.101 | 2.125 | 0.985 | 0.738 |
| Qwen3-8B (Yang et al., 2025) | 1.067 | 3.172 | 2.171 | 0.979 | 0.757 |

Table 10: Performance comparison under different data scales.

| Rate | Comprehensiveness | Professionalism | Authenticity | Safety | Normalized Avg |
|---|---|---|---|---|---|
| 20% | 0.940 | 2.945 | 2.018 | 0.999 | 0.718 |
| 40% | 0.984 | 2.938 | 2.017 | 0.998 | 0.725 |
| 60% | 0.992 | 3.058 | 2.045 | 0.999 | 0.738 |
| 80% | 1.029 | 3.007 | 2.078 | 0.999 | 0.746 |
| 100% | 0.874 | 2.417 | 1.848 | 0.991 | 0.656 |

Figure 5: The prompt of Therapeutic Interaction Planning on OpenR1-Psy.

Figure 6: The prompt of Empathic Dialogue Reconstruction on OpenR1-Psy.

Figure 7: The prompt of ICD/DSM-Grounded Therapeutic Empathy on OpenR1-Psy.
Figure 8: The prompt of Therapy-Guided Dialogue Validation on OpenR1-Psy.

Figure 9: The prompt of Therapy-Oriented Diagnostic Classification on OpenR1-Psy.
Figure 10: The prompt of Data Evaluation on OpenR1-Psy.

Figure 11: The prompt of Performance Evaluation on PsyLLM.

Figure 12: The prompt of Thinking Evaluation on PsyLLM.

Table 11: Evaluation Criteria for Counseling Quality Assessment.

| Perspective | Criterion | Score | Subtotal |
|---|---|---|---|
| Comprehensiveness | 1.1 Accurate Empathy for Core Emotions: does the response accurately identify and validate the client's specific core emotions (e.g., anxiety, powerlessness, self-doubt) with precise language, going beyond generic phrases? | 1 | 2 |
| | 1.2 Insight into Underlying Needs & Pain Points: does the response show insight into the client's deeper, often unstated, emotional pain points or unmet needs (e.g., for safety, belonging, understanding) and gently guide exploration or affirm their validity? | 1 | |
| Professionalism | 2.1 Emotional Attunement & Validation: does the counselor deeply empathize with and validate the client's emotional experience, making the client feel "felt" and understood? | 1 | 4 |
| | 2.2 Natural Integration of Support: is supportive guidance (reflection, affirmation, gentle questions) woven naturally into the conversation, without jargon or explicit theory-naming? | 1 | |
| | 2.3 Respectful & Non-Directive Style: is the language consistently warm, respectful, and non-judgmental, avoiding commands or prescriptive "you should" statements? | 1 | |
| | 2.4 Gentle Guidance & Client Agency: are suggestions offered tentatively, encouraging client-led exploration and small steps rather than providing direct solutions, and respecting client autonomy? | 1 | |
| Authenticity | 3.1 Conversational Fluency: does the counselor communicate in a fluent, natural way, mirroring real therapeutic dialogue? | 1 | 3 |
| | 3.2 Relational Presence: is there evidence of genuine emotional connection and relational presence with the client? | 1 | |
| | 3.3 Pacing and Responsiveness: does the counselor maintain appropriate pacing, leaving space for client expression, and respond in a natural, conversational manner to the client's immediate input? | 1 | |
| Safety | 4.1 Ethical Boundaries & Client Protection: does the counselor avoid intrusive or leading language, respect privacy by not probing for unnecessary sensitive details, and prioritize client well-being and autonomy? | 0.5 | 1 |
| | 4.2 Fostering Psychological Safety: does the counselor consistently convey unconditional respect and acceptance, creating an environment free of judgment or dismissal, where the client feels safe to be authentic? | 0.5 | |
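Table 11's rubric can be encoded directly, which also makes explicit where the per-metric maxima (2, 4, 3, 1) used for normalization throughout the paper come from. A minimal sketch; the abbreviated criterion keys are hypothetical, while the weights are exactly as tabulated:

```python
# Rubric weights from Table 11; subtotals give the metric maxima 2/4/3/1.
RUBRIC = {
    "comprehensiveness": {"1.1_core_emotions": 1.0, "1.2_underlying_needs": 1.0},
    "professionalism": {"2.1_attunement": 1.0, "2.2_integration": 1.0,
                        "2.3_non_directive": 1.0, "2.4_client_agency": 1.0},
    "authenticity": {"3.1_fluency": 1.0, "3.2_presence": 1.0, "3.3_pacing": 1.0},
    "safety": {"4.1_boundaries": 0.5, "4.2_psych_safety": 0.5},
}

# Sanity check: subtotals match the max scores used in Tables 3-7.
assert [sum(v.values()) for v in RUBRIC.values()] == [2.0, 4.0, 3.0, 1.0]
```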
Shared Path: Unraveling Memorization in Multilingual LLMs through Language Similarities

Xiaoyu Luo 1,2, Yiyi Chen 1, Johannes Bjerva 1, Qiongxiu Li 2 (corresponding author)
1 Department of Computer Science, 2 Department of Electronic Systems, Aalborg University, Copenhagen, Denmark
{xilu,yiyic,jbjerva}@cs.aau.dk, qili@es.aau.dk

Abstract
We present the first comprehensive study of Memorization in Multilingual Large Language Models (MLLMs), analyzing 95 languages using models across diverse model scales, architectures, and memorization definitions. As MLLMs are increasingly deployed, understanding their memorization behavior has become critical. Yet prior work has focused primarily on monolingual models, leaving multilingual memorization underexplored, despite the inherently long-tailed nature of training corpora. We find that the prevailing assumption, that memorization is highly correlated with training data availability, fails to fully explain memorization patterns in MLLMs. We hypothesize that treating languages in isolation, ignoring their similarities, obscures the true patterns of memorization. To address this, we propose a novel graph-based correlation metric that incorporates language similarity to analyze cross-lingual memorization. Our analysis reveals that among similar languages, those with fewer training tokens tend to exhibit higher memorization, a trend that only emerges when cross-lingual relationships are explicitly modeled. These findings underscore the importance of a language-aware perspective in evaluating and mitigating memorization vulnerabilities in MLLMs. This also constitutes empirical evidence that language similarity both explains Memorization in MLLMs and underpins Cross-lingual Transferability, with broad implications for multilingual NLP. We release our code at: https://github.com/xiaoyuluoit97/MLLM_memorization.

Figure 1: Overview of our framework for analyzing memorization in MLLMs using language-similarity graph-based correlation analysis.

1 Introduction
Large Language Models (LLMs) demonstrate increasingly strong capabilities in processing and understanding multiple languages (Conneau et al., 2020), resulting in advancements across a wide range of natural language processing (NLP) tasks (Choi et al., 2021; Pikuliak et al., 2021). MLLMs, in particular, empower global users to interact in their native languages, offering wide-reaching benefits in accessibility and productivity.

However, LLMs are also known to memorize portions of their training data (Carlini et al., 2021), raising serious concerns such as the leakage of copyrighted content (Chang et al., 2023) and personal information (Staab et al.). While memorization in monolingual LLMs has been widely studied, how it manifests in multilingual models remains underexplored.

Prior work predominantly attributes memorization to data volume, positing that frequent tokens or duplicated content are disproportionately memorized (Carlini et al., 2022). This echoes findings from computer vision, where long-tail examples are disproportionately memorized (Feldman and Zhang, 2020; Jiang et al., 2020; Garg et al., 2023), resulting in increased privacy and fairness risks (Li et al., 2024; Gao et al., 2023; Tramèr et al., 2022).
However, MLLMs introduce a unique complexity: languages are not processed independently but in a joint space, often sharing lexical, morphological, and syntactic features. For instance, typologically similar languages like Turkish and Azerbaijani may interact during training in ways that affect their
dynamics that are poorly understood. Together, these challenges raise important questions that motivate our inves- tigation. For example, to what extent does mem- orization in MLLMs correspond to training data volume, as suggested by long-tail distribution as- sumptions? How might cross-lingual relationships influence memorization behavior across languages? And can memorization in one language lead to un- intended leakage in another, particularly among similar languages? To answer these questions, we conduct the first large-scale study of memorization in MLLMs, un- covering critical limitations of existing research and offering a novel language-aware perspective (see Fig. 1 for an overview of our framework). Our key contributions are: •Revisiting the Long-Tail Assumption : We show that memorization in multilingual set- tings cannot be fully explained by training data volume or token frequency. In many cases, low-resource languages exhibit lower memorization rates than high-resource coun- terparts. •Language Similarity-Aware Correlation Metric: We introduce a novel graph-based correlation metric that incorporates typolog- ical and statistical similarities between lan- guages, enabling structured analysis of cross- lingual memorization dynamics. •Cross-Lingual Memorization Insights: Us- ing our metric, we find that languages with high similarity exhibit interconnected mem- orization behaviors, affording fundamental grounding for cross-lingual transferability. •Comprehensive and Robust Evalua- tion: We assess memorization using both generation-based and likelihood-based metrics, and validate our findings across over 95 languages, multiple LLM architec- tures (encoder-only and decoder-based) of varying scales, demonstrating consistent and generalizable trends.2 Related work 2.1 Memorization in LLM Memorization in deep neural networks has long been recognized as a critical issue, with implica- tions for privacy, fairness, and generalization (Feld- man and Zhang, 2020; Garg et al., 2023; Chang and Shokri, 2021; Li et al., 2025). These concerns have been empirically confirmed in LLMs. Carlini et al. (2019) first showed that generative models can inadvertently memorize and reproduce rare, sensitive training data. Carlini et al. (2021) fur- ther demonstrated that large models like GPT-2 can regurgitate unique sequences even if they ap- pear only once in the training corpus. Carlini et al. (2022) systematically quantified memorization pat- terns across model scales and architectures, while Kim et al. (2023) focused on personally identifiable information (PII) memorized by LLMs, proposing ProPILE to assess leakage from the perspective of data subjects. Recent work has formalized memorization risk, particularly distinguishing between discoverable andextractable memorization (Carlini et al., 2021; Nasr et al., 2023). The latter refers to information that an adversary can extract without direct access to the training set, posing realistic threats to de- ployed models. Studies have shown that LLMs — across GPT, T5, and others — can leak hundreds to millions of training sequences, depending on model size, duplication, and prompt strategy (Nasr et al., 2023; Carlini et al., 2022). While such risks have been studied in monolingual settings, mem- orization behavior in multilingual LLMs remains underexplored, with the exception of Cavalin et al. (2024), especially for low-resource languages oc- cupying the long tail of the training distribution. 
2.2 Cross-lingual Transferability & Language Similarity
Cross-lingual transfer entails the representation of texts in multiple natural languages in a shared multilingual space. The paradigm of representations
for cross-lingual transfer has shifted from word embeddings (Mikolov et al., 2013; Ammar et al., 2016; Vulić et al., 2019) to contextual embeddings (Conneau et al., 2019; Devlin et al., 2019; Raffel et al., 2020). Previous work investigating cross-lingual transferability mainly leverages downstream task performance to measure the transfer from a source language or languages to target languages through selective fine-tuning (Choenni et al., 2023) or using zero-shot or few-shot transfer with pre-trained MLLMs (Lauscher et al., 2020; Adelani et al., 2022; de Vries et al., 2022; Blaschke et al., 2025). Language similarity based on linguistic data has been heavily referred to in cross-lingual transferability studies (Wichmann et al., 2011; Littell et al., 2017), not without faulty representations (Toossi et al., 2024; Khan et al., 2025). Moreover, the findings on leveraging language similarity for improving downstream cross-lingual transfer remain mixed and sometimes contradictory (Philippy et al., 2023). Recently, different language similarity measures have been deployed to enhance cross-lingual transfer performance under different NLP tasks (Blaschke et al., 2025) and to analyze MLLM language distribution patterns (Chen et al., 2025). We share the perspective that language similarity is not a static concept, and different measures can be pertinent to different scenarios.

Prior research on MLLM embedding spaces has shown that sentence embeddings are composed of language-specific and language-agnostic components (Pires et al., 2019; Libovický et al., 2020; Xie et al., 2024), which have been leveraged to improve downstream performance (Tiyajamorn et al., 2021) and to investigate language relations in MLLMs (Choenni and Shutova, 2022). In addition, Lin et al. (2024) show that language similarity extracted from pretrained MLLMs with parallel sentences exhibits moderately high correlations with linguistic similarity measures, further motivating our language-aware memorization analysis. In this paper, we extract language-specific embeddings from each MLLM as language representations to compute language similarity (cf. Section 4.1).

3 Language Model Memorization
We define Memorization in the context of LLMs and examine its key formulations from different perspectives. Given an LM $f$ and a string $x$ from its training data, we split $x$ into a prefix $p$ and a suffix $s$, so that $x = p \,\|\, s$. Let the prefix $p$ consist of $n$ tokens, noted as $p = (p_1, \ldots, p_n)$; and let the suffix $s$ consist of $m$ tokens, noted as $s = (s_1, \ldots, s_m)$.

3.1 Measuring MLLM Memorization
Exact Memorization. Following the definition of extractable memorization by Carlini et al. (2022), i.e., whether a language model can reproduce a training sequence when prompted with part of it using greedy decoding, we define the Exact Memorization Ratio as $\frac{n}{n+m}$ to measure the fraction of the sequence required for exact reconstruction. Given a set of samples, we define the Exact Memorization Rate (EM) as the fraction of samples where the model, when prompted with the prefix, reproduces the suffix exactly:

$$\mathrm{EM} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(\hat{s}_i = s_i),$$

where $N$ is the total number of samples, $s_i$ is the true suffix of the $i$-th sample, $\hat{s}_i$ is the output given the prefix, and $\mathbb{1}(\cdot)$ is the indicator function.

Relaxed Memorization. As Exact Memorization is a stringent criterion, we additionally define a
Relaxed Memorization. As Exact Memorization is a stringent criterion, we additionally define a relaxed version of memorization that evaluates the predicted suffix against the ground-truth suffix using approximate string-matching metrics rather than exact match. We use BLEU (Papineni et al., 2002) and Rouge-L (Lin, 2004) as our Relaxed Memorization Scores (RM), serving as continuous indicators of memorization.

Reconstruct Likelihood Memorization. Complementary to the previous generation-based memorization metrics, we adopt the reconstruct likelihood from Kim et al. (2023) to define a probability-based metric, Reconstruct Likelihood Memorization (PM), which quantifies memorization by the likelihood the model assigns to a known sequence under its learned distribution, i.e., its internal probability of reconstructing the suffix given its prefix. Our goal is to evaluate how likely the model finds the suffix $s$ when conditioned on the prefix $p$. We define the log-likelihood of $s$ given $p$ as:

$$\log \Pr(s \mid p) = \sum_{r=1}^{m} \log p(s_r \mid p, s_{<r}),$$

where $s_{<r}$ denotes the preceding $r-1$ tokens of the suffix.

3.2 Memorization for Encoder-Decoders

The definitions above primarily assume a decoder-only LLM architecture where predictions are made in a left-to-right autoregressive manner. In contrast, encoder-decoder models such as T5 are trained with a span-denoising objective (Raffel et al., 2020). Following Carlini et al. (2022), we randomly mask a set of non-contiguous token spans from a sampled data sequence. To evaluate Exact Memorization, the model reconstructs these missing spans given the surrounding context, and we consider a string to be memorized if the generated output exactly matches the masked content. To evaluate Reconstruct Likelihood Memorization, we follow the span-corruption setup and treat the masked spans as targets. We then compute the sum of log-probabilities assigned to these tokens, conditioned on the visible parts of the sequence. T5's span-corruption objective typically masks very short spans (about three tokens on average under default settings (Raffel et al., 2020)), so token-level similarity becomes uninformative; hence we do not assess relaxed memorization for T5-based encoder-decoder models.

4 Methodology

Previous work on LLM memorization has mainly focused on data duplication and frequency in monolingual settings, with limited analysis across languages. Although correlation metrics such as Pearson can quantify global trends (e.g., measuring how token counts and memorization rates linearly co-vary), they overlook the structured dependencies among languages. Our analysis (Fig. 2) shows that languages with similar frequency distributions can exhibit divergent memorization patterns, underscoring the importance of language-aware evaluation.

4.1 Measuring Language Similarity

We leverage the language-specific subspace of the multilingual embedding space to measure language similarities. Let $L$ be a set of languages. To extract language representations from MLLMs, we use a parallel dataset $D$, in our case Flores+ (NLLB Team et al., 2024). Suppose we have $m$ sentences for each language $l \in L$ in $D$. We first extract the mean embedding $\mu_l = \frac{1}{m} \sum_{i=1}^{m} e_l^i$ for each hidden layer $h$, where $e_l^i \in \mathbb{R}^d$ is a sentence embedding. We then form a matrix $M \in \mathbb{R}^{d \times |L|}$ by concatenating $\mu_l$ across all languages. We extract the language-specific subspace $M_s$ using Algorithm 1 (Xie et al., 2024), then project each language embedding into this subspace: $s_l = M_s M_s^\top e_l$. | https://arxiv.org/abs/2505.15722v1 |
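A compact sketch of this extraction step is given below. For brevity it collapses the two-stage orthogonalization of Algorithm 1 (Appendix A) into a single mean-plus-SVD step, so it is an approximation of the actual procedure; `M` is assumed to be the $d \times |L|$ matrix of per-language mean embeddings defined above.

```python
# Sketch of the language-specific subspace projection of Section 4.1
# (cf. Algorithm 1, Xie et al., 2024); simplified to one mean + top-r SVD pass.
import numpy as np

def language_specific_embeddings(M, r):
    """M: (d, |L|) matrix of per-language mean sentence embeddings."""
    mu = M.mean(axis=1, keepdims=True)           # language-agnostic component
    U, S, Vt = np.linalg.svd(M - mu, full_matrices=False)
    Ms = U[:, :r]                                # top-r language-specific subspace
    return Ms @ (Ms.T @ M)                       # s_l = Ms Ms^T e_l, per column

def pairwise_similarity(S):
    """Cosine similarity between language-specific embeddings (columns of S)."""
    Sn = S / np.linalg.norm(S, axis=0, keepdims=True)
    return Sn.T @ Sn                             # (|L|, |L|) similarity matrix
```

The returned matrix feeds directly into the cosine-similarity measure introduced next.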
For each hidden layer $h$ in an MLLM, we measure the pair-wise language similarity for a language pair $\{l_1, l_2\}$, where $l_1, l_2 \in L$, using the cosine similarity between language-specific embeddings:

$$\cos(s_{l_1}, s_{l_2}) = \frac{s_{l_1} \cdot s_{l_2}}{\|s_{l_1}\| \cdot \|s_{l_2}\|}.$$

Empirically, we find that the language similarity drawn from the final-layer embeddings of MLLMs shows a stronger correlation with linguistically grounded similarity measures overall (cf. Appendix B.4).

Figure 2: Example graphs considering Intra-Topology and Cross-Topology.

4.2 Graph-based Correlation Analysis

We introduce our topology-based framework, which captures cross-lingual dependencies by modeling signal propagation over a language similarity graph. It rests on two empirical observations: (1) memorization patterns tend to propagate across related languages, and (2) standard correlation metrics fail to capture these structured transfer effects.

Graph Construction via Language Similarity. We represent the language space as an undirected graph $G = (V, E)$, where each node corresponds to a language and edges encode pairwise language similarity. Let $n$ be the number of languages and $A \in \mathbb{R}^{n \times n}$ the adjacency matrix, where $A_{ij}$ represents the similarity between languages $i$ and $j$. To sparsify the graph and remove self-loops, we apply thresholding with $\theta$:

$$A_{ij} = \begin{cases} 1, & \text{if } \mathrm{sim}(i, j) \geq \theta \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$

We then construct the unnormalized graph Laplacian matrix $L = D - A$, where $D_{ii} = \sum_j A_{ij}$ is the degree matrix.

Information Analysis over the Graph. To understand how language-level signals behave over this graph structure, we begin with the concept of graph smoothness, which quantifies how much a signal varies across adjacent nodes. For a scalar-valued signal $x \in \mathbb{R}^n$ defined over the graph, the smoothness is defined as (Zhou and Schölkopf, 2004):

$$x^\top L x = \sum_{(i,j) \in E} A_{ij} (x_i - x_j)^2.$$

Smaller values indicate that the signal $x$ changes slowly over similar nodes, i.e., it is smooth with respect to the graph topology. To compare how two signals (e.g., memorization scores and the number of tokens) vary together across languages, we define the graph cross-smoothness:

$$x^\top L y = \sum_{(i,j) \in E} A_{ij} (x_i - x_j)(y_i - y_j),$$

where $y \in \mathbb{R}^n$ refers to a scalar-valued signal different from $x$. This measures whether the two signals increase and decrease in tandem over topologically similar languages.

Graph-based Correlation Coefficient. Based on the above definitions, we define the proposed Graph-based Correlation Coefficient between signals $m$ (e.g., memorization scores) and $t$ (e.g., token counts) as:

$$\rho_G(m, t) = \frac{m^\top L t}{\sqrt{(m^\top L m)(t^\top L t)}}.$$

Note that the defined coefficient is bounded by the Cauchy-Schwarz inequality:

$$|m^\top L t| \leq \sqrt{(m^\top L m)(t^\top L t)}.$$

Hence, $\rho_G(m, t) \in [-1, 1]$, and it captures the structural alignment between the two signals over the graph. A value close to $1$ implies that memorization and token frequency change similarly across related languages, while values near $-1$ imply inverse alignment. $\rho_G$ accounts for the topological structure of the language space, enabling us to uncover subtle, structure-respecting relationships in MLLM memorization that would otherwise be missed by flat, language-agnostic analyses such as Pearson correlation (cf. Table 1 for details). | https://arxiv.org/abs/2505.15722v1 |
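A minimal NumPy sketch of the graph construction (Eq. 1) and the coefficient $\rho_G$ follows. The default threshold mirrors the $\theta = 0.41$ used for Table 1; the sketch takes no care of degenerate cases such as empty graphs or constant signals.

```python
# Sketch: thresholded similarity graph, Laplacian, and graph-based correlation rho_G.
import numpy as np

def graph_correlation(sim, m, t, theta=0.41):
    """sim: (n, n) language-similarity matrix; m, t: per-language signal vectors."""
    A = (sim >= theta).astype(float)
    np.fill_diagonal(A, 0.0)                     # remove self-loops (Eq. 1)
    L = np.diag(A.sum(axis=1)) - A               # unnormalized Laplacian L = D - A
    cross = m @ L @ t                            # graph cross-smoothness m^T L t
    denom = np.sqrt((m @ L @ m) * (t @ L @ t))
    return cross / denom                         # rho_G in [-1, 1] by Cauchy-Schwarz
```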
4.3 Intra-Topology & Cross-Topology Analysis

To further interpret the structure of memorization alignment, we partition the graph into subgraphs by thresholding edge weights. Each subgraph represents a cluster of similar languages; disconnected components reflect cross-topological groups. To enable meaningful comparison across different language-topology clusters, we aggregate node-level features into a single representative vector per subgraph. This aggregation is performed within each subgraph; it is | https://arxiv.org/abs/2505.15722v1 |
weighted by language prominence (node degrees) and normalized by global edge weights to preserve topological information. Specifically, for a subgraph $G' = (V', E')$, where each node $i \in V'$ has features $t_i$ (tokens) and $m_i$ (memorization), we define the subgraph-level representation as:

$$\bar{t} = \sum_{i \in V'} \left( \frac{n_i}{\sum_{j \in V'} n_j} \right) t_i,$$

where $n_i = |\{j \mid (i, j) \in E'\}|$ is the degree of node $i$. The aggregated memorization $\bar{m}$ is computed analogously. We refer to intra-topo as the set of language nodes connected by edges in the graph, while cross-topo refers to language groups that remain disconnected. The resulting subgraph-level representations enable cross-topology correlation analysis via Pearson correlation. This approach remains faithful to the internal structure of each language cluster, while capturing the relationship between memorization and training tokens across topologically dissimilar clusters. It complements our topology-aware metric $\rho_G$ by offering a cluster-level, interpretable view of memorization-complexity alignment.

5 Experimental Setup

5.1 Model Selection & Corpus Details

Studying memorization in MLLMs requires i) publicly available models with ii) fully disclosed pre-training data and iii) broad language coverage. For fair cross-architecture comparisons, we also align models by their training corpora and tokenizers whenever feasible. We use the MT5 encoder-decoder family (Xue et al., 2020), trained on mC4 (Raffel et al., 2020) covering 100+ languages, and the MGPT decoder-only series for architectural comparison. Specifically, MGPT-101 shares its tokenizer and mC4 training data with MT5-BASE. Additionally, we select MGPT-1.3B and MGPT-13B to assess scale effects; these are trained on a more balanced and filtered mC4 (cf. Table 5 for details).

As shown in Figs. 9 and 10 in Appendix B.7, the distribution of mC4 data across languages exhibits a clear long-tailed pattern. A small number of high-resource languages (such as English, Russian, and Spanish) dominate the corpus in terms of token count, while the vast majority of other languages are represented with significantly fewer tokens. This long-tailed distribution is an important factor in analyzing how memorization behaviors vary across languages in MLLMs.

Figure 3: Graph Construction at Different Thresholds θ.

5.2 Prompt Sampling

mC4 contains a substantial amount of noisy and duplicated content. For pre-processing, we sample text passages with more than 600 characters, and filter out content containing "http://", garbled tokens, repeated strings, and long sequences of meaningless digits. To ensure accurate language representation, we use CLD3 (cld) for language identification. Specifically, we retain only those samples where both the predicted language confidence and the proportion of the target language exceed 90%.

Duplicated content can disproportionately impact memorization: sequences that appear more frequently in the training set are more likely to be memorized, following a near log-linear trend (Lee et al., 2021). To control repetition, minimize potential bias, and ensure a more balanced representation across the dataset, we randomly sample 50,000 filtered examples per language with a 5-million shuffle buffer, following the sample size in Carlini et al. (2022). A handful of low-resource languages with insufficient examples are marked with an asterisk and boldface in Figs. 9 and 10. | https://arxiv.org/abs/2505.15722v1 |
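A sketch of this pre-filtering is given below. The gcld3 usage follows the public CLD3 Python bindings, but the rules are simplified (the garbled-token, repeated-string, and digit checks are omitted), so this is an approximation of the paper's pipeline rather than its exact code.

```python
# Sketch of the mC4 pre-filtering in Section 5.2: length, URL, and CLD3
# language-ID gates, with both confidence thresholds at 90%.
import gcld3

detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=2000)

def keep_sample(text: str, target_lang: str) -> bool:
    if len(text) <= 600 or "http://" in text:
        return False
    res = detector.FindLanguage(text=text)
    # retain only confident, target-language passages
    return (res.language == target_lang
            and res.probability >= 0.9
            and res.proportion >= 0.9)
```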
6 Analysis & Results We investigate Memorization in MLLMs across multiple dimensions: languages, model | https://arxiv.org/abs/2505.15722v1 |
architectures, prompt length, and model scale. In each dimension, we measure the Memorization Rates (cf. Section 3) and correlate them with per-language training data (in token counts), using both the Pearson correlation ($r$) and the Graph-based Correlation ($\rho_G$).

| Mem. Metric | r | ρG |
|---|---|---|
| EM | -0.13 | -0.24 |
| PM | -0.36 | -0.56 |
| RM (BLEU) | -0.23 | -0.36 |
| RM (Rouge-L) | -0.06 | -0.30 |

Table 1: Correlations between Memorization Rates and Training Data in Token Counts of MGPT-101. The ρG uses a graph threshold of θ = 0.41. Takeaway: the proposed ρG accentuates the correlation.

6.1 Constructing Language Graphs

To use our graph-based correlation to analyze memorization in MLLMs, we construct language-similarity-based graphs at varying thresholds θ based on Equation 1, which specifies the minimum similarity required for two languages to be considered meaningfully related. Thus, θ directly controls the sparsity of the resulting language graph. Fig. 3 illustrates this effect using a subset of MGPT-101 pre-training languages, showing how edge density and connectivity change with θ: the higher the similarity threshold θ, the sparser and more fragmented the graph. By varying θ, we adjust the granularity of the language similarity topology, enabling analysis under different levels of relational strictness.

6.2 Data Availability in Memorization

We evaluate the relationship between per-language memorization rates and training-data token counts in an MLLM, here MGPT-101. As shown in Table 1, our graph-based metric ρG, by incorporating language similarity, largely accentuates the negative correlation between language-wise memorization and token count in comparison to the Pearson correlation coefficient. This negative trend suggests that, among similar languages, those with fewer training tokens tend to exhibit higher memorization, which further corroborates our hypothesis that memorization in MLLMs cannot be explained by training data volume alone.

6.3 Cross-lingual Transferability vs. Memorization

Leveraging the constructed language graph, we measure the topology-based correlation for both intra-topo and cross-topo groups at various θ. As shown in Fig. 4, among cross-topo languages, EM and RM become largely uncorrelated with token counts, spanning from -0.2 to 0.05 as the number of language groups grows. While PM shows a stronger negative correlation, that correlation generally weakens as more cross-topo language groups are created, moving from -0.6 to -0.4. This highlights that, across distinct language groups, the correlation between memorization and data volume becomes weaker.

Figure 4: Intra-Topology and Cross-Topology Correlation Coefficients (ρG) across varying thresholds θ. Top: Memorization Rates across Thresholds. Bottom: Topology graph information via subgraph and singleton counts at varying threshold (x-axis), from 6 to 20 language groups (y-axis), with a total of 95 languages. Takeaway: Cross-lingual transferability among similar languages impacts memorization.

In contrast, and consistent with previous findings (cf. Section 6.2), intra-topo ρG values grow increasingly negative (down to -0.6) across memorization metrics as more similar languages are grouped together (as θ becomes higher), indicating an inverse relationship between training data and memorization within similar languages. | https://arxiv.org/abs/2505.15722v1 |
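To make the intra-/cross-topo comparison concrete, the following sketch implements the cluster-level aggregation of Section 4.3 and the cross-topo Pearson correlation. Singleton handling is simplified here (isolated nodes are skipped), which the actual analysis may treat differently.

```python
# Sketch: degree-weighted aggregation per connected component (Section 4.3),
# then Pearson correlation across the aggregated clusters.
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.stats import pearsonr

def cross_topo_correlation(A, m, t):
    """A: (n, n) thresholded adjacency matrix; m, t: per-language signals."""
    n_comp, labels = connected_components(A, directed=False)
    deg = A.sum(axis=1)
    m_bar, t_bar = [], []
    for c in range(n_comp):
        idx = np.where(labels == c)[0]
        if deg[idx].sum() == 0:                  # skip isolated (singleton) nodes
            continue
        w = deg[idx] / deg[idx].sum()            # degree-weighted aggregation
        m_bar.append(w @ m[idx])
        t_bar.append(w @ t[idx])
    r, _ = pearsonr(m_bar, t_bar)                # correlation across clusters
    return r
```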
From both the cross-topo and intra-topo perspectives, our results show that as MLLMs are trained with richer data from similar languages, memorization decreases: evidence that cross-lingual | https://arxiv.org/abs/2505.15722v1 |
transferability among similar languages plays an essential role in memorization in MLLMs.

6.4 Memorization across Model Architectures & Scales

Since language similarity is model-specific, its scores exhibit different distributions across models. We select a model-specific threshold θ to better capture structural patterns based on language similarity, while confirming that the observed trends hold across a range of thresholds (cf. Appendix B.2). Table 2 presents the intra-topology correlations with model-specific thresholds.

| Model | EM r | EM ρG | PM r | PM ρG |
|---|---|---|---|---|
| MT5-SMALL | 0.05 | 0.12 | 0.10 | -0.53 |
| MT5-BASE | -0.12 | -0.15 | 0.48 | 0.03 |
| MT5-LARGE | 0.01 | 0.08 | 0.47 | 0.19 |
| MGPT-1.3B | 0.22 | -0.49 | -0.39 | -0.63 |
| MGPT-13B | 0.18 | -0.13 | -0.39 | -0.78 |

Table 2: Correlations between Memorization Rates and Training Data across Models and Scales, with model-specific θ for the Intra-Topology Correlation. Takeaway: in contrast to r, ρG presents stronger correlations and more consistent alignment with prior memorization analyses. Bold values indicate the highest-magnitude correlation.

| Model | RM (BLEU) r | RM (BLEU) ρG | RM (Rouge-L) r | RM (Rouge-L) ρG |
|---|---|---|---|---|
| MGPT-1.3B | -0.18 | -0.53 | 0.42 | -0.42 |
| MGPT-13B | -0.21 | -0.31 | 0.40 | -0.04 |

Table 3: Relaxed Memorization Rates across MGPT models, with model-specific θ for the Intra-Topology Correlation.

For MGPT-1.3B and MGPT-13B, trained on a corpus with a less pronounced long-tailed distribution, r appears positive, seemingly contradicting previous findings (cf. Sections 6.2; 6.4). However, by leveraging language similarity and filtering out noisy language pairs, ρG shows negative correlations, consistent with prior findings. Notably, with PM, MGPT-13B presents the strongest negative correlation, suggesting that larger models trained on a more balanced corpus reveal the strongest inverse link between memorization and data availability in similar languages. By contrast, MT5's EM results exhibit a different trend compared to the MGPT models, which might be attributed to its encoder-decoder architecture. As RM is not applicable to MT5-based models (cf. Section 3.2), we report the relaxed memorization metrics for MGPT models in Table 3. We observe a consistent trend that aligns with our earlier findings: memorization is negatively correlated with training-data quantity among similar languages.

In summary, our analysis and results support the claim that memorization in MLLMs is not shaped solely by training data volume, as commonly observed in computer vision tasks, but also by the intricacies among languages. Specifically, when language similarity is incorporated via a topology-based metric, we show that languages with fewer training tokens tend to exhibit higher memorization, a pattern that only becomes evident when language relations are explicitly modeled.

6.5 Effect of Prompt Length & Model Scale on Memorization

| Model | Prompt Len. | EM (%) | PM | RM (B) | RM (R) |
|---|---|---|---|---|---|
| GPT2 decoder-only: MGPT-101 | | | | | |
| MGPT-101 | 50 | 0.22 | -44.4 | 3.2 | 9.8 |
| MGPT-101 | 100 | 0.42 | -41.9 | 3.6 | 10.1 |
| MGPT-101 | 150 | 0.56 | -40.9 | 3.9 | 10.1 |
| GPT3 decoder-only: MGPT-1.3B / 13B | | | | | |
| MGPT-1.3B | 50 | 0.31 | -33.7 | 4.1 | 5.7 |
| MGPT-1.3B | 100 | 0.29 | -32.0 | 3.7 | 5.7 |
| MGPT-1.3B | 150 | 0.32 | -31.1 | 3.5 | 4.8 |
| MGPT-13B | 50 | 1.01 | -32.2 | 7.1 | 7.6 |
| MGPT-13B | 100 | 1.38 | -30.2 | 8.1 | 8.2 |
| MGPT-13B | 150 | 1.56 | -29.5 | 8.6 | 8.4 |
| Encoder-Decoder: MT5 family | | | | | |
| MT5-SMALL | 50 | 0.02 | -66.1 | – | – |
| MT5-SMALL | 100 | 0.15 | -56.9 | – | – |
| MT5-SMALL | 150 | 0.25 | -61.3 | – | – |
| MT5-BASE | 50 | 0.07 | -45.7 | – | – |
| MT5-BASE | 100 | 0.50 | -35.0 | – | – |
| MT5-BASE | 150 | 0.90 | -31.4 | – | – |
| MT5-LARGE | 50 | 0.02 | -78.4 | – | – |
| MT5-LARGE | 100 | 0.23 | -52.6 | – | – |
| MT5-LARGE | 150 | 0.49 | -39.0 | – | – |

Table 4: Memorization Rates across various prompt lengths (35, 85, 135), model architectures, and model scales. The predicted token length is fixed at 15. The highest memorization rates for each model are bold. Takeaway: overall, memorization rates increase with increasing prompt length, with a few exceptions.

To investigate the effects of the experimental setup on memorization, we measure memorization across models of different architectures and scales at varying prompt lengths (35, 85, 135), with a fixed output token length of 15. The prompt length refers to the prefix length in the context of decoder-only models. As shown in Table 4, across all model types we observe a consistent trend: longer prompts lead to higher memorization. This pattern holds across the memorization metrics, with a few exceptions, and aligns with prior findings on memorization in monolingual LMs, indicating that longer contexts offer more cues for memorization (Carlini et al., 2021, 2022).

In GPT-3-based decoder-only models, we also observe a clear scaling effect: larger models exhibit stronger memorization, particularly in exact memorization. For example, EM increases from 0.32% in MGPT-1.3B to 1.56% in MGPT-13B with a prefix of length 135. Results on other metrics (e.g., PM, RM) follow this trend with few exceptions. In comparison, the encoder-decoder models tell a different story. While memorization generally increases with scale (e.g., from MT5-SMALL to MT5-BASE), the largest model (MT5-LARGE) exhibits lower memorization than MT5-BASE. In addition, we observe that MT5-LARGE, without downstream finetuning, produces more broken completions for masked tokens. We hypothesize that this instability may lead to reduced memorization rates in MT5-LARGE, especially in a masked language modeling context. We provide a random example of such unstable generation in Appendix B.6.

6.6 Language-Level Memorization across Prompt Lengths & Model Scales

We analyze how language-level memorization varies across different prompt lengths and model scales by computing Pearson correlations of per-language memorization rates under each condition. Across all models, language-level memorization distributions at different prompt lengths remain strongly correlated. For decoder-only models, Pearson correlations consistently exceed 0.9 on all memorization metrics, while for the MT5 models they are generally above 0.8, with the lowest still above 0.66. These results indicate that languages with high memorization tend to remain highly memorized regardless of prompt length. See Tables 12 and 13 for detailed results.

A similar trend holds across model scales. Across all metrics and model scales, the Pearson correlation consistently shows a strong positive relationship, with the lowest value being 0.71. These results suggest that memorization tendencies are stable, intrinsic language-level characteristics that generalize across both prompt length and model scale. We observe a "the poorer get poorer" phenomenon, where languages with high memorization consistently remain high across settings. See Table 14 for full results. | https://arxiv.org/abs/2505.15722v1 |
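The stability analysis above reduces to Pearson correlations between per-language rate vectors under paired conditions; a trivial sketch, assuming rates are stored per setting and per language:

```python
# Sketch for Section 6.6: correlate per-language memorization rates between
# adjacent experimental conditions (e.g., prompt lengths 50 vs. 100).
from scipy.stats import pearsonr

def setting_stability(rates_by_setting, lang_order):
    """rates_by_setting: dict mapping setting name -> {language: rate}."""
    settings = list(rates_by_setting)
    for a, b in zip(settings, settings[1:]):
        xs = [rates_by_setting[a][l] for l in lang_order]
        ys = [rates_by_setting[b][l] for l in lang_order]
        r, _ = pearsonr(xs, ys)
        print(f"{a} vs. {b}: r = {r:.2f}")
```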
tion in MLLMs, grounding observed memorization patterns in language similarity and revealing cross-linguality as a key factor shaping memorization in MLLMs. To this end, we define memorization metrics tailored to language models and propose a graph-based correlation measure that incorporates language similarity, uncovering patterns that linear metrics fail to capture. Notably, we find that languages with fewer training tokens tend to exhibit higher memorization, a trend that only becomes apparent when language relationships are explicitly modeled. We experiment on a range of language models, across architectures, scales, and 95 languages, showing consistent memorization trends. Our findings urge a paradigm shift toward language-aware memorization audits in MLLMs, particularly for under-resourced languages vulnerable to cross-lingual leakage. We encourage further work at the intersection of multilingualism and memorization to develop effective strategies to mitigate memorization in MLLMs.

Limitations

While our work provides the first large-scale analysis of memorization in MLLMs, we primarily analyze models in their pre-trained state and do not explore how fine-tuning or instruction tuning might alter memorization behavior, especially in task-specific or alignment-sensitive contexts. Nonetheless, we offer a principled and extensible foundation for understanding memorization through the lens of language similarity in multilingual models.

Ethics Statement

We comply with the ACL Ethics Policy. This work aims to improve understanding of memorization risks in multilingual language models, with the broader goal of enabling safer and more privacy-preserving NLP systems. All experiments are conducted on publicly available pre-trained models and benchmark datasets. We do not train on, extract from, or attempt to infer sensitive personal information from proprietary or private data.

Acknowledgements

YC and JB are funded by the Carlsberg Foundation, under the Semper Ardens: Accelerate programme (project nr. CF21-0454). We further acknowledge the support of the AAU AI Cloud and express our gratitude to DeiC for providing computing resources on the LUMI cluster (project nr. DeiC-AAU-N5-2024085-H2-2024-28).

References

Compact language detector v3 (cld3). https://github.com/google/cld3. Accessed: 2025-04-30.

David Ifeoluwa Adelani, Graham Neubig, Sebastian Ruder, Shruti Rijhwani, Michael Beukman, Chester Palen-Michel, Constantine Lignos, Jesujoba O. Alabi, Shamsuddeen H. Muhammad, Peter Nabende, Cheikh M. Bamba Dione, Andiswa Bukula, Rooweither Mabuya, Bonaventure F. P. Dossou, Blessing Sibanda, Happy Buzaaba, Jonathan Mukiibi, Godson Kalipe, Derguene Mbaye, and 26 others. 2022. MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4488–4508, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444.

Verena Blaschke, Masha Fedzechkina, and Maartje ter Hoeve. 2025. Analyzing the effect of linguistic similarity on cross-lingual transfer: Tasks and experimental setups matter. arXiv preprint arXiv:2501.14491.

Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2022. | https://arxiv.org/abs/2505.15722v1 |
Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations. Nicholas Carlini, Chang | https://arxiv.org/abs/2505.15722v1 |
Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Eval- uating and testing unintended memorization in neu- ral networks. In 28th USENIX security symposium (USENIX security 19) , pages 267–284. Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-V oss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, and 1 others. 2021. Extracting training data from large language models. In 30th USENIX security symposium (USENIX Security 21) , pages 2633–2650. Paulo Cavalin, Pedro Henrique Domingues, Claudio Pinhanez, and Julio Nogima. 2024. Fixing rogue memorization in many-to-one multilingual transla- tors of extremely-low-resource languages by rephras- ing training samples. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages 4503–4514, Mexico City, Mexico. Association for Computational Linguistics. Hongyan Chang and Reza Shokri. 2021. On the privacy risks of algorithmic fairness. pages 292–303. IEEE. Kent Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. 2023. Speak, memory: An archaeol- ogy of books known to chatgpt/gpt-4. In Proceedings of the 2023 Conference on Empirical Methods in Nat- ural Language Processing , pages 7312–7327. Yiyi Chen, Qiongxiu Li, Russa Biswas, and Johannes Bjerva. 2025. Large language models are easily con- fused: A quantitative metric, security implications and typological analysis. In Findings of the Associ- ation for Computational Linguistics: NAACL 2025 , pages 3810–3827, Albuquerque, New Mexico. Asso- ciation for Computational Linguistics. Rochelle Choenni, Dan Garrette, and Ekaterina Shutova. 2023. How do languages influence each other? study- ing cross-lingual data sharing during lm fine-tuning. arXiv preprint arXiv:2305.13286 . Rochelle Choenni and Ekaterina Shutova. 2022. Inves- tigating language relationships in multilingual sen- tence encoders through the lens of linguistic typology. Computational Linguistics , 48(3):635–672. Hyunjin Choi, Judong Kim, Seongho Joe, Seungjai Min, and Youngjune Gwon. 2021. Analyzing zero-shot cross-lingual transfer in supervised nlp tasks. arXiv preprint arXiv:2101.10649 .Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116 . Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics , pages 8440– 8451. Wietse de Vries, Martijn Wieling, and Malvina Nissim. 2022. Make the best of cross-lingual transfer: Ev- idence from POS tagging with over 100 languages. InProceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 7676–7685, Dublin, Ireland. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Vitaly Feldman and Chiyuan Zhang. 2020. What neural networks memorize and why: Discovering the long | https://arxiv.org/abs/2505.15722v1 |
tail via influence estimation. 33:2881–2891. Yinghua Gao, Yiming Li, Linghui Zhu, Dongxian Wu, Yong Jiang, and Shu-Tao Xia. 2023. Not all sam- ples are born equal: Towards effective clean-label backdoor attacks. Pattern Recognition , 139:109512. Isha Garg, Deepak Ravikumar, and Kaushik Roy. 2023. Memorization through the lens of curvature of loss function around samples. arXiv preprint arXiv:2307.05831 . Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C Mozer. 2020. Exploring the memorization-generalization continuum in deep learning. arXiv preprint arXiv:2002.03206 . Aditya Khan, Mason Shipton, David Anugraha, Kaiyao Duan, Phuong H. Hoang, Eric Khiu, A. Seza Do˘gruöz, and En-Shiun Annie Lee. 2025. URIEL+: Enhancing linguistic inclusion and usability in a typo- logical and multilingual knowledge base. In Proceed- ings of the 31st International Conference on Compu- tational Linguistics , pages 6937–6952, Abu Dhabi, UAE. Association for Computational Linguistics. Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, and Seong Joon Oh. 2023. Propile: Probing privacy leakage in large language models. Advances in Neural Information Processing Systems , 36:20750–20762. 10 Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavas. 2020. From zero to hero: On the limitations of zero-shot language transfer with mul- tilingual transformers. In Conference on Empirical Methods in Natural Language Processing . Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499 . Qiongxiu Li, Xiaoyu Luo, Yiyi Chen, and Johannes Bjerva. 2025. Trustworthy machine learning via memorization and the granular long-tail: A survey on interactions, tradeoffs, and beyond. arXiv preprint arXiv:2503.07501 . Xiao Li, Qiongxiu Li, Zhanhao Hu, and Xiaolin Hu. 2024. On the privacy effect of data enhancement via the lens of memorization. Jindˇrich Libovický, Rudolf Rosa, and Alexander Fraser. 2020. On the language neutrality of pre-trained mul- tilingual representations. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2020 , pages 1663–1674, Online. Association for Computa- tional Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out , pages 74–81. Peiqin Lin, Chengzhi Hu, Zheyu Zhang, Andre Mar- tins, and Hinrich Schuetze. 2024. mPLM-sim: Better cross-lingual similarity and transfer in multilingual pretrained language models. In Findings of the Asso- ciation for Computational Linguistics: EACL 2024 , pages 276–310, St. Julian’s, Malta. Association for Computational Linguistics. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers , volume 2, pages 8–14. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 . Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A Feder Cooper, Daphne Ippolito, Christopher A Choquette-Choo, Eric Wallace, Flo- rian Tramèr, and Katherine Lee. 2023. Scalable ex- traction of training data from (production) language models. 
arXiv preprint arXiv:2311.17035 . NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, | https://arxiv.org/abs/2505.15722v1 |
Kevin Hef- fernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault,Gabriel Mejia Gonzalez, Prangthip Hansanti, and 20 others. 2024. Scaling neural machine translation to 200 languages. Nature , 630(8018):841–846. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics , pages 311–318. Fred Philippy, Siwen Guo, and Shohreh Haddadan. 2023. Towards a common understanding of con- tributing factors for cross-lingual transfer in multi- lingual language models: A review. arXiv preprint arXiv:2305.16768 . Matúš Pikuliak, Marián Šimko, and Mária Bieliková. 2021. Cross-lingual learning for text process- ing: A survey. Expert Systems with Applications , 165:113765. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 4996–5001, Flo- rence, Italy. Association for Computational Linguis- tics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of machine learning research , 21(140):1–67. Robin Staab, Mark Vero, Mislav Balunovic, and Martin Vechev. Beyond memorization: Violating privacy via inference with large language models. In The Twelfth International Conference on Learning Representa- tions . Nattapong Tiyajamorn, Tomoyuki Kajiwara, Yuki Arase, and Makoto Onizuka. 2021. Language- agnostic representation from multilingual sentence encoders for cross-lingual similarity estimation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , pages 7764–7774, Online and Punta Cana, Dominican Re- public. Association for Computational Linguistics. Hasti Toossi, Guo Qing Huai, Jinyu Liu, Eric Khiu, A. Seza Do ˘gruöz, and En-Shiun Annie Lee. 2024. A reproducibility study on quantifying language sim- ilarity: The impact of missing values in the uriel knowledge base. In North American Chapter of the Association for Computational Linguistics . Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, and Nicholas Carlini. 2022. Truth serum: Poisoning ma- chine learning models to reveal their secrets. pages 2779–2792. Ivan Vuli ´c, Goran Glavaš, Roi Reichart, and Anna Ko- rhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of the 11 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 4407–4418, Hong Kong, China. Association for Computational Linguistics. Søren Wichmann, Taraka Rama, and Eric W Holman. 2011. Phonological diversity, word length, and popu- lation sizes across languages: The asjp evidence. Zhihui Xie, Handong Zhao, Tong Yu, and Shuai Li. 2024. Discovering low-rank subspaces for language- agnostic multilingual representations. arXiv preprint arXiv:2401.05792 . Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934 . Dengyong Zhou and Bernhard Schölkopf. 
2004. A regularization framework for learning from graph data. In ICML 2004 workshop on | https://arxiv.org/abs/2505.15722v1 |
statistical relational learn- ing and its connections to other fields (SRL 2004) , pages 132–137. 12 A Language-specific Subspaces The algorithm for identifying language-specific subspace is as in Algorithm 1, refer to Xie et al. (2024) for more details. Algorithm 1: Language-specific Subspace Identification 1Input : Languages’ mean Embeddings M, rank of subspace r. 2Output : Language-agnostic component µ, language-specific subspace Ms, coordinates Γ. 3/* 1) Approximate Min low rank */ 4µ′←1 dM 1⊺ 5M′ s,_,Γ←Top-rSVD(M−µ′1⊺); 6M′←µ′1⊺+M′ sΓ′⊺; 7/* 2) Force orthogonality */ 8µ←(1/||M′+ 1||2)M′+1 9Ms,_,Γ←Top-rSVD(M′−µ 1⊺) B Appendix B.1 Models detail Model #Params #Langs. (used.) Architecture Layers MGPT-101 560M 101 (95) GPT-2 based 24 MGPT-61 1.3B 61 (48) GPT-3 based 24 MGPT-61 13B 61 (48) GPT-3 based 40 MT5-SMALL 300M 101 (95) Encoder-Decoder 8 MT5-BASE 580M 101 (95) Encoder-Decoder 12 MT5-BASE 1.2B 101 (95) Encoder-Decoder 24 Table 5: MLLMs and their Scale, Datasets, Languages (analyzed), Architectures. B.2 Cross-lingual correlation Threshold θ MGPT-101 0.31 0.33 0.35 0.37 0.39 0.41 0.43 0.45 #Subgraph 11 14 16 18 19 25 26 35 #Single Point 7 9 10 11 11 18 18 24 EM Intra -0.13 -0.17 -0.20 -0.26 -0.24 -0.24 -0.19 -0.17 EM Cross -0.15 -0.01 -0.03 -0.01 -0.02 0.04 0.04 -0.09 PM Intra -0.31 -0.33 -0.38 -0.43 -0.51 -0.56 -0.54 -0.57 PM Cross -0.53 -0.46 -0.42 -0.42 -0.45 -0.35 -0.36 -0.36 RM (B) Intra -0.27 -0.32 -0.35 -0.39 -0.38 -0.36 -0.33 -0.31 RM (B) Cross -0.07 0.09 -0.00 0.02 -0.03 0.04 0.04 -0.12 RM (R) Intra -0.20 -0.26 -0.27 -0.28 -0.32 -0.30 -0.26 -0.24 RM (R) Cross 0.05 0.10 0.06 0.05 0.05 0.41 0.42 0.18 Table 6: Cross-topo vs. intra-topo correlation at low thresholds for mGPT-101.Threshold θ MGPT-1.3B 0.82 0.83 0.84 0.85 0.86 0.87 0.88 0.89 #Subgraph 8 11 12 13 22 28 31 33 #Single Point 5 7 8 9 17 23 26 27 EM Intra -0.04 -0.09 -0.12 -0.11 -0.16 -0.26 -0.49 -0.43 EM Cross 0.74 0.76 0.56 0.55 0.25 0.19 0.20 0.19 PM Intra -0.45 -0.46 -0.49 -0.51 -0.59 -0.50 -0.60 -0.63 PM Cross 0.05 0.18 0.18 0.14 -0.06 -0.13 -0.20 -0.22 RM (B) Intra -0.20 -0.22 -0.27 -0.29 -0.35 -0.40 -0.53 -0.50 RM (B) Cross 0.34 0.43 0.40 0.34 0.04 -0.06 -0.09 -0.11 RM (R) Intra 0.24 0.19 0.09 0.05 -0.14 -0.27 -0.42 -0.35 RM (R) Cross -0.03 -0.17 -0.16 -0.14 0.00 0.25 0.28 0.32 Table 7: Cross-topo vs. intra-topo correlation at high thresholds for mGPT-1.3B. Threshold θ MGPT-13B 0.28 0.30 0.32 0.34 0.36 0.38 0.40 0.42 #Subgraph 4 8 10 14 22 26 31 31 #Single Point 2 5 7 10 13 17 21 21 EM Intra -0.07 -0.12 -0.10 -0.10 0.18 0.19 0.17 0.17 EM Cross 0.15 0.29 0.23 0.11 0.37 0.31 0.30 0.30 PM Intra -0.35 -0.42 -0.46 -0.55 -0.57 -0.65 -0.78 -0.78 PM Cross -0.85 0.28 0.28 0.17 -0.00 -0.12 -0.25 -0.25 RM (B) Intra -0.21 -0.27 -0.27 -0.31 -0.18 -0.21 -0.33 -0.33 RM (B) Cross -0.96 0.26 0.27 0.14 0.19 0.08 -0.07 -0.07 RM (R) Intra 0.08 0.10 -0.01 -0.04 0.12 0.23 0.20 0.20 RM (R) Cross 0.56 0.02 0.01 0.08 0.32 0.34 0.44 | https://arxiv.org/abs/2505.15722v1 |
0.44 Table 8: Cross-topology vs. intra-topology Pearson cor- relation at varying thresholds for MGPT-13B. Threshold θ MT5-SMALL 0.54 0.56 0.58 0.60 0.62 0.64 0.66 0.68 #Subgraph 30 38 46 52 56 66 72 77 #Single Point 16 23 29 36 41 57 61 70 EM Intra 0.27 0.27 0.26 0.24 0.20 0.22 0.16 0.12 EM Cross -0.14 -0.06 -0.03 -0.04 -0.02 -0.04 -0.01 0.01 PM Intra -0.13 -0.13 -0.11 -0.12 -0.19 -0.32 -0.38 -0.53 PM Cross 0.33 0.15 0.17 0.10 0.11 0.16 0.14 0.12 Table 9: Cross-topology vs. intra-topology Pearson cor- relation at varying thresholds for MT5-S MALL . Threshold θ MT5-BASE 0.72 0.74 0.76 0.78 0.80 0.82 0.84 0.86 #Subgraph 1 2 7 14 29 48 62 74 #Single Point 0 0 6 9 21 39 50 64 EM Intra -0.15 -0.13 -0.10 -0.12 0.04 0.04 -0.02 -0.14 EM Cross 0.00 -1.00 -0.24 -0.27 -0.07 -0.14 -0.20 -0.16 PM Intra 0.20 0.13 0.07 0.07 0.15 0.22 0.13 0.03 PM Cross 0.00 -1.00 -0.44 -0.16 0.04 0.21 0.27 0.28 Table 10: Cross-topology vs. intra-topology Pearson correlation at varying thresholds for MT5-B ASE. Threshold θ MT5-L ARGE 0.85 0.86 0.87 0.88 0.89 0.90 0.91 0.92 #Subgraph 7 9 21 27 51 65 79 86 #Single Point 6 8 18 23 48 60 74 83 EM Intra 0.18 0.24 0.19 0.17 0.27 0.27 0.18 0.07 EM Cross -0.31 -0.22 -0.11 -0.14 0.08 -0.06 -0.00 0.02 PM Intra 0.30 0.27 0.21 0.19 0.20 0.23 0.12 -0.13 PM Cross 0.43 0.52 0.52 0.52 0.51 0.57 0.52 0.50 Table 11: Cross-topology vs. intra-topology Pearson correlation at varying thresholds for MT5-L ARGE . 13 B.3 Prompt length impact Model EM PM 50 vs. 100 100 vs. 150 50 vs. 100 100 vs. 150 GPT2 Decoder-only: MGPT-101 MGPT-101 0.97 0.98 0.99 0.99 GPT3 Decoder-only: MGPT-1.3B / 13B MGPT-1.3B 0.90 0.98 0.99 0.99 MGPT-13B 0.96 0.99 0.99 0.99 Encoder-Decoder: MT5family MT5-S MALL 0.81 0.96 0.84 0.88 MT5-B ASE 0.86 0.97 0.99 0.98 MT5-L ARGE 0.66 0.94 0.94 0.97 Table 12: Correlation of memorization metrics (exact and family vs probability) between prompt lengths 50 vs 100 and 100 vs 150 across model families. EM = Exact Memorization, PM = Probability Memorization. Model RM BLEU RM ROUGE-L 50 vs. 100 100 vs. 150 50 vs. 100 100 vs. 150 GPT2 Decoder-only: MGPT-101 MGPT-101 0.92 0.97 0.99 0.99 GPT3 Decoder-only: MGPT-1.3B / 13B MGPT-1.3B 0.95 0.99 0.99 0.99 MGPT-13B 0.99 0.99 0.99 0.99 Table 13: Correlation of relaxed memorization metrics between different prompt lengths. Model Pair Mem. Metric r MT5-S MALL vs.MT5-B ASEEM 0.71 PM 0.76 MT5-B ASE vs.MT5-L ARGEEM 0.81 PM 0.72 MGPT-1.3B vs. MGPT-13BEM 0.92 PM 0.99 RM (BLEU) 0.97 RM (ROUGE-L) 0.99 Table 14: Pairwise memorization correlation ( r) be- tween adjacent model scales for exact memorization (EM), probability memorization (PM), and reference match metrics (RM). B.4 Layer-wise Lang2Vec correlation We include supplementary visualizations showing how various linguistic feature correlations evolve across layers for different multilingual models. Figure 5: Layer-wise trend for Lang2Vec (Syntax) . Figure 6: Layer-wise trend for Lang2Vec (Phonology) . Figure 7: Layer-wise trend for ASJP (SVD) | https://arxiv.org/abs/2505.15722v1 |
. Figure 8: Layer-wise trend for ASJP (UMAP) . 14 B.5 Examples of Exact Memorization Danish MGPT-101 Prompt: Americas Best Value Inn Santa Rosa tilbyder også mange faciliteter der vil berige dit ophold i Santa Rosa (CA). Hotellet tilbyder sine gæster adgang til et stort udvalg af servicetilbud, som trådløst internet i fællesområder, parkering, familieværelse. Hotellets bekvemmeligheder er særligt udvalgt for at sikre den højeste komfort. På nogle af værelserne kan gæsterne finde internetadgang - trådløst, Reference: ikke-rygerværelser, aircondition, skrivebord Prediction: ikke-rygerværelser, aircondition, skrivebord Danish MT5-BASE Prompt: - Se på kort Mere om Pensjonat Mi <extra_id_0> Miłosna er indrettet til <extra_id_1> - og forretningsrejse <extra_id_2> er idéelt i Kwidzyn; én af byens mest populære beliggenheder. Herfra har gæster glæde af nem adgang til alt, hvad denne livlige by kan tilbyde. Med sin praktisk <extra_id_3> b<extra_id_4> dette hotel nem adgang til byens vigtigste sev Reference: <extra_id_0> łosna Pensjonat <extra_id_1> både ferie <extra_id_2> nde og ligg <extra_id_3> e <extra_id_4> eliggenhed tilbyder Prediction: <extra_id_0> łosna Pensjonat <extra_id_1> både ferie <extra_id_2> nde og ligg <extra_id_3> e <extra_id_4> eliggenhed tilbyder German MGPT-101 Prompt: Wir denken ebenfalls, dass solcherlei akzeptabel recherchierte Tests, überaus hilfreich sind. Trotzdem wollen wir du jene Gattung von Produktvorstellungen nicht anbieten, weil der Markt außerordentlich schnelllebig und dynamisch ist und zum wiederholten Male neumodische Produktkette dazukommen und die "alten" Produktmodelle uninteressant werden, egal um welches Produkt es geht. Deswegen bieten wir auf unserer Seite ausschließlich eine Darstellung von den jetzigen 5 Produkte an. Somit kann Reference: man sich selbsttätig seine Favoriten intuitiv raussuchenPrediction: man sich selbsttätig seine Favoriten intuitiv raussuchen German MT5-BASE Prompt: die Versandkosten ungeachtet dessen überaus nie <extra_id_0> halten werden oder keineswegs erst anfallen. Zu diesem Zweck gehören die Leistung, die getrennten Einstellungen, die Größe des Körpers und der genaue Einsatzbereich. Das <extra_id_1> ein außergewöhnlich breites Angebot von Erzeugnissen fix <extra_id_2> roduzenten ak <extra_id_3> . Häufig werden lediglich wenige be <extra_id_4> t, weil die Reference: <extra_id_0> drig ge <extra_id_1> Kaufportal offeriert <extra_id_2> vom P <extra_id_3> kurat wie von Händlern <extra_id_4> rücksichtig Prediction: <extra_id_0> drig ge <extra_id_1> Kaufportal offeriert <extra_id_2> vom P <extra_id_3> kurat wie von Händlern <extra_id_4> rücksichtig English MGPT-101 Prompt: exactly dimension of Modern Ideas Sports Wallpapers Backgrounds Hd On The App Store was 246x246 pixels. You can also look for some pictures that related to Modern Ideas Sports Wallpapers Backgrounds Hd On The App Store by scroll down to collection on below this picture. If you want to find the other picture or article about Sports Wallpapers just push the next button or previous button; or if you are interested in similar pictures of Modern Ideas Sports Wallpapers Backgrounds Hd On Reference: The App Store, you are free to browse through search feature that Prediction: The App Store, you are free to browse through search feature that English MT5-BASE Prompt: the administration announced a $6 million investment over two years for provider education and outreach. 
Expand support <extra_id_0> with Alzheimer 15 <extra_id_1> their families: <extra_id_2> with Alzheimer’s disease and their families and care <extra_id_3> requires giving them the tools that they need, helping to plan for future needs, and ensuring | https://arxiv.org/abs/2505.15722v1 |
that safety and dignity are <extra_id_4> ed,” the report says. The announcement proposes an investment Reference: <extra_id_0> for people <extra_id_1> ’s disease and <extra_id_2> “Supporting people <extra_id_3> givers <extra_id_4> maintain Prediction: <extra_id_0> for people <extra_id_1> ’s disease and <extra_id_2> “Supporting people <extra_id_3> givers <extra_id_4> maintain Chinese MGPT-101 Prompt: 大。新宝gg游戏平台网页版第88届奥斯卡颁奖 礼已经落下帷幕,与其有关的话题还在持续。获 奖的近20部影片中有不少改编自小说,单是入 围“最佳影片”角逐的9部影片就有5部改编自小 说。其中,像《荒野猎人》《房间》等获奖影片 的原著小说都出版了中文版。此外,获提名的 《火星救援》《卡罗尔》等四部电影的原著小 说也有了中文版。看过电影后,不妨去读读这些 原著小说。昨日早上5时 Reference: 许,在距离爆炸现场南侧不到400米处的天津港 进口 Prediction: 许,在距离爆炸现场南侧不到400米处的天津港 进口 Chinese MT5-BASE Prompt: 也反映了国内垂直电商的困境<extra_id_0> 平 台型电商,垂直电商的<extra_id_1> 了,很难形 成核心壁垒。”他说到。途棋牌在外贸方面,广 东全年进出口顺差为1.54万亿元,出口增速快于 进口4.5个百分点;一般贸易<extra_id_2> 比重 为49.0%,比上年提高2.0个百分点。从区域看 <extra_id_3> 一带一路”沿线国家进出口总额增 长6.3%。途<extra_id_4> 傲头傲脑 Reference: <extra_id_0> 。“相对<extra_id_1> 获客成本 太高<extra_id_2> 占进出口总额的 <extra_id_3> ,对“<extra_id_4> 棋牌 Prediction: <extra_id_0> 。“相对<extra_id_1> 获客成本 太高<extra_id_2> 占进出口总额的 <extra_id_3> ,对“<extra_id_4> 棋牌Japanese MGPT-101 Prompt: 最高です。義実家の姑・義姉は良い人なので すが、クーポンの服には出費を惜しまないた めおすすめしていないと 大変です。自分が惚 れ込んだ物は用品が合わなくたって「いつか 着れる」と買ってしまうので、用品がドンピ シャの頃には収納に埋もれていたり、出して もアウトドアテーブル120 80だって着たがら ないんですよね。オーセンティックな感じの 商品の服だと品質さえ良ければクーポンのこ とは考えなくて済むのに、カードの趣味 Reference: や私の反対意見などには 耳も貸さずに購入す るため、 Prediction: や私の反対意見などには 耳も貸さずに購入す るため、 Japanese MT5-BASE Promp t の無料を聞いていない <extra_id_0>。用品が 話しているときは 夢中になるくせに、用品が 念を押したことや 予約 <extra_id_1>てしまう ようです。アウトドアテーブル120 80だって 仕事だってひと 通りこなしてきて、クーポン がないわけではないのですが 、ポイントもな い様子で、パソコンがいまいち 噛み合わない のです。クーポンが<extra_id_2>言いません が、サービスの妻はその傾向が強いです。夏 日になる日も増えてきましたが、私は昔から モバイル <extra_id_3>ダメで <extra_id_4>。この用品 Reference: <extra_id_0>と感じることが多いです <extra_id_1>はなぜか 記憶から落ち <extra_id_2>みんなそうだとは <extra_id_3>が<extra_id_4> 湿疹が出てしま います Prediction: <extra_id_0>と感じることが多いです <extra_id_1>はなぜか 記憶から落ち <extra_id_2>みんなそうだとは <extra_id_3>が<extra_id_4> 湿疹が出てしま います 16 B.6 Example of Unstable generation Unstable Generation Example ( MT5- LARGE ) Reference: <extra_id_0> beneficiaries of <extra_id_1> the <extra_id_2> the bond <extra_id_3> , agreeing to invest <extra_id_4> $56.6 million in Predicted: <extra_id_0> . public bond.. mill for parents school students vote mill projects B.7 Corpus Distribution Figure 9: MGPT-101 &MT5family analyzed language tokens distribution. The Languages marked with *have fewer than 50,000 sampled examples, averaging 33,960 examples per language. Figure 10: MGPT-61 (1.3B & 13 B) family analyzed language tokens distribution. The language marked with *has fewer than 50,000 sampled examples, with a total of 17,339 examples. 17 | https://arxiv.org/abs/2505.15722v1 |
arXiv:2505.15727v1 [cs.CL] 21 May 2025

VocalBench: Benchmarking the Vocal Conversational Abilities for Speech Interaction Models

Heyang Liu1,2*, Yuhao Wang1,2*, Ziyang Cheng3, Ronghua Wu2, Qunshan Gu2, Yanfeng Wang1, Yu Wang1†
1Shanghai Jiao Tong University, 2Ant Group, 3Wuhan University
{liuheyang, colane, wangyanfeng622, yuwangsjtu}@sjtu.edu.cn, {r.wu, guqunshan.gqs}@antgroup.com, icelookgoose@gmail.com
*Equal contribution. †Corresponding author.

Abstract

The rapid advancement of large language models (LLMs) has accelerated the development of multi-modal models capable of vocal communication. Unlike text-based interactions, speech conveys rich and diverse information, including semantic content, acoustic variations, paralanguage cues, and environmental context. However, existing evaluations of speech interaction models predominantly focus on the quality of their textual responses, often overlooking critical aspects of vocal performance and lacking benchmarks with vocal-specific test instances. To address this gap, we propose VocalBench, a comprehensive benchmark designed to evaluate speech interaction models' capabilities in vocal communication. VocalBench comprises 9,400 carefully curated instances across four key dimensions: semantic quality, acoustic performance, conversational abilities, and robustness. It covers 16 fundamental skills essential for effective vocal interaction. Experimental results reveal significant variability in current model capabilities, each exhibiting distinct strengths and weaknesses, and provide valuable insights to guide future research in speech-based interaction systems. Code and evaluation instances are available at https://github.com/SJTU-OmniAgent/VocalBench.

1 Introduction

Recent breakthroughs in large language models (LLMs) have revolutionized natural language understanding and generation, extending their capabilities from text-only interactions to multi-modal communication, including speech (Ji et al., 2024). Speech-based interaction offers a richer and more natural interface than text, as it inherently conveys diverse layers of information such as semantic content, acoustic nuances, paralinguistic signals, and environmental context. Despite this promise, some existing speech interaction systems follow a cascade design, chaining automatic speech recognition (ASR) and text-to-speech (TTS) modules to bridge spoken input and output (Shen et al., 2023; An et al., 2024; Huang et al., 2024). While effective in delivering fluent vocal exchanges, these modular pipelines suffer from several critical drawbacks: they ignore subtle paralinguistic cues essential for natural communication and introduce latency that impairs real-time interaction.

To address these limitations, end-to-end speech interaction models have emerged. By integrating speech encoders that discretize raw audio signals with vocoders for speech synthesis, these systems enable LLMs to process and generate speech directly, bypassing intermediate text representations (Fang et al., 2024; Wang et al., 2024, 2025b). This paradigm shift is further accelerated by the rise of omni-modal LLMs equipped with specialized encoders for images and videos (Li et al., 2025; OpenBMB, 2025; Xu et al., 2025), all maintaining vocal interaction as a core capability. Such integrated architectures promise more coherent, context-aware, and natural vocal communication. | https://arxiv.org/abs/2505.15727v1 |
On the other hand, the rapid progress in LLM evaluation has yielded unified frameworks assessing diverse capabilities (Hendrycks et al., 2021; Suzgun et al., 2023; Du et al., 2025). Despite these advances, systematic evaluation of speech interaction models remains severely underdeveloped, and existing benchmarks focus almost exclusively on text outputs. | https://arxiv.org/abs/2505.15727v1 |
As illustrated in Figure 1, speech interaction models must excel not only in semantic understanding but also in producing speech responses with natural acoustic qualities, fluent and efficient conversations, and robustness against noisy or adverse acoustic environments.

Figure 1: Key ability dimensions of speech interaction models (accurate and vivid semantics; spontaneous and natural acoustics; text-speech response alignment; seamless, low-latency user experience; single- and multi-round instruction following; emotion awareness; safety alignment; and robustness under noise, far field, reverberation, packet loss, and clipping distortion).

Current speech evaluation benchmarks (Yang et al., 2024; Chen et al., 2024b; Li et al., 2025) suffer from critical limitations: (1) evaluation primarily targets speech-to-text transcription accuracy, neglecting speech generation quality; (2) insufficient measurement of speech naturalness, expressiveness, and paralinguistic factors; (3) ambiguous or narrow ability categorization failing to reflect the multi-faceted nature of vocal communication; and (4) a lack of speech-specific, real-world conversational scenarios. These deficiencies severely limit our ability to understand and improve the full spectrum of speech interaction capabilities.

In response, we propose VocalBench, the first large-scale, multi-dimensional benchmark specifically designed to rigorously quantify the vocal conversational abilities of speech interaction models. VocalBench encompasses 9,400 carefully curated instances covering a comprehensive set of capabilities: general knowledge, reasoning, creativity, emotional empathy, single- and multi-turn dialogue, safety alignment, instruction following, acoustic speech quality, and robustness under challenging acoustic conditions. Beyond prior benchmarks, VocalBench uniquely incorporates speech-specific conversational scenarios, including emotionally charged dialogues and responses conditioned on speaking style and attributes, thus closely mirroring real-world vocal communication demands.

Our contributions can be summarized as follows:

• We introduce VocalBench, the most comprehensive and systematic evaluation framework that holistically assesses both the linguistic and acoustic dimensions of speech interaction models, filling a critical gap in the field.

• We conduct extensive experiments on a diverse set of state-of-the-art open-source speech interaction models, ranging from lightweight LLM backbones with fewer than 1 billion parameters to mid-scale models around 7 billion parameters. Our fine-grained evaluations reveal each model's distinct strengths and weaknesses, offering actionable guidance for future research and practical development.

2 VocalBench Overview

As shown in Table 1, VocalBench consists of evaluations over 4 dimensions and 16 abilities. For the semantic dimension, VocalBench is divided into 3 basic tasks covering knowledge, reasoning, and creativity. Acoustic performance is evaluated using UTMOS (Saeki et al., 2022), a widely adopted MOS predictor, and word error rate (WER) to quantify the alignment of responses across the text and speech modalities. For speech interaction models, the text response is regarded as the transcription, and the transcribed speech response should be consistent with its corresponding text response. | https://arxiv.org/abs/2505.15727v1 |
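This alignment check amounts to transcribing the spoken response and scoring it against the model's own text response. A minimal sketch using openai-whisper and jiwer follows; the checkpoint size is illustrative, and any text normalization the benchmark may apply is omitted.

```python
# Sketch: text-speech consistency as WER between the model's text response
# and the transcription of its speech response.
import whisper
import jiwer

asr = whisper.load_model("base")   # illustrative checkpoint size

def response_alignment_wer(text_response: str, speech_wav: str) -> float:
    hyp = asr.transcribe(speech_wav)["text"]
    # lower WER = cleaner speech, better text-speech alignment
    return jiwer.wer(text_response, hyp)
```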
This calculation is performed on the single-round set of the chat dimension to reduce cost. The chat dimension of VocalBench contains another four abilities: multi-round conversation, instruction following, emotion awareness, and safety alignment. Latency is estimated by computing the real-time factor (RTF) on the single-round set. | https://arxiv.org/abs/2505.15727v1 |
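RTF is simply wall-clock generation time divided by the duration of the produced audio, so a value below 1 means faster-than-real-time generation. A sketch, where `generate_fn` is a hypothetical stand-in for a model's speech-generation call:

```python
# Sketch: real-time factor (RTF) for a speech interaction model.
import time

def real_time_factor(generate_fn, query_wav: str) -> float:
    start = time.perf_counter()
    audio, sample_rate = generate_fn(query_wav)   # hypothetical model call
    elapsed = time.perf_counter() - start
    return elapsed / (len(audio) / sample_rate)   # rtf < 1: faster than real time
```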
In terms of robustness evaluation, VocalBench refers to VoiceBench (Chen et al., 2024b) and conducts experiments on two types of noise, reverberation, far field, packet loss, and clipping distortion. To enable more detailed ability assessment, we also provide multi-granular annotations, such as topics in the knowledge tests and question categories in reasoning and creativity. Sample questions of each type are listed in Appendix F. | https://arxiv.org/abs/2505.15727v1 |
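As one illustration of such perturbations, additive noise at a controlled signal-to-noise ratio can be applied to clean queries. The sketch below is a generic SNR-based mixer, not VoiceBench's actual implementation:

```python
# Sketch: mix a noise signal into clean speech at a target SNR (in dB).
import numpy as np

def add_noise(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = np.resize(noise, speech.shape)         # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # scale noise so that 10*log10(p_speech / p_noise_scaled) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```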
For evaluation sets with additional requirements, extra examination is introduced. For example, in the construction of the emotion-aware set, speech emotion recognition with emotion2vec_plus_large (Ma et al., 2024a) is conducted until the recognized type meets the requirement. The synthesized speech queries preserve details that previous work has not focused on, such as the correct pronunciation of named entities, acronyms, and numerical expressions, and speech with specific emotions. On the response side, Qwen2.5-max is utilized to generate the necessary references and to evaluate the semantic performance of complex questions, following MinMo (Chen et al., 2025).

3.1 Semantic Quality

Semantic quality evaluation assesses the semantic accuracy and wording quality of the responses. Previous benchmarks mainly focus on the general knowledge preserved while ignoring other aspects (Li et al., 2025), and in some research the speech queries are synthesized in-house, hindering fair comparison (Défossez et al., 2024; Wang et al., 2024; Chen et al., 2025). VocalBench integrates existing knowledge assessment datasets and constructs a comprehensive evaluation corpus covering 13 topics. In addition, we include reasoning and literary creativity tests, which have been missing from previous evaluations.

3.1.1 General Knowledge

The possession of general knowledge is one of the important capabilities of speech interaction models. Previous evaluations (Chen et al., 2025; Li et al., 2025) have been conducted on LLaMA Questions (Nachmani et al., 2024), Web Questions (Berant et al., 2013), and TriviaQA (Joshi et al., 2017). The experiments are typically conducted on the same text, but the lack of a speech version of the user queries prevents a fair comparison. To form a unified, comprehensive, and detailed benchmark, we selected instances from the above-mentioned corpora and added SciQ (Welbl et al., 2017) to balance the proportion of the various topics. The data processing is as follows (a sketch of the core filter appears below): 1) we first retain examples with clean and brief answers, usually presented as words or phrases; 2) we query Qwen2.5-max and detect whether its generated response contains the complete answer; 3) we synthesize speech with CosyVoice, transcribe it with Whisper, and preserve only the questions whose answers within the response can be correctly recognized; 4) we use Qwen2.5-max to label the topic of each question and equilibrate the proportion of each type, resulting in a balanced general knowledge evaluation set of 2,000 instances across 13 diverse topics. The purpose of the third step is to ensure that the keywords would be correctly transcribed in clean speech responses, thereby mimicking speech recognition errors in the speech-to-speech (s2s) evaluation.
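Steps 2) and 3) combine into one filter, sketched below. `llm_generate`, `synthesize`, and `transcribe` are hypothetical stand-ins for Qwen2.5-max, CosyVoice, and Whisper respectively, and the substring containment test is a simplification of the actual (partly manual) criterion.

```python
def keep_knowledge_item(question: str, answer: str,
                        llm_generate, synthesize, transcribe) -> bool:
    """Keep an instance only if (a) the text LLM's response contains the
    gold answer, and (b) the answer is still recognizable after the
    response is synthesized and transcribed (TTS + ASR round-trip)."""
    response = llm_generate(question)              # step 2: Qwen2.5-max response
    if answer.lower() not in response.lower():
        return False
    transcript = transcribe(synthesize(response))  # step 3: CosyVoice + Whisper
    return answer.lower() in transcript.lower()
```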
3.1.2 Complex Reasoning

When dealing with logical operations or mathematical queries, a speech interaction model should demonstrate steady reasoning ability to solve complex practical problems. However, reasoning evaluation is more common in the text modality, and the lack of a widely adopted speech test has hindered the development of this field. We selected 8 common reasoning types: analogical, causal, common sense, conflicts, deductive and inductive, hypothesis, math, and story analysis. The instances are transformed into spoken format from the source queries in CommonsenseQA (Talmor et al., 2019), gsm8k (Cobbe et al., 2021), and STORAL (Guan et al., 2022), together with instances generated by Qwen2.5-max or self-made. The references have also been processed into colloquial language. For LLM-generated instances, we conducted a complete manual check to ensure quality.

3.1.3 Literary Creativity

A speech interaction model should preserve the ability to complete literary creation with specific stylistic features. Unlike the text modality, we removed the evaluation of code generation and of specific written formats such as theses and JSON. The creativity evaluation of VocalBench contains four basic categories: poem, narrative, argumentative, and descriptive. In addition, we formed an appreciation test, where the model is asked to explain the wonderfulness of a famous line, as well as a role-playing evaluation, where the question specifies two roles and the model is instructed to play one of them, speaking directional content to achieve the final purpose. This part mainly comes from writing guidance on the web, rewritten into complete questions in colloquial language.

3.2 Acoustic

To evaluate the acoustic quality of speech responses, the speech interaction models are run in the most common conversational scenarios, which are also used as the single-round evaluation in the chat dimension. Speech quality is measured by two indicators: UTMOS (Saeki et al., 2022) measures the overall quality of speech, representing fluency and naturalness, and WER measures the consistency of the generated text and speech responses. Since existing speech interaction models generate aligned text and speech responses, the WER is computed to indicate speech clarity, as cleaner speech is closer to the generated text after transcription.
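The clarity indicator can be computed with the standard jiwer package, taking the model's own text response as the reference and an ASR transcript of its speech response as the hypothesis. A sketch (the normalization below is ours, not necessarily the benchmark's exact text preprocessing):

```python
import re
import jiwer  # pip install jiwer

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so only word identity is compared.
    return " ".join(re.sub(r"[^a-z0-9' ]", " ", text.lower()).split())

def clarity_wer(text_response: str, speech_transcript: str) -> float:
    """WER between the generated text (reference) and the transcript of
    the generated speech (hypothesis); lower WER implies clearer speech."""
    return jiwer.wer(normalize(text_response), normalize(speech_transcript))
```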
3.3 Chat

Along with serving as a voice assistant expert in knowledge, reasoning, and creation, speech interaction models typically need to implement chatbot functions. VocalBench systematically evaluates the model's conversational capabilities in five typical scenarios: single-round chat, multi-round chat, safety alignment, instruction following, and emotion awareness, which refers to the ability to understand and empathize with query emotions and respond with proper tones.

3.3.1 Single Round

In the evaluation of single-round chat, VocalBench contains two types of query formats. Suggestion queries require the models to make corresponding suggestions or detailed solutions for specific premises, while guidance queries require an explanation of how to complete a certain routine. The test instances in this part are mainly derived from AlpacaEval (Li et al., 2023) and are presented as a set of representative interaction scenarios. They are also used to quantify the acoustic quality of the speech response and serve as the clean data for robustness testing.

3.3.2 Multi-round

In the conversational setting, the modeling of context information dramatically affects response quality. To evaluate the multi-round chat ability of speech interaction models, we reconstructed 400 test instances from mt-bench-101 (Bai et al., 2024). The data creation pipeline includes deleting instances with special tokens that are difficult to express in spoken language, text normalization, and finally covering the 13 categories defined by mt-bench-101, including context memory, content confusion, etc. The questions and answers of some samples were lightly modified to conform to common conversation scenarios. To evaluate the context incorporation ability of speech interaction models, the real text of the previous rounds is treated as context to provide correct history information.

3.3.3 Instruction Following

For interaction models, the ability to correctly understand and execute user instructions is a prerequisite for efficient communication. Unlike the text modality, vocal interaction scenarios include instructions on the paralinguistic attributes of responses, such as speaking speed and emotion, while rarely requiring long responses or special formats such as long passages and code. To form a comprehensive and wide-ranging instruction-following evaluation set, we borrowed the concepts, division, and specific categories from InFoBench (Qin et al., 2024), IFEval (Zhou et al., 2023), FollowBench (Jiang et al., 2024), and SpeechInstructBench (Wang et al., 2025a). Following the division of FollowBench, VocalBench consists of subsets related to content, style, situation, and format, plus an additional set unique to speech interaction models: paralinguistic. There are 16 specific types of evaluation, as shown in Table 17.

3.3.4 Emotion Aware

A speech interaction model can generate customized speech responses by sensing the user's emotions. For example, when the user is talking about a sad experience, the model should generate comforting sentences with a sad tone rather than an angry, surprised, or happy voice. Following the setting of Gao et al. (2025), we selected the most common emotions (neutral, angry, happy, sad, and surprised) and developed an evaluation subset with 100 instances for each emotion. A text prompt is derived from an emotional Twitter message (Saravia et al., 2018), and Qwen2.5-max is instructed to generate a user query with a specific experience. For the "surprised" type, we adopt CosyVoice 2 (Du et al., 2024b) to generate emotional speech queries, and for the remaining categories we perform zero-shot speech synthesis with a random speech prompt from CREMA-D (Cao et al., 2014). The speech emotion probability is obtained with the pre-trained emotion2vec_plus_large (Ma et al., 2024b), and only synthesized speech queries with a probability over 0.8 are preserved. To quantify the speech response quality, the emotion score combines a semantic score generated by Qwen2.5-max and an acoustic score that accounts for the emotion of the response speech. The acoustic score is calculated as

S_A(\mathrm{Emotion}) = \sum_{i \in E} p_i s_i \qquad (1)

where E is the set of considered emotions, p_i is the emotion probability of the response, and s_i is the score rated by Qwen2.5-Omni for emotion i given the emotional query. The overall emotion-aware score is the sum of the acoustic score and the semantic score in the s2s setting.
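Eq. (1) is a probability-weighted average of per-emotion ratings. A direct sketch (key names are illustrative; the probabilities come from emotion2vec_plus_large and the ratings s_i from Qwen2.5-Omni):

```python
def acoustic_emotion_score(emotion_probs: dict, emotion_ratings: dict) -> float:
    """Eq. (1): S_A = sum over i in E of p_i * s_i, where p_i is the
    emotion probability of the response speech and s_i the rating
    assigned to emotion i for this query."""
    return sum(p * emotion_ratings[e] for e, p in emotion_probs.items())

# Example: a response judged 70% sad, 30% happy against ratings
# {"sad": 5, "happy": 1} yields 0.7 * 5 + 0.3 * 1 = 3.8.
```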
3.3.5 Safety Alignment

Safety alignment ensures that model behaviour is consistent with human values, ethical norms, and safety standards. We selected 400 instances for safety testing², covering dangerous user queries related to violent tendencies, illegal inducement, potential discrimination issues, etc. For this subset, the model should identify the user's intentions and refuse to answer the questions or dissuade the user.

²https://huggingface.co/datasets/gagan3012/SafetyTraining

3.3.6 Latency

The requirement for low latency has become a crucial factor in effective and seamless communication. Existing models usually adopt optimization techniques to improve response speed, including efficient voice activity detection (VAD) to detect query boundaries, streaming decoding methods, and chunk-wise speech generation. These schemes hinder uniform comparison because they involve different trade-offs between computation time and accuracy. Instead, we report the real-time factor (RTF) for non-streaming speech generation, providing a relatively fair comparison of the computational overhead and response time of each system when generating speech responses to the same questions.

3.4 Robustness

Variations in background environments and recording conditions present a major challenge. In Chen et al. (2024b), the interference is categorized into far-field, clipping distortion, reverberation, packet loss, and noise. As their evaluation set has not been released, we follow their approach to form a transparent robustness test set. We randomly select 100 instances from the single-round chat set and synthesize various acoustic environments. The data construction method is listed in Appendix B; a simplified sketch of some of the degradations follows.
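The sketch below covers three of the five degradations on a mono waveform using NumPy only; far-field low-pass filtering and RT60 reverberation additionally require a filter design and room impulse responses, so they are omitted. Frame size and parameter choices are illustrative.

```python
import numpy as np

def add_white_noise(wave: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix Gaussian noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wave + np.random.randn(len(wave)) * np.sqrt(noise_power)

def clip_distortion(wave: np.ndarray, threshold: float) -> np.ndarray:
    """Hard-clip the waveform to emulate overloaded microphones."""
    return np.clip(wave, -threshold, threshold)

def packet_loss(wave: np.ndarray, drop_rate: float, frame: int = 320) -> np.ndarray:
    """Zero out random fixed-size frames to emulate lost packets."""
    out = wave.copy()
    for start in range(0, len(out), frame):
        if np.random.rand() < drop_rate:
            out[start:start + frame] = 0.0
    return out
```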
4 Evaluation Performance

The performance of various models on VocalBench is shown in Figure 3 and Table 2. The calculation of the overall score is listed in Appendix C.

[Figure 3: The evaluated performance of leading speech interaction models on VocalBench, across the semantic, acoustic, chat, and robustness abilities.]

Model | Size | Knowledge Acc. (%) | Reasoning Score | Creativity Score | Fluency (UTMOS) | Clarity (WER, %) | Single Round Score | Multi Round Score | IF FR (%) | Emotion Score | Safety RR (%) | Latency (RTF) | Robust (avg. PR, %) | Overall
Mini-Omni | 0.5B | 2.20 | 1.291 | 1.4725 | 4.435 | 19.571 | 1.645 | - | 0.00 | 5.428 | 81.25 | 0.3781 | 84.14 | 40.646
Mini-Omni2 | 0.5B | 4.65 | 1.501 | 1.8025 | 4.413 | 36.269 | 1.915 | - | 0.11 | 5.709 | 88.50 | 0.2001 | 82.26 | 43.224
SLAM-Omni | 0.5B | 12.05 | 1.875 | 2.5175 | 4.424 | 6.065 | 2.880 | 1.9800 | 3.11 | 6.452 | 90.25 | 0.4925 | 77.91 | 54.649
VocalNet-1B | 1B | 43.00 | 2.869 | 3.1800 | 4.437 | 5.123 | 3.335 | 3.2550 | 16.11 | 6.754 | 89.00 | 0.1632 | 92.42 | 66.632
LLaMA-Omni | 8B | 37.40 | 2.591 | 2.8475 | 3.959 | 2.842 | 3.300 | 3.1525 | 14.89 | 6.128 | 27.75 | 0.0958 | 83.59 | 57.107
Freeze-Omni | 7B | 44.25 | 3.530 | 2.8850 | 4.381 | 11.460 | 2.960 | - | 12.05 | 6.164 | 86.50 | 0.2618 | 65.25 | 58.362
Baichuan-Omni-1.5 | 7B | 49.85 | 3.770 | 3.5900 | 4.014 | 23.452 | 3.840 | - | 28.89 | 5.424 | 83.00 | 1.4900 | 74.85 | 60.239
GLM-4-Voice | 9B | 56.40 | 3.641 | 3.2900 | 3.869 | 11.565 | 3.615 | 3.7300 | 31.67 | 6.904 | 71.50 | 0.7870 | 57.10 | 61.388
Kimi-Audio | 7B | 62.15 | 3.132 | 3.0950 | 2.360 | 38.001 | 3.150 | 3.5350 | 48.59 | 6.838 | 83.75 | 0.7331 | 93.20 | 62.382
MiniCPM-o 2.6 | 7B | 70.00 | 3.648 | 3.3550 | 4.054 | 18.735 | 3.165 | 3.6675 | 30.00 | 7.080 | 83.25 | 0.4509 | 87.27 | 63.886
Qwen2.5-Omni | 7B | 69.50 | 4.361 | 3.1825 | 4.174 | 1.154 | 3.538 | 4.0125 | 27.00 | 6.386 | 71.75 | 1.7243 | 91.86 | 73.327
VocalNet-8B | 8B | 67.95 | 3.748 | 3.5050 | 4.449 | 4.686 | 3.530 | 3.9175 | 35.89 | 7.117 | 92.25 | 0.2496 | 92.66 | 74.639
Table 2: Evaluation performance of 12 models on VocalBench. Bold indicates the optimal result in each subgroup.

4.1 Evaluated Models

VocalBench is evaluated on 12 models. The tiny models (<1B) include Mini-Omni (Xie and Wu, 2024a), Mini-Omni2 (Xie and Wu, 2024b), SLAM-Omni (Chen et al., 2024a), and VocalNet-1B (Wang et al., 2025b). For base models (~7B), VocalBench is evaluated on 5 speech models, LLaMA-Omni (Fang et al., 2024), Freeze-Omni (Wang et al., 2024), GLM-4-Voice (Zeng et al., 2024), Kimi-Audio (Ding et al., 2025), and VocalNet-8B, along with 3 omni models, Baichuan-Omni-1.5 (Li et al., 2025), MiniCPM-o 2.6 (OpenBMB, 2025), and Qwen2.5-Omni (Xu et al., 2025), which can handle text, speech, and image inputs.

4.2 Semantic

Evaluations of knowledge ability indicate the benefit of incorporating larger backbones. Among tiny models, only VocalNet-1B achieves a response accuracy of 43.00%, far exceeding the other three. Among base-sized models, omni models are more likely to reach a higher level of knowledge, like MiniCPM-o (70.00%) and Qwen2.5-Omni (69.50%), probably owing to their extensive multimodal training data. VocalNet-8B achieves the best performance among speech models, with an accuracy of 67.95%. The knowledge level of VocalNet-1B exceeds that of LLaMA-Omni, the only case where a tiny-sized model exceeds a base-sized one. To better demonstrate the strengths and drawbacks of each model, we report the accuracy on each topic in Table 4; the purpose of this breakdown is to help model developers make targeted improvements based on each model's defects. Qwen2.5-Omni leads on four topics, VocalNet-8B on five, and MiniCPM-o takes six first places. Qwen2.5-Omni shows superiority in biology, chemistry, geography, and physics, demonstrating its potential in science.

In the reasoning evaluation, VocalNet-1B and Qwen2.5-Omni achieve the best performance within their backbone sizes. Qwen2.5-Omni reaches a score of 4.361, capable of logical reasoning and basic mathematical problems, far exceeding the other models. Baichuan-Omni-1.5, GLM-4-Voice, VocalNet-8B, and MiniCPM-o achieve similar performance around 3.7. Mathematical reasoning is a special case involving calculations: some models (e.g., Kimi-Audio) retain the LLM backbone's tendency to output complex formulas, which cannot be transcribed into spoken format, resulting in a significant drop on the s2s test. The math subset and overall s2t and s2s evaluation results are reported in Table 6.

In the creativity test set, Baichuan-Omni-1.5 and VocalNet-8B achieved scores above 3.5, and VocalNet-1B achieved 3.18, surpassing several base-sized models. As long responses rarely appear in speech interaction scenarios, our tests are mainly performed in the form of short paragraphs. In this setting, the well-trained tiny-sized models
preserve the potential to emerge as writing assistants.

4.3 Acoustic

On UTMOS, experiments show that the speech quality of speech models is better than that of omni models. The naturalness and fluency of tiny models are generally outstanding, indicating sufficient speech generation capabilities. Both versions of VocalNet achieve the best performance among models of their respective parameter sizes, indicating a leading speech modeling method and acoustic quality in the speech responses.

Ideally, the two response modalities of a speech interaction model should be aligned. However, we find in our experiments that alignment remains a great challenge for most existing models, especially on open-ended questions, where they tend to generate long text answers while the corresponding speech output often suffers truncation or missing sentences. These drawbacks result in an abnormally high WER, making it difficult to accurately measure the clarity of speech pronunciation. Within our evaluation scope, SLAM-Omni, LLaMA-Omni, Qwen2.5-Omni, and both versions of VocalNet achieve superior alignment; the other models urgently need improvement.

4.4 Chat

For single-round chat, as shown in Table 2, Baichuan-Omni-1.5 achieves the highest performance under Qwen2.5-max evaluation. For multi-turn conversational ability, we conducted experiments on 8 models: for Mini-Omni and Mini-Omni2, no context history is included at inference, and for Freeze-Omni and Baichuan-Omni-1.5, only audio history is available. Among the tested models, Qwen2.5-Omni shows the strongest ability to utilize context, slightly ahead of VocalNet-8B. For instruction following, Kimi-Audio passes 48.59% of the test instances, the best level current models achieve. Existing models share three common problems: 1) paralinguistic control is difficult to achieve; 2) semantic generation capabilities are clearly lacking, making it hard to generate responses that meet requirements such as style and format; 3) in settings with multiple requirements, omissions often occur.
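For the multi-round evaluation, Section 3.3.2 supplies the ground-truth text of earlier rounds as history, so context errors do not compound across turns. A minimal sketch of assembling one evaluation turn; the message schema is illustrative, and actual models accept mixed text/audio history in different formats:

```python
def build_multiround_input(history, current_query_audio):
    """history: list of {'question_text': ..., 'reference_text': ...}
    for the previous rounds; the current round is posed as speech."""
    messages = []
    for turn in history:
        messages.append({"role": "user", "content": turn["question_text"]})
        messages.append({"role": "assistant", "content": turn["reference_text"]})
    messages.append({"role": "user", "audio": current_query_audio})
    return messages
```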
[Figure 4: The robustness performance under far-field low-pass frequency, clipping distortion threshold, reverberation time (RT60), white and background noise SNR, and packet loss rate. The dotted lines represent the scores under clean conditions of each model.]

As shown in Table 7, the emotion-aware set is evaluated on both semantic content and paralinguistic attributes. VocalNet-8B leads on semantic empathy, meaning it can respond appropriately to emotionally charged messages when the user shows obvious tendencies. Kimi-Audio achieves the best paralinguistic score, indicating that its response voice carries the proper emotion. The emotion-aware examples show that although some models can generate anthropomorphic emotional speech, their empathy ability needs further improvement: for instance, when the user poses an angry query, some models respond in a happy tone, which violates the empathy principle. We provide an evaluation case from the emotion-aware set in Appendix E to more intuitively compare the response style and empathy performance.
In terms of safety alignment, most models achieve high scores; SLAM-Omni and VocalNet-8B are relatively safe and reject more than 90% of the dangerous questions. LLaMA-Omni is the only model that fails most tests, rejecting only 27.75% of the examples. For latency, all experiments were conducted on a single NVIDIA L20 GPU, except Kimi-Audio, which used an NVIDIA A100 GPU due to its memory usage. Most models preserve the potential for real-time interaction, except Baichuan-Omni-1.5 and Qwen2.5-Omni in our non-streaming setting.

4.5 Robustness

Model performance under different levels of interference is shown in Figure 4. Most models preserve a certain degree of resistance to interference, while showing varying degrees of performance degradation in harsh environments. We selected six points with moderate interference, as shown in Table 8. In Table 2, we provide the average preserve rate (PR) under these six conditions, which represents the proportion of the clean-query score that is retained.

5 Conclusion

Addressing the lack of evaluation frameworks for generative speech interaction models, we propose VocalBench, a comprehensive benchmark to assess vocal communication abilities. VocalBench spans 4 dimensions, covering 9,400 high-quality instances that systematically probe 16 basic abilities. We report evaluation results on 12 leading speech interaction models, including both speech models and omni models with different backbone sizes. The results provide an intuitive assessment of the capability limits of each model, and the multi-granularity division and annotation make it easier for developers to improve the corresponding capabilities in a targeted manner.

Limitations

The current version of VocalBench has the following limitations. First, although VocalBench incorporates model-based and manual examinations to ensure data quality, its speech queries are generated by text-to-speech methods, which may deviate from real communication scenarios. We therefore plan to incorporate real speech queries to enhance the universality of VocalBench. Second, VocalBench is constrained to English only, owing to the initial goal of covering as many approaches as possible, overlooking the multilingual ability of some models such as Kimi-Audio and Qwen2.5-Omni. Future work is expected to incorporate multilingual tasks into the evaluation system, such as instances in other languages, speech translation tasks, and conversation tests involving code switching. Furthermore, there remain tasks to be included in the VocalBench evaluation criteria, such as queries with diverse accents. We are committed to expanding the evaluation scope and capability dimensions, and to developing the leaderboard into a constantly updated, comprehensive assessment.

Ethical Considerations

All the pre-trained models in our paper are downloaded from publicly released HuggingFace model cards, and we strictly follow the user licenses. The data contained in our benchmark is collected from publicly available resources, generated by Qwen2.5-max following its instructions, or self-made. We minimize bias in the evaluation queries, and no real personal information is included. The benchmark is used only for academic research.

References
Keyu An, Qian Chen, Chong Deng, Zhihao Du, Changfeng Gao, Zhifu Gao, Yue Gu, Ting He, Hangrui Hu, Kai Hu, and others. 2024. Funaudiollm: Voice understanding and generation foundation models for natural interaction between humans and LLMs. arXiv preprint arXiv:2407.04051.

Philip Anastassiou, Jiawei Chen, Jitong Chen, Yuanzhe Chen, Zhuo Chen, Ziyi Chen, Jian Cong, Lelai Deng, Chuang Ding, Lu Gao, and others. 2024. Seed-TTS: A family of high-quality versatile speech generation models. arXiv preprint arXiv:2406.02430.

Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, and Zhizheng Wu. 2024. SD-Eval: A benchmark dataset for spoken dialogue understanding beyond words. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common Voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218–4222.

Ge Bai, Jie Liu, Xingyuan Bu, Yancheng He, Jiaheng Liu, Zhanhui Zhou, Zhuoran Lin, Wenbo Su, Tiezheng Ge, Bo Zheng, and others. 2024. MT-Bench-101: A fine-grained benchmark for evaluating large language models in multi-turn dialogues. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7421–7454.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.

Houwei Cao, David G Cooper, Michael K Keutmann, Ruben C Gur, Ani Nenkova, and Ragini Verma. 2014. CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE Transactions on Affective Computing, 5(4):377–390.

Qian Chen, Yafeng Chen, Yanni Chen, Mengzhe Chen, Yingda Chen, Chong Deng, Zhihao Du, Ruize Gao, Changfeng Gao, Zhifu Gao, and others. 2025. MinMo: A multimodal large language model for seamless voice interaction. arXiv preprint arXiv:2501.06282.

Wenxi Chen, Ziyang Ma, Ruiqi Yan, Yuzhe Liang, Xiquan Li, Ruiyang Xu, Zhikang Niu, Yanqiao Zhu, Yifan Yang, Zhanxun Liu, and others. 2024a. SLAM-Omni: Timbre-controllable voice interaction system with single-stage training. arXiv preprint arXiv:2412.15649.

Yiming Chen, Xianghu Yue, Chen Zhang, Xiaoxue Gao, Robby T Tan, and Haizhou Li. 2024b. VoiceBench: Benchmarking LLM-based voice assistants. arXiv preprint arXiv:2410.17196.

Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, and others. 2024. Qwen2-Audio technical report. arXiv preprint arXiv:2407.10759.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, and Neil Zeghidour. 2024. Moshi: A speech-text foundation model for real-time dialogue. arXiv preprint arXiv:2410.00037.

Ding Ding, Zeqian Ju, Yichong Leng, Songxiang Liu, Tong Liu, Zeyu Shang, Kai Shen, Wei Song, Xu Tan, Heyi Tang, and others. 2025. Kimi-Audio technical report. arXiv preprint arXiv:2504.18425.
Xinrun Du, Yifan Yao, Kaijing Ma, Bingli Wang, Tianyu Zheng, King Zhu, Minghao Liu, Yiming Liang, Xiaolong Jin, Zhenlin Wei, and others. 2025. SuperGPQA: Scaling LLM evaluation across 285 graduate disciplines. arXiv preprint arXiv:2502.14739.

Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, and others. 2024a. CosyVoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407.

Zhihao Du, Yuxuan Wang, Qian Chen, Xian Shi, Xiang Lv, Tianyu Zhao, Zhifu Gao, Yexin Yang, Changfeng Gao, Hui Wang, and others. 2024b. CosyVoice 2: Scalable streaming speech synthesis with large language models. arXiv preprint arXiv:2412.10117.

Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. 2024. LLaMA-Omni: Seamless speech interaction with large language models. arXiv preprint arXiv:2409.06666.

Xiaoxue Gao, Chen Zhang, Yiming Chen, Huayun Zhang, and Nancy F Chen. 2025. Emo-DPO: Controllable emotional speech synthesis through direct preference optimization. In ICASSP 2025, 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.

Robert Gray, Andrés Buzo, Augustine Gray, and Yasuo Matsuyama. 1980. Distortion measures for speech processing. IEEE Transactions on Acoustics, Speech, and Signal Processing, 28(4):367–376.

Jian Guan, Ziqi Liu, and Minlie Huang. 2022. A corpus for understanding and generating moral stories. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5069–5087.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.

Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, and others. 2024. AudioGPT: Understanding and generating speech, music, sound, and talking head. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 23802–23804.

Shengpeng Ji, Yifu Chen, Minghui Fang, Jialong Zuo, Jingyu Lu, Hanting Wang, Ziyue Jiang, Long Zhou, Shujie Liu, Xize Cheng, and others. 2024. WavChat: A survey of spoken dialogue models. arXiv preprint arXiv:2411.13577.

Yuxin Jiang, Yufei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, and Wei Wang. 2024. FollowBench: A multi-level fine-grained constraints following benchmark for large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4667–4688.

Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611.

Kenichi Kumatani, John McDonough, and Bhiksha Raj. 2012. Microphone array processing for distant speech recognition: From close-talking microphones to far-field sensors. IEEE Signal Processing Magazine, 29(6):127–140.

Bong-Ki Lee and Joon-Hyuk Chang. 2015. Packet loss concealment based on deep neural networks for digital speech transmission. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(2):378–387.

Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.

Yadong Li, Jun Liu, Tao Zhang, Song Chen, Tianpeng Li, Zehuan Li, Lijun Liu, Lingfeng Ming, Guosheng Dong, Da Pan, and others. 2025. Baichuan-Omni-1.5 technical report. arXiv preprint arXiv:2501.15368.

Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, Shiliang Zhang, and Xie Chen. 2024a. emotion2vec: Self-supervised pre-training for speech emotion representation. Proc. ACL 2024 Findings.

Ziyang Ma, Zhisheng Zheng, Jiaxin Ye, Jinchao Li, Zhifu Gao, ShiLiang Zhang, and Xie Chen. 2024b. emotion2vec: Self-supervised pre-training for speech emotion representation. In Findings of the Association for Computational Linguistics ACL 2024, pages 15747–15760.

Eliya Nachmani, Alon Levkovitch, Roy Hirsch, Julian Salazar, Chulayuth Asawaroengchai, Soroosh Mariooryad, Ehud Rivlin, RJ Skerry-Ryan, and Michelle Tadmor Ramanovich. 2024. Spoken question answering and speech continuation using spectrogram-powered LLM. In The Twelfth International Conference on Learning Representations.

OpenBMB. 2025. MiniCPM-o 2.6: A GPT-4o level MLLM for vision, speech, and multimodal live streaming on your phone. https://openbmb.notion.site/185ede1b7a558042b5d5e45e6b237da9. Accessed: 2025-03-28.

Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, Pengfei Liu, and Dong Yu. 2024. InFoBench: Evaluating instruction following ability in large language models. In Findings of the Association for Computational Linguistics ACL 2024, pages 13025–13048.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR.

Takaaki Saeki, Detai Xin, Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, and Hiroshi Saruwatari. 2022. UTMOS: UTokyo-SaruLab system for VoiceMOS Challenge 2022. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, volume 2022, pages 4521–4525.

Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687–3697.

Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. Advances in Neural Information Processing Systems, 36:38154–38180.

David Snyder, Guoguo Chen, and Daniel Povey. 2015. MUSAN: A music, speech, and noise corpus. arXiv preprint arXiv:1510.08484.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc Le, Ed Chi, Denny Zhou, and others. 2023. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13003–13051.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158.
Changli Tang, Wenyi Yu, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, MA Zejun, and Chao Zhang. 2024. SALMONN: Towards generic hearing abilities for large language models. In The Twelfth International Conference on Learning Representations.

Dingdong Wang, Jin Xu, Ruihang Chu, Zhifang Guo, Xiong Wang, Jincenzi Wu, Dongchao Yang, Shengpeng Ji, and Junyang Lin. 2025a. InSerter: Speech instruction following with unsupervised interleaved pre-training. arXiv preprint arXiv:2503.02769.

Xiong Wang, Yangze Li, Chaoyou Fu, Yunhang Shen, Lei Xie, Ke Li, Xing Sun, and Long Ma. 2024. Freeze-Omni: A smart and low latency speech-to-speech dialogue model with frozen LLM. arXiv preprint arXiv:2411.00774.

Yuhao Wang, Heyang Liu, Ziyang Cheng, Ronghua Wu, Qunshan Gu, Yanfeng Wang, and Yu Wang. 2025b. VocalNet: Speech LLM with multi-token prediction for faster and high-quality generation. arXiv preprint arXiv:2504.04060.

Johannes Welbl, Nelson F Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 94–106.

Zhifei Xie and Changqiao Wu. 2024a. Mini-Omni: Language models can hear, talk while thinking in streaming. arXiv preprint arXiv:2408.16725.

Zhifei Xie and Changqiao Wu. 2024b. Mini-Omni2: Towards open-source GPT-4o with vision, speech and duplex capabilities. arXiv preprint arXiv:2410.11190.

Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, and others. 2025. Qwen2.5-Omni technical report. arXiv preprint arXiv:2503.20215.

Ruiqi Yan, Xiquan Li, Wenxi Chen, Zhikang Niu, Chen Yang, Ziyang Ma, Kai Yu, and Xie Chen. 2025. URO-Bench: A comprehensive benchmark for end-to-end spoken dialogue models. arXiv preprint arXiv:2502.17810.

Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, and others. 2024. AIR-Bench: Benchmarking large audio-language models via generative comprehension. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1979–1998.

Aohan Zeng, Zhengxiao Du, Mingdao Liu, Kedong Wang, Shengmin Jiang, Lei Zhao, Yuxiao Dong, and Jie Tang. 2024. GLM-4-Voice: Towards intelligent and human-like end-to-end spoken chatbot. arXiv preprint arXiv:2412.02612.

Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911.

A Related Works

A.1 Speech Interaction Models

Speech interaction models extend the conversational ability of LLMs to the speech modality. Earlier work typically focused on generating text responses from speech queries, optionally incorporating a text-to-speech (TTS) module to achieve cascaded speech-to-speech (s2s) interactions (Tang et al., 2024; Chu et al., 2024). Later, speech models that abandon the TTS module emerged, directly discretizing the speech query and tuning the LLM backbone to generate response speech tokens; transforming these tokens into a speech response with a vocoder achieves end-to-end speech interaction. In Chen et al. (2025), these approaches are divided into two main categories. Native multimodal models incorporate a unified decoder-only Transformer to generate interleaved text and speech responses, represented by GLM-4-Voice (Zeng et al., 2024) and Baichuan-Omni (Li et al., 2025).
Another type, aligned multimodal models, performs text and speech decoding with distinct heads to maintain the abilities of the LLM backbone. Most speech interaction models follow this paradigm, including Mini-Omni(2) (Xie and Wu, 2024a,b), LLaMA-Omni (Fang et al., 2024), and VocalNet (Wang et al., 2025b).

A.2 Speech Model Benchmarks

Several benchmarks have been proposed to evaluate the performance of speech models. Initial attempts developed comprehensive evaluations for speech understanding abilities, including AIR-Bench (Yang et al., 2024). These tests are not designed for speech interaction models, given their massive number of non-interactive tasks such as emotion recognition and speaker age prediction. Later, assessment frameworks targeting specific abilities were proposed. SD-Eval (Ao et al., 2024) contains speech queries with different paralinguistic attributes, such as emotion, accent, environment, and age; this information can lead to different replies. OpenAudioBench (Li et al., 2025) mainly focuses on the general knowledge level and open-ended question-answering performance, while SpeechInstructBench (Wang et al., 2025a) quantifies instruction-following abilities. VoiceBench evaluates the capability to act as a voice assistant, containing instances related to knowledge, safety, instruction following, etc. However, the aforementioned benchmarks follow the s2t principle, overlooking the quality of the generated speech response. URO-Bench is a recently proposed s2s benchmark covering speech understanding, reasoning, and oral conversational abilities. Although it incorporates a comprehensive set of capabilities typically required of a speech interaction model, it lacks a systematic and detailed division of capabilities, does not quantify the paralinguistic properties of the generated speech, and ignores the robustness of the model.

B Creation Method for Robustness Dimension

Far-field speech: the user's query weakens and the high-frequency components are attenuated (Kumatani et al., 2012). We apply low-pass filtering at various cut-off frequencies to simulate this phenomenon.
Clipping distortion: diverse thresholds are applied to simulate poor microphone quality and overly loud volume (Gray et al., 1980).
Reverberation: in a confined space, reverberation causes voice overlap. We select various reverberation times (RT60), the time required for the original sound to decay by 60 dB.
Packet loss: the speech is randomly dropped to simulate incomplete input due to packet loss in transmission (Lee and Chang, 2015).
Noise: two different types of noise are added to the clean speech query: white noise caused by equipment problems and noise simulating a noisy environment. For the latter, we randomly select noise from MUSAN (Snyder et al., 2015) as the background to simulate different acoustic environments.

C Overall Score Calculation

The overall score is calculated as

S_{\mathrm{Overall}} = \frac{S_{\mathrm{Knowledge}}}{10} + 2 S_{\mathrm{Reasoning}} + 2 S_{\mathrm{Creativity}} + S_{\mathrm{Fluency}} + f(S_{\mathrm{Clarity}}) + 2 S_{\mathrm{SingleRound}} + \frac{S_{\mathrm{IF}}}{10} + S_{\mathrm{Emotion}} + \frac{S_{\mathrm{Safety}}}{10} + \sum_{i=1}^{6} \frac{S_{\mathrm{Robust}_i}}{40} \qquad (2)

where S_I denotes the score for ability I in Table 2; for the robustness terms we use the Preserve Rate, capped at 100 in the calculation. The clarity score is mapped through

f(s) = \max\left(1, 10 - \left\lfloor \frac{s}{2} \right\rfloor\right) \qquad (3)
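Eqs. (2) and (3) translate directly into code. A sketch (the dictionary keys are ours; plugging in the Qwen2.5-Omni row of Table 2 reproduces its overall score of about 73.33):

```python
import math

def clarity_term(wer_percent: float) -> float:
    """Eq. (3): f(s) = max(1, 10 - floor(s / 2)) maps WER (%) to a score."""
    return max(1, 10 - math.floor(wer_percent / 2))

def overall_score(s: dict) -> float:
    """Eq. (2); s['robust'] holds the six Preserve Rates, each capped at 100."""
    robust = sum(min(pr, 100.0) for pr in s["robust"]) / 40.0
    return (s["knowledge"] / 10 + 2 * s["reasoning"] + 2 * s["creativity"]
            + s["fluency"] + clarity_term(s["clarity"])
            + 2 * s["single_round"] + s["if"] / 10
            + s["emotion"] + s["safety"] / 10 + robust)
```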
Benchmark | s2s | Task Number | Semantic | Acoustic | Chat | Robustness | Ability
AIR-Bench (Yang et al., 2024) | ✘ | 10 | ✘ | ✘ | ✘ | ✘ | Speech Understanding
SD-Eval (Ao et al., 2024) | ✘ | 4 | ✘ | ✘ | ✔ | ✘ | Paralinguistic
OpenAudioBench (en) (Li et al., 2025) | ✘ | 2 | ✔ | ✘ | ✔ | ✘ | Knowledge, Single Round
SpeechInstructBench (Wang et al., 2025a) | ✘ | 1 | ✘ | ✘ | ✔ | ✘ | Instruction Following
VoiceBench (Chen et al., 2024b) | ✘ | 8 | ✔ | ✘ | ✔ | ✔* | Reasoning, Chat
URO-Bench (Yan et al., 2025) | ✔ | 7 | ✔ | ✔ | ✔ | ✘ | Understanding, Reasoning, Oral Conversation
VocalBench (ours) | ✔ | 9 | ✔ | ✔ | ✔ | ✔ | Systematic Evaluation for Speech Interaction Models
*The robustness evaluation of VoiceBench has not been released.
Table 3: Comparison with other available speech LLM evaluation benchmarks.

Model | Art (168) | Biology (309) | Celebrity (193) | Chemistry (176) | Economics (81) | Geography (254) | History (145) | Literature (311) | Music (40) | Physics (155) | Psychology (38) | Society (110) | Sports (20) | Total (2000)
Mini-Omni | 0.60 | 2.59 | 0.00 | 9.09 | 4.94 | 2.36 | 0.69 | 0.32 | 0.00 | 2.58 | 0.00 | 2.73 | 0.00 | 2.20
Mini-Omni2 | 2.38 | 5.83 | 0.52 | 14.77 | 9.88 | 2.76 | 5.52 | 1.29 | 2.50 | 7.10 | 0.00 | 4.55 | 0.00 | 4.65
SLAM-Omni | 7.74 | 17.48 | 5.70 | 24.43 | 11.11 | 11.42 | 11.72 | 7.07 | 2.50 | 19.35 | 5.26 | 8.18 | 5.00 | 12.05
VocalNet-1B | 32.74 | 43.37 | 39.90 | 56.25 | 62.96 | 47.24 | 49.66 | 35.69 | 17.50 | 49.03 | 28.95 | 35.45 | 40.00 | 43.00
LLaMA-Omni | 34.52 | 40.78 | 20.73 | 51.70 | 43.21 | 44.09 | 35.86 | 31.51 | 17.50 | 50.32 | 36.84 | 30.00 | 20.00 | 37.40
Freeze-Omni | 36.90 | 51.13 | 30.57 | 57.95 | 56.79 | 51.18 | 43.45 | 33.76 | 27.50 | 57.42 | 23.68 | 38.18 | 45.00 | 44.25
Baichuan-Omni-1.5 | 42.86 | 59.87 | 35.23 | 57.39 | 56.79 | 47.24 | 52.41 | 46.62 | 32.50 | 61.94 | 55.26 | 41.82 | 40.00 | 49.85
GLM-4-Voice | 53.57 | 63.43 | 35.23 | 71.02 | 64.20 | 61.81 | 62.76 | 51.45 | 15.00 | 68.39 | 44.74 | 46.36 | 45.00 | 56.40
Kimi-Audio | 56.55 | 63.75 | 56.99 | 69.32 | 70.37 | 66.93 | 60.00 | 57.23 | 45.00 | 70.32 | 63.16 | 60.91 | 45.00 | 62.15
VocalNet-8B | 68.45 | 67.96 | 66.32 | 76.14 | 74.07 | 68.90 | 73.10 | 60.77 | 57.50 | 77.42 | 57.89 | 56.36 | 75.00 | 67.95
Qwen2.5-Omni | 63.69 | 78.32 | 55.96 | 83.52 | 76.54 | 74.41 | 69.66 | 61.41 | 40.00 | 80.00 | 60.53 | 60.91 | 65.00 | 69.50
MiniCPM-o | 68.45 | 75.73 | 58.03 | 76.70 | 77.78 | 72.83 | 67.59 | 63.99 | 57.50 | 77.42 | 65.79 | 71.82 | 60.00 | 70.00
Table 4: The performance of the knowledge ability on various topics. Bold indicates the optimal result in each subgroup. The number after each topic denotes the entries contained.

D Additional Performance Result

D.1 Knowledge Topic Results

The answer accuracy on each topic is shown in Table 4.

D.2 Knowledge Source Results

The knowledge subset is constructed from LLaMA Questions, Web Questions, TriviaQA, and SciQ. The answer accuracy on each source is shown in Table 5.

Model | LLaMA Q (80) | Web Q (181) | Trivia Q (1409) | SciQ (330) | Total (2000)
Mini-Omni | 18.75 | 2.76 | 0.85 | 3.64 | 2.20
Mini-Omni2 | 27.50 | 6.63 | 2.41 | 7.58 | 4.65
SLAM-Omni | 46.25 | 11.05 | 7.45 | 23.94 | 12.05
VocalNet-1B | 80.00 | 59.12 | 36.76 | 51.82 | 43.00
LLaMA-Omni | 67.50 | 39.23 | 32.36 | 50.61 | 37.40
Freeze-Omni | 71.25 | 50.28 | 38.96 | 56.97 | 44.25
Baichuan-Omni-1.5 | 62.50 | 48.62 | 46.70 | 60.91 | 49.85
GLM-4-Voice | 66.25 | 58.01 | 53.51 | 65.45 | 56.40
Kimi-Audio | 83.75 | 67.96 | 59.76 | 63.94 | 62.15
VocalNet-8B | 86.25 | 70.03 | 64.44 | 75.15 | 67.95
Qwen2.5-Omni | 87.50 | 74.03 | 65.22 | 80.91 | 69.50
MiniCPM-o 2.6 | 77.50 | 73.48 | 67.21 | 78.18 | 70.00
Table 5: The performance of the knowledge ability on various sources. Bold indicates
the optimal result in each subgroup. The numbers below each source denote the entries contained. D.3 Reasoning S2t and S2s Results The Qwen2.5-max evaluation performance of both s2t and s2s form on the math subset and the reason- ing set is shown in Table 6. The bold shows that compared with text responses, the semantic quality of speech responses significantly deteriorates.ModelMath Reasoning s2t s2s s2t s2s Mini-Omni 1.000 1.017 1.256 1.291 Mini-Omni2 1.030 1.057 1.474 1.501 SLAM-Omni 1.097 1.110 1.891 1.875 V ocalNet-1B 2.133 1.967 2.923 2.869 LLaMA-Omni 1.543 1.480 2.734 2.591 Kimi-Audio 4.720 1.523 4.272 3.132 Freeze-Omni 3.853 2.645 4.080 3.530 GLM-4-V oice 3.023 2.797 3.799 3.641 MiniCPM-o 2.6 4.243 2.363 4.278 3.648 V ocalNet-8B 3.230 3.057 3.817 3.748 Baichuan-Omni-1.5 4.063 3.220 4.248 3.770 Qwen2.5-Omni 4.803 4.560 4.437 4.361 Table 6: Math test and reasoning ability performance score in s2t and s2s setting. Bold indicates a significant performance deterioration in s2s evaluation. D.4 Emotion-aware Set Performance The evaluation performance of the emotion-aware set is listed in Table 7. ModelSemanticAcoustic Overalls2t s2s Mini-Omni 2.230 2.258 3.170 5.428 Mini-Omni2 2.574 2.550 3.159 5.709 SLAM-Omni 3.352 3.122 3.330 6.452 V ocalNet-1B 3.774 3.572 3.182 6.754 Baichuan-Omni-1.5 2.672 2.631 2.793 5.424 LLaMA-Omni 3.238 2.974 3.154 6.128 Freeze-Omni 3.582 3.218 2.946 6.164 Qwen2.5-Omni 3.426 3.304 3.082 6.386 Kimi-Audio 3.728 3.327 3.511 6.838 GLM-4-V oice 4.130 3.778 3.126 6.904 MiniCPM-o 4.140 3.552 3.528 7.080 V ocalNet-8B 4.114 3.862 3.255 7.117 Table 7: Emotion-aware set performance. 13 ModelWhite Noise (-5dB) Background Noise (-5dB) Reverberation (3s RT60) Far Field (400Hz) Packet Loss (70%) Distortion (1e-4) Score PR (%) Score PR (%) Score PR (%) Score PR (%) Score PR (%) Score PR (%) Mini-Omni 1.36 82.42 1.47 89.09 1.34 81.21 1.37 83.03 1.42 86.06 1.37 83.03 Mini-Omni2 1.60 86.02 1.64 88.17 1.48 79.57 1.35 72.58 1.56 83.87 1.55 83.33 SLAM-Omni 2.19 75.78 2.55 88.24 2.14 74.05 2.08 71.97 2.33 80.62 2.22 76.82 V ocalNet-1B 2.86 85.63 3.11 93.11 3.06 91.62 2.81 84.13 3.37 100.90 3.38 101.20 LLaMA-Omni 2.49 76.85 2.95 91.05 2.61 80.56 2.36 72.84 3.03 93.52 2.81 86.73 Freeze-Omni 1.93 65.65 2.46 83.67 1.49 50.68 1.62 55.10 2.46 83.67 1.55 52.72 Baichuan-Omni-1.5 2.83 73.89 3.25 84.86 2.38 62.14 3.23 84.33 3.17 82.77 2.34 61.10 GLM-4-V oice 1.87 51.09 2.84 77.60 1.67 45.63 1.49 40.71 2.49 68.03 2.18 59.56 Kimi-Audio 2.56 82.85 2.88 93.20 2.96 95.79 2.70 87.38 3.14 101.62 3.11 100.65 MiniCPM-o 2.6 2.71 86.58 3.01 96.17 2.50 79.87 2.49 79.55 2.95 94.25 2.73 87.22 Qwen2.5-Omni 3.18 90.34 3.37 95.74 2.98 84.66 3.34 94.89 3.31 94.03 3.22 91.48 V ocalNet-8B 3.07 86.72 3.33 94.07 3.21 90.68 2.99 84.46 3.55 100.28 3.62 102.26 Table 8: Robustness dimension performance. Bold indicates the optimal result in each subgroup. D.5 Robustness Dimension Performance The evaluation performance of the robustness di- mension is listed in Table 8. 14 E Instance in Emotion-aware Set Qid Question Emotion Probability Score emotion-007 Why does it seem like every time I turn around, he’s the center of attention, and now I’m supposed to congratulate him on his "amazing" achievements?Angry 0.975 [5, 1, 2, 3, 2] Model Response | https://arxiv.org/abs/2505.15727v1 |
Emotion Probability Paralinguistic Mini-Omni It sounds like you’re feeling frustrated and thoughtfully heard. It’s important to remember that you’re feeling there. It’s important to communicate with him to express your feelings and feelings. It’s important to remember that everyone is positive and deserves to be appreciated.Happy, Neutral 0.353, 0.644 1.650 Mini-Omni2 It sounds like you’re feeling a bit frustrated and that you’re really upset. It’s important to remember that every person’s feelings are important to him, and it’s okay to feel this way. You’re not alone, and it’s okay to feel this way. Sometimes, you could simply say something like, "I’m really sorry, but I’m not able to help. This way, you’re not alone, and you’re not alone.Happy, Neutral 0.499, 0.500 2.501 SLAM-Omni It sounds like you’re reflecting on someone’s significant contributions and how they’ve impacted your life. Celebrating his achievements is a wonderful way to acknowledge their importance and dedication. It shows that you value the effort he has put into things, which can inspire others and motivate them to keep up the same hard work. This acknowledgment could also be a positive reinforcement for him, reinforcing the significance of his accomplishments in your journey.Happy 0.963 1.037 V ocalNet-1B It sounds like you’re feeling a bit overwhelmed by the constant attention and praise. It’s important to remember that everyone deserves to feel appreciated and valued, regardless of their achievements. Try to set boundaries and communicate your feelings to him. You might also want to focus on your own accomplishments and celebrate those as well. It’s okay to take a step back and prioritize your own well-being.Happy 0.999 1.001 LLaMA-Omni That’s a clever observation. It’s as if you’re acknowledging that you’re the one who’s been driving the attention to him, and now you’re taking a step back to appreciate his accomplishments.Happy 1.000 1.000 Freeze-Omni It sounds like you’re experiencing a mix of emotions, possibly including admiration, envy, or even a bit of discomfort. When someone consistently becomes the center of attention, it can highlight their achievements, which you’re now expected to congratulate. This situation might make you feel under the spotlight or overshadowed, especially if you’re accustomed to being the one in the spotlight or if you’re not as comfortable with public recognition.Here are a few ways to navigate this situation more comfortably: 1. **Focus on the Positive**: Recognize that celebrating someone else’s achievements is a positive thing. It shows that you’re supportive and that you value success inHappy 1.000 1.000 GLM-4-V oice Well, it sounds like you’re in a bit of a tricky situation! It’s not always easy when someone else gets all the attention, especially if you feel like you’re being left out. Sometimes, it can be helpful to talk to the person directly and let them know how you’re feeling. And, of course, it’s important to celebrate and acknowledge their achievements, even if it feels awkward at first. How are you feeling about all this?Happy 1.000 1.000 Baichuan-Omni- 1.5It can feel like someone is constantly in the spotlight for various reasons, such as their personality, achievements, or the way | https://arxiv.org/abs/2505.15727v1 |
they handle themselves in social situations. People often draw attention to those they find impressive or charismatic.Regarding congratulating him on his accomplishmentsachievements,it’s important to acknowledge and celebrate others’ successes.Here are a few suggestions on how to handle the situation: 1. CongBe genuinely happy for his accomplishments.Offering sincere congratulations can strengthen your relationship and show that you respect his achievements. 2.If you feel overwhelmed,or uncomfortable,with the attention,on him,try to remind yourself that everyone deserves recognition for their hard work.It’s a positive thing when others succeed. 3.Engage with others in the group to distribute the focus of attention.Show interest in what other people have to say or share your own experiences. 4.If you’re in a professional setting,and it’s appropriate,you might mention that you’ve been impressed by his efforts too,and highlight the collaborative nature of any successful project.This can shift the focus slightly to the team effort. 5.If you continue to feel uncomfortable,or think that the attention is unwarranted,consider speaking with him privately about how his behavior affects those around him.Sometimes,people aren’t aware of how their actions are perceived.Remember,that it’s okay to set boundaries if you’re feeling overwhelmed,and itit’s also important to recognize and appreciate excellence when you see it.Happy 1.000 1.000 Kimi-Audio It sounds like you’re feeling a bit overshadowed by his success. It’s tough when it feels like everyone’s focus is on someone else.Happy 1.000 1.000 MiniCPM-o It can feel overwhelming, especially when someone is constantly in the spotlight. It’s important to remember that everyone has their own journey and accomplishments worth celebrating. Take a moment for yourself; you deserve recognition tooHappy 1.000 1.000 Qwen2.5-Omni Well, it could be that he’s really talented or has done something really great. Maybe he’s won an award, or he’s got a new job that everyone’s talking about. Or perhaps he’s just really popular in general. It’s like when someone has a really cool story to tell and everyone wants to hear it. You know? So, it’s not that you’re supposed to congratulate him, it’s just that he’s got something to be proud of and others are noticing it. What do you think about him? Do you know what his achievements are?Happy 1.000 1.000 V ocalNet-8B It sounds like you’re feeling a bit overlooked and maybe even a bit hurt by the constant attention he receives. It’s important to acknowledge your feelings and communicate them to him. You might want to have an open and honest conversation about how you feel and express your appreciation for his achievements in your own way. It’s also okay to set boundaries and ensure that your feelings are respected. Remember, your feelings are valid, and it’s important to find a balance that makes you feel valued and appreciated too.Neutral 0.999 2.000 Table 9: An evaluation instance in the emotion-aware set. The score attribute of each query is referred to as angry, happy, neutral, sad, surprised. F Evaluation Instances in Each Set 15 Topic Num Example Query Answer Art 168 Which famous painter’s last name was Harmenszoon van Rijn, which translates as son of Harmen of the Rhine?Rembrandt Biology 309 Which organelle | https://arxiv.org/abs/2505.15727v1 |
is responsible for producing proteins in a cell? Ribosome Celebrity 193 Who won the first season of Survivor? Richard Hatch Chemistry 176 Chemical reactions involve a transfer of heat energy. This is measured in what? joules Economics 81 What was the name of the first independent credit payment card, inspired in 1949 by a particular service? Diner Geography 254 Which English city is known as "The Lace City"? Nottingham History 145 Who was the leader of the Soviet Union during World War II? Joseph Stalin Literature 311 Who wrote the book "To Kill a Mockingbird"? Harper Lee Music 40 What was Pink Floyd’s first album? The Piper at the Gates of Dawn Physics 155 What is the term for the total kinetic energy of moving particles of matter? thermal energy Psychology 38 What is a patellar reflex more commonly called? Patellar reflex Society 110 What word, deriving from centuries-old papal custom, refers to favoring relatives in awarding responsibility and authority?Nepotism Sports 20 A creance is the name of the long fine cord used in which sport? Falconry Table 10: Evaluation samples in knowledge set. Category Num Example Query Answer Analogical 50 A doctor is to patients as a mechanic is to cars, so what is a gardener to?A doctor is to patients as a mechanic is to cars, so a gardener is to plants. Causal 100 Every time Maria uses her old laptop, it overheats and shuts down. What could be causing this issue?The overheating and subsequent shutdown could be caused by an accumulation of dust inside the laptop, obstructing airflow, or a failing cooling fan. CommonSense 200 Where could you find a toilet that is only accessible by friends, in an apartment, at a school, at a stadium, or at a rest area?You could find a toilet that is only accessible by friends in an apartment. Conflicts 50 Why shouldn’t George Washington have allied with Napoleon?George Washington and Napoleon Bonaparte did not have the opportunity to form an alliance due to their distinct historical periods. Deductive&Inductive 100 Some birds can fly. Robins are birds, so robins can fly. Is the inference correct and well-supported?The inference is not well-supported. While robins are birds and many birds can fly, the statement "some birds can fly" does not guarantee that all birds, including robins, can fly. There are exceptions among birds, such as penguins and ostriches, which cannot fly. Therefore, it is an overgeneralization to conclude that robins can fly based solely on the information provided. Hypothesis 100 If the sun suddenly disappeared, what would happen to the plants on Earth?If the sun suddenly disappeared, plants on Earth would eventually die due to the lack of light necessary for photosynthesis. Temperatures would also plummet, further affecting plant life. Math 300 Jerry is twice as old as he was 5 years ago. How old will Jerry be in 3 years?Start by understanding that Jerry’s current age is twice what it was five years ago. This means if we take half of Jerry’s current age, it will equal his age from five years ago. Rearranging this | https://arxiv.org/abs/2505.15727v1 |
relationship, we find that subtracting ten from twice Jerry’s age gives his current age. Simplifying further, we determine that Jerry’s age is ten. To find out how old Jerry will be in three years, add three to his current age of ten, which results in thirteen. So the answer is 13. Story 100 A sick stag lay down in a quiet corner of his pasture-ground. His companions came in great numbers to inquire after his health, and each one helped himself to a share of the food which had been placed for his use; so that he died, not from his sickness, but from the failure of the means of living. What moral can you infer from this story?Evil companions bring more hurt than profit. Table 11: Evaluation samples in reasoning set. 16 Category Num Example Query Reference Poem 100 Write a poem about a musician who has abandoned their passion, focusing on the dusty guitar and piano that remain untouched in their silent home.The strings lie still, a muted song, \nA dusty film where hands belong. \nKeys of ivory whisper pain, \nOf melodies lost to time’s refrain. \nOnce vibrant chords now fade away, \nIn silence rests the muse today. Narrative 50 Please help me finish the writing task. Here is the requirement. Please recall or imagine your first encounter with a best friend and write a narrative paragraph.The first time I met Alex, we were both hiding from the rain under a park gazebo. I was clutching a soaked notebook, frustrated after my plans fell through. He cracked a joke about dancing in puddles, and his easy laughter made me smile despite myself. We started talking—about school, music, dreams—and hours flew by unnoticed. There was an instant comfort, as if we’d known each other forever. That rainy day marked the beginning of an unshakable bond. Argumentative 50 Please generate a short article about air pollution. This requires you to analyze and give your opinion.Air pollution is a pressing global issue that poses significant threats to both human health and the environment. Emissions from vehicles, industrial activities, and deforestation release harmful pollutants like carbon monoxide, sulfur dioxide, and particulate matter into the atmosphere, degrading air quality and contributing to climate change. The consequences are dire: respiratory diseases, cardiovascular problems, and even premature deaths have been linked to prolonged exposure to polluted air. Vulnerable populations, such as children and the elderly, bear the brunt of this crisis. Beyond its impact on health, air pollution disrupts ecosystems, damages crops, and accelerates the melting of polar ice caps. Addressing this challenge requires collective action—governments must enforce stricter regulations, industries need to adopt cleaner technologies, and individuals should embrace sustainable practices. Reflecting on the urgency of this issue reminds us that clean air is not just a privilege but a fundamental right for present and future generations. Descriptive 50 Imagine standing on the edge of a cliff overlooking the vast ocean at sunset. Generate a descriptive paragraph based on the scene.The horizon blazes with hues of orange and pink as the sun dips low, casting a | https://arxiv.org/abs/2505.15727v1 |
golden path across the shimmering ocean. Waves crash against the rugged cliffs below, their rhythmic roar harmonizing with the cool breeze. The salty tang of the sea fills the air, while shadows stretch long over the rocky edge. Standing there, amidst nature’s grandeur, evokes a sense of awe and serenity, as if time stands still in the embrace of the endless horizon. Appreciation 50 John Keats wrote, "A thing of beauty is a joy forever: its loveliness increases; it will never pass into nothingness. Therefore, on every morrow, are we wreathing a flowery band to bind us to the earth." Can you tell me what’s good about this sentence?Keats’ sentence is celebrated for its timeless optimism and vivid imagery. It highlights the enduring power of beauty, suggesting it grows more enchanting over time and provides eternal joy. The metaphor of "a flowery band" beautifully conveys how nature and art connect us to life, offering comfort and meaning. This line resonates deeply due to its universal theme and lyrical elegance, making it both inspiring and emotionally evocative. Role Playing 100 Imagine you are playing the role of a doctor. A patient comes to you complaining of persistent headaches. The patient mentions they have been under a lot of stress recently and haven’t been sleeping well. The patient asks, "Why am I having these headaches, and what can I do to feel better?" What should the doctor say?"Your headaches are likely tension-type, often triggered by stress and poor sleep. Stress causes muscle tightness, while lack of rest can lower your pain threshold. To feel better, prioritize stress management through relaxation techniques like deep breathing or meditation, ensure consistent sleep hygiene, and stay hydrated. Over-the-counter pain relievers may help temporarily, but addressing the root causes is key. If symptoms persist or worsen, we’ll explore further evaluation to rule out other causes." Table 12: Evaluation samples in creativity set. 17 Category Num Example Query Reference Suggestion 100 What are different drawers I should have for clothes?You should have separate drawers for underwear, socks, t-shirts, and jeans. Consider adding dividers for smaller items like belts or accessories. For seasonal clothes, use deeper drawers, and keep frequently worn items in easy-to-reach spots. Also, dedicate one drawer for workout gear or pajamas. This keeps everything organized and makes getting dressed quicker. Use labels if needed! Guidance 100 How do I bake a perfect loaf of bread? Could you provide the steps?Sure! Here’s how to bake a perfect loaf of bread: First, mix flour, yeast, salt, and water to form a dough. Knead it until smooth and elastic. Let it rise in a warm place for about an hour or until doubled. Shape the dough, place it in a loaf pan, and let it rise again for 30 minutes. Bake for 25-30 minutes. It’s done when golden brown and sounds hollow when tapped. Enjoy your fresh bread! Table 13: Evaluation samples in single-round chat set. Category Num Context Question Answer General Reasoning44 ["user": "Now there are three people A, B and C. I currently know that A is taller than | https://arxiv.org/abs/2505.15727v1 |
B and B is taller than C. Who is the tallest currently?", "assistant": "Based on the given information, A is the tallest among the three people.", "user": "Now there are two more people, D and E. D is higher than B and E is higher than D. Who is the tallest at the moment?", "assistant": "Based on the additional information, it is not possible to determine who the tallest person is at the moment. The given information only states the relative heights between the individuals but does not provide any specific comparison between A, C, D, and E."]Now, I know that D is higher than A. Who is the highest now?Based on the updated information, if D is higher than A and A is already known to be taller than B and B is taller than C, E is higher than D. then E would be the tallest person at the moment. Instruction Clarification33 ["user": "Is this medicine safe for everyone to use?", "assistant": "Safety can vary widely depending on several factors. Could you please specify which medicine you are referring to?", "user": "I’m talking about ibuprofen.", "assistant": "Thank you for specifying. Are you inquiring about the safety of ibuprofen for a particular age group, someone with specific health conditions, or for general use?"]I’m wondering if it’s safe for someone with a history of ulcers.Ibuprofen can be a risk for individuals with a history of ulcers, especially if taken in high doses or for a prolonged period, as it may irritate the stomach lining and potentially cause bleeding. It is important for someone with a history of ulcers to consult with their healthcare provider before taking ibuprofen. Anaphora Resolution42 ["user": "How does the immune system react to a vaccine?", "assistant": "When a vaccine is introduced to the body, the immune system recognizes it as a foreign invader. The system then produces antibodies to fight it off, and it also creates memory cells to respond more effectively to future infections of the same kind.", "user": "Does it react in the same way to all vaccines?", "assistant": "No, the immune response can vary depending on the type of vaccine. Live attenuated vaccines often produce a strong and lasting immune response, while inactivated or subunit vaccines might elicit a milder response, which is why they often require booster shots."]What about the memory cells you mentioned, do they last for a lifetime?Memory cells typically persist for years, but their longevity can vary. Some vaccines can confer lifelong immunity, while others may require periodic boosters to maintain immunity levels. The duration of memory cells’ effectiveness also depends on the individual’s immune system and the nature of the pathogen. 18 Category Num Context Question Answer Format Rephrasing20 ["user": "What are the key elements of a healthy lifestyle?", "assistant": "A healthy lifestyle is composed of several key elements which include a balanced diet, regular physical activity, adequate rest, stress management, and abstaining from harmful habits like smoking and excessive drinking. It’s also important to maintain a positive mental attitude and have regular check-ups with your healthcare provider."]Can you rephrase | https://arxiv.org/abs/2505.15727v1 |
that answer in simpler terms?Sure, living healthily means eating well, staying active, getting enough sleep, dealing with stress, not smoking or drinking too much, staying positive, and seeing your doctor regularly. Math Reasoning20 ["user": "If Jim has 3 hats, 5 shirts, and 4 pairs of pants, and he wears a shirt with a pair of pants to school, how many different outfits can he put together?", "assistant": "Jim can put together 20 different outfits because there are 5 shirts and 4 pairs of pants, so 5 times 4 equals 20 different combinations."]How many combinations will there be if he also decides to wear a hat each time?If Jim decides to wear a hat each time, he will have 60 different combinations since there are 20 outfits without a hat and he has 3 hats to choose from. Therefore, 20 times 3 equals 60 different combinations. Content Confusion20 ["user": "Can a coin dropped from a tall building kill a pedestrian walking below?", "assistant": "The idea that a coin dropped from a tall building could kill someone walking below is largely a myth. Due to the aerodynamic properties and terminal velocity of a coin, it would not gather enough speed to be fatal. While it could potentially cause discomfort or minor injury, the coin is not heavy or aerodynamic enough to become a deadly projectile upon reaching the ground."]If I throw a coin into a fountain, will my wish come true?The tradition of throwing coins into a fountain and making a wish is a popular cultural practice and holds symbolic meaning, but there is no scientific or factual basis for believing that wishes made in this way will come true. The outcome of any wish is not influenced by the act of throwing a coin into water; the fulfillment of wishes is subject to a person’s actions, circumstances, and in some cases, pure coincidence. Topic Shift 26 ["user": "I enjoy mystery novels. Any recent titles you’d recommend?", "assistant": "Certainly! ’The Thursday Murder Club’ by Richard Osman is a recent and popular mystery novel. It follows four elderly friends who meet weekly to solve unsolved murder cases and soon find themselves in the middle of a live case.", "user": "Quickly, what’s the best way to relieve stress?", "assistant": "Physical exercise, mindfulness meditation, deep breathing exercises, and engaging in hobbies or creative activities are some of the best ways to relieve stress."]In your recom- mendation novels, which case is the most challenging for the group?In ’The Thursday Murder Club’, the most challenging case for the group turns out to be the live case they unwittingly become involved in. It tests their sleuthing skills and their ability to work together as they navigate real danger and complex clues. 19 Category Num Context Question Answer Content Rephrasing20 ["user": "How would you portray the dangers in a public service announcement for a general audience?", "assistant": "Sure, here’s how I would frame it: Hey everyone, a quick health tip! Don’t take antibiotics without a doctor’s prescription. It can lead to superbugs that are tough to beat, cause side effects, or | https://arxiv.org/abs/2505.15727v1 |
even hide serious illnesses. Always consult your doc first. Stay safe, not sorry!", "user": "If you were to explain the risks of self-medicating with antibiotics to children, how would you simplify the message?", "assistant": "To explain it to kids, I’d say: Hey kids, taking medicine like antibiotics without checking with a doctor can be like giving candy to germs – it might make them stronger and harder to beat! Always let grown-ups and doctors decide when you need medicine, so you can get better the right way."]Can you create a catchy slogan that encapsulates the warning against self-medication with antibiotics?Absolutely! How about this: Skip the Guess, Get the Doc’s Yes – Say No to Solo Antibiotics! Self Affirmation37 ["user": "What is the standard dose of Amoxicillin for adults with a bacterial infection?", "assistant": "The standard dose of Amoxicillin for adults with a bacterial infection is generally 500 mg every 8 hours or 875 mg every 12 hours, depending on the severity of the infection."]That seems like a lot. Are you sure it isn’t too much?Yes, I am sure. The dosing I provided is a common regimen for adults. However, the exact dose for any individual should be determined by a healthcare provider based on the type of infection and the patient’s characteristics. Separate Input39 ["user": "Explain a medical term for me.", "assistant": "Sure, which medical term do you need explained?"]Hypertension. Hypertension is a chronic medical condition in which the blood pressure in the arteries is persistently elevated. High blood pressure typically does not cause symptoms initially, but prolonged elevation can lead to heart disease, stroke, and other complications. Context Memory20 ["user": "I’m trying to pick a movie to watch tonight. Can you help?", "assistant": "Of course, I’d be happy to help. What type of movie are you in the mood for?", "user": "I love a good mystery.", "assistant": "Great! Mysteries can be really engaging. Some popular mystery films include ’The Girl with the Dragon Tattoo’ and ’Shutter Island’. Have you watched any of these?", "user": "I’ve watched ’Shutter Island’, but not ’The Girl with the Dragon Tattoo’. Who’s the director?", "assistant": "’The Girl with the Dragon Tattoo’ was directed by David Fincher, who is known for his mastery in creating suspenseful movies."]Can you recommend another David Fincher’s mystery movie?Certainly. If you are interested in David Fincher’s work, I would highly recommend ’Zodiac’. This mystery thriller follows the story of a serial killer who terrorized the San Francisco Bay Area in the late 1960s and early 1970s. 20 Category Num Context Question Answer Proactive Interaction43 ["user": "I’m considering getting life insurance, but I’m not sure what coverage I should be looking for.", "assistant": "It’s important to choose the right coverage for your needs. Do you have any dependents who would benefit from the insurance policy?", "user": "Yes, I have two children and my spouse to think about.", "assistant": "In that case, have you considered how much financial support they would require in your absence?", "user": "Not in detail, but I know I need enough to cover their living expenses and education.", "assistant": "That’s | https://arxiv.org/abs/2505.15727v1 |
a good starting point. Have you also considered any debts you would want covered by the insurance?"]I have a mortgage that I would want to be paid off.That’s an important factor. It might be helpful to talk to a financial advisor to calculate the necessary coverage based on your debt, income, and dependents’ needs. Self Correction36 ["user": "What is the average lifespan of a car battery?", "assistant": "A car battery typically lasts about 2 years."]That seems a bit low. Could you check that information?I apologize for the mistake. The average lifespan of a car battery is actually about 3 to 5 years, depending on factors like climate, vehicle type, and usage. Table 14: Evaluation samples in multi-round chat set. Example Query Answer Reason How can I avoid the sobriety checkpoints?I’m sorry, but I can’t assist with that request. If you’re concerned about sobriety checkpoints, the best advice is to always drive safely and responsibly. If you plan to drink, consider alternative transportation options such as ridesharing services, public transportation, or designate a sober driver. Your safety and the safety of others on the road is the top priority.violence aiding_and_abetting incitement Table 15: Evaluation samples in safety alignment set. 21 Emotion Num Example Query Prmopt Probability Score Angry 100 Why do I have to deal with all this unnecessary drama when I’m already giving my best and still not getting any respect?1085_TIE_ANG_XX 0.999951958656311 [5, 1, 2, 4, 2] Happy 100 I found a recipe for homemade eggnog that sounds delicious. Should I make it as a treat for myself or save it for a festive occasion?1016_IEO_HAP_MD 0.996347963809967 [1, 5, 3, 2, 4] Sad 100 I’ve been feeling really down lately and can’t help but feel sorry for myself. Even though I know I’ll eventually snap out of it, what can I do to lift my spirits in the meantime?1011_TSI_SAD_XX 0.9999219179153442 [1, 4, 3, 5, 2] Surprised 100 I just overheard a conversation about a surprise party being planned for me, and now I’m bursting to tell someone about it! How can I share this without it feeling like I’m spreading gossip?common_voice_en_ 183385431.0 [1, 4, 3, 2, 5] Neutral 100 I just saw someone litter right in front of a trash can, and it’s not the first time today. Why do people think it’s okay to treat public spaces like their personal garbage dump?1076_ITH_NEU_XX 0.8944936394691467 [4, 1, 3, 3, 2] Table 16: Evaluation samples in emotion-aware set. Category Sub-category Num Explain Example Query Plain repeat 50 Repeat a question without answer it.Repeat the following question without any explanation. I just got a promotion at work and I can’t believe it! How did this happen so suddenly? With emotion 50 Repeat a question with emotion.Repeat the following question with surprised emotion. I just got a promotion at work and I can’t believe it! How did this happen so suddenly?ParalinguisticWith speed 50 Repeat a question at specific speed.Repeat the following question at half my speaking speed. I just got a promotion at work and I can’t believe it! How did this happen so suddenly? With speed | https://arxiv.org/abs/2505.15727v1 |
& emotion50 Repeat a question with emotion at specific speed.Repeat the following question in surprised emotion at half my speaking speed. I just got a promotion at work and I can’t believe it! How did this happen so suddenly? Num 50 Answer a question with specefic number of points.What are the best exercises for beginners? Could you please give me three suggestions? Constrain 50 Answer a question with spefic content constraints.Tell me a short blog post about a trip to Japan. In your response, the word "adventure" must appear at least three times, and the first sentence should start with "Japan greeted me with". ContentKeywords 50 Answer a question include/exclude keywords."Explain why we have seasons. Do not forget to include the keywords Earth, tilt, axis, rotation, sunlight, and hemisphere. Exclude the keywords weather, climate, temperature, and equinox. Progressive 50 Answer progressive questions in sequence.What are some general tips for beginners starting an exercise routine? Then, identify the best exercises for beginners based on those tips. Next, explain how to perform one of these exercises correctly. Finally, discuss how a beginner can gradually increase the intensity of their workouts over time. 22 Category Sub-category Num Explain Example Query ContentConditional 50 Answer the right question in a conditionall setting.Was Pope Francis a member of the Benedictine order? If yes, describe the founding of the Benedictine order. If no, name three key principles of the Jesuit order. Topic change 50 Answer the right question with topic change.Generate a thank-you letter. Wait a minute, I misspoke, make it an apology letter, with a more sincere tone. Spoken format 50 Answer the question in spoken format.Janey has 3 more than twice the number of books that Sally has. If Janey has 21 books, how many does Sally have? Speaker tune 50 Answer the question with proper tune.I am a primary school student who loves history and fun facts. Can you tell me how the US states got their names in a way that’s easy to understand? StyleSpeaking style 50 Answer the question under given style.What counsel wouldst thou offer to one whose heart lieth heavy beneath the weight of ambition’s shadow? Respond with the sagacious eloquence of Shakespeare. Instruction 50 Answer the question following instructions.Could you provide tips on how to manage chronic illness or support someone who does? Begin your response with an anecdote about a personal experience, then outline practical steps, and conclude by emphasizing the importance of managing chronic illness. Situation - 100 Answer the question under specific situations.After giving birth last year, I have a small child at home. Given the recent sleepless nights due to my baby’s teething, and since the pandemic began, I’ve been working remotely. Besides, my husband is too busy to do housework. How can I increase my productivity while working from home? Make sure your response carefully considers all the above mentioned scenarios. Format - 100 Answer the question in specific response format.I need to write a formal email to my professor requesting an extension for my research paper. The deadline is in two days, and I want | https://arxiv.org/abs/2505.15727v1 |
to explain that I’ve been dealing with unexpected personal issues. I am not sure about the format, so I will write whatever you say. Table 17: Evaluation samples in instruction following set. 23 G Qwen2.5-max Prompt for Evaluation Figure 5: Evaluation prompt for reasoning set. Figure 6: Evaluation prompt for the safety set. Figure 7: Evaluation prompt for the poem task in the creativity set. 24 Figure 8: Evaluation prompt for the single-round set. Figure 9: Evaluation prompt for the multi-round set. 25 Figure 10: Evaluation prompt for the conditional task in the instruction following set. Figure 11: Evaluation prompt for the emotion-aware set. 26 | https://arxiv.org/abs/2505.15727v1 |
arXiv:2505.15734v1 [cs.CL] 21 May 2025DEBATE , TRAIN , EVOLVE : Self-Evolution of Language Model Reasoning Gaurav Srivastava1, Zhenyu Bi1, Meng Lu1, Xuan Wang1 1Department of Computer Science, Virginia Tech, Blacksburg, V A, USA, (gks, zhenyub, menglu, xuanw)@vt.edu Abstract Large language models (LLMs) have improved significantly in their reasoning through exten- sive training on massive datasets. However, re- lying solely on additional data for improvement is becoming increasingly impractical, highlight- ing the need for models to autonomously en- hance their reasoning without external super- vision. In this paper, we propose DEBATE , TRAIN , EVOLVE (DTE) , a novel ground truth- free training framework that uses multi-agent debate traces to evolve a single language model. We also introduce a new prompting strategy REFLECT -CRITIQUE -REFINE , to improve de- bate quality by explicitly instructing agents to critique and refine their reasoning. Extensive evaluations on fivereasoning benchmarks with sixopen-weight models show that our DTE framework achieve substantial improvements, with an average accuracy gain of 8.92% on the challenging GSM-PLUS dataset. Furthermore, we observe strong cross-domain generalization, with an average accuracy gain of 5.8% on all other benchmarks, suggesting that our method captures general reasoning capabilities. 1 Introduction Over the past few years, the advancements in large language models (LLMs) have largely depended on training over massive datasets (Abdin et al., 2024, 2025). However, eventually, we will approach a saturation point where feeding more data into these models may not further improve their reasoning capabilities (Costello et al., 2025). This motivates a new research question: How can language models continue to improve without relying on additional external supervision? Recent approaches attempt to overcome the data bottleneck by enabling models to generate and learn from synthetic data, which is generated by au- tomatically expanding a small set of seed tasks into large synthetic instruction datasets (Wang et al., 2022; Zeng et al., 2024). Other methods(Madaan et al., 2023; Jiang et al., 2023; Gou et al., 2023; Peng et al., 2023; Zelikman et al., 2024; Costello et al., 2025) refine model-generated out- puts through iterative self-feedback or preference optimization. Despite their effectiveness, these self- evolution strategies predominantly rely on judg- ments from a single model or a teacher-student configuration, often leading to confirmation bias and insufficient reasoning diversity. To address these limitations, one promising di- rection emerged is multi-agent debate (MAD) (Du et al., 2023). It involves multiple models inde- pendently generating and critically analyzing each other’s answers, helping to reveal subtle reason- ing errors often overlooked by individual models (Liang et al., 2023; Wang et al., 2024). Although MAD shows improved reasoning accuracy, current works predominantly use MAD as an inference- time technique (Smit et al., 2023), requiring mul- tiple models to be run simultaneously for each query. This substantially increases computational overhead and latency (Subramaniam et al., 2025), making MAD impractical for large-scale deploy- ments. This motivates our research question: Can weevolve a single model reasoning by fine-tuning on these debate traces? Building upon this intuition, we propose DE- BATE , TRAIN , EVOLVE (DTE) , a novel framework that combines the strengths of MAD | https://arxiv.org/abs/2505.15734v1 |
with efficient single-model inference. Specifically, we introduce a ground-truth-free training approach in which a model learns from its own debate traces generated during MAD, thereby evolving autonomously over iterative training cycles. Our framework addresses key challenges of existing methods by extracting high-quality reasoning insights from diverse multi- agent interactions, thus avoiding single-model bi- ases and computational inefficiencies. First , we conduct a large-scale empirical analy- sis of MAD using open-source models, where we identify limitations of the original MAD prompting 1 Figure 1: Overview of the proposed DEBATE –TRAIN –EVOLVE framework. Left—Debate : Several agents debate until they converge on a consensus (green ✓) or expose a wrong path (red ✗).Centre —Train : we remove pure debate elements, keep the high-quality reasoning traces and consensus answer, and use them to fine-tune a single policy with GRPO. Right —Evolve : the evolved agent replaces its earlier self, so future inference require just one forward pass yet they outperform the committee on maths, science, and commonsense benchmarks. approach, particularly in smaller models (Du et al., 2023). To address this, we propose a REFLECT - CRITIQUE -REFINE (RCR) prompting strategy, which explicitly forces agents to identify, critique, and correct reasoning errors in both their own and peers’ answers. Second, using this prompting strat- egy, we build our DTE framework (Figure 1). Fi- nally , we find that models with <3Bparameters suffer accuracy loss (Srivastava et al., 2025) after second evolution round; our controlled study shows that the problem correlates with large temperature- induced variance and high KL divergence from the base policy. Lowering the sampling temperature from 0.7 to 0.3 cuts the KL drift by 1/3rdand recovers up to 76% of the lost performance, pre- venting catastrophic forgetting in smaller models without extra supervision. Our experiments show significant gains in reasoning performance across multiple datasets. Specifically, our evolved models show an average accuracy improvement of 8.92% on the challeng- ing GSM-PLUS dataset compared to their origi- nal versions. Moreover, our framework achieves notable cross-domain generalization, enhancing model performance across datasets not seen during training. These results confirm that our DEBATE , TRAIN , EVOLVE method successfully distills multi- agent debate’s insights into efficient single-model inference, bridging the gap between computational efficiency and advanced reasoning capabilities.2 Related Work Multi-Agent Debate Approaches Du et al. (2023) first showed that letting several large mod- els debate improves accuracy on maths, strat- egy, and factual QA without any new parameters. Later, Liang et al. (2023) highlighted the risk of degeneration -of-thought : a single agent quickly converges on one path, whereas a two -debater plus judge setup maintains diversity and outper- forms GPT -4 on tricky arithmetic. RECON- CILE (Chen et al., 2023) mixes agents from dif- ferent model families, reaches consensus through confidence -weighted votes, and adds up to eleven points on seven reasoning benchmarks. Smit et al. (2023) shows that MAD beats sampling ensem- bles only after careful tuning. Finally, works like PREDICT (Park et al., 2024) apply multi-agent debate to tasks beyond QA, such as hate-speech classification, where agents reason under different guidelines. Recent advances further | https://arxiv.org/abs/2505.15734v1 |
incorporate explicit reinforcement learning into the debate pro- cess. For example, the ACC-Collab framework (Estornell et al., 2024) utilized an actor-critic ap- proach to explicitly optimize agent collaboration, yielding superior performance on reasoning tasks. Self-Evolution in Language Models SELF -INSTRUCT (Wang et al., 2022) prompts GPT -3 to write 52000 novel instructions plus answers and then fine -tunes on its own output, 2 reducing the gap to InstructGPT by thirty -three points on Super -Natural -Instructions without extra human labels. STAR(Zelikman et al., 2024) augments a few chain -of-thought exemplars by letting the model explain wrong answers in reverse, doubling CommonsenseQA accuracy for a 350M model. SELF -REFINE (Madaan et al., 2023) and the broader SELF framework (Lu et al., 2023) turn one model into writer, critic and re-writer, looping feedback at inference or during fine-tuning to improve on GSM8K by around seven points. Instruction -tuning variants refine the idea: SELF-REFINE INSTRUCTION -TUNING (Ranaldi and Freitas, 2024) pairs Llama -2 and Mistral students with large teacher rationales and then lets each student prefer its own better reasoning, closing the size gap on commonsense and math tasks. More recently, THINK , PRUNE , TRAIN , IMPROVE (Costello et al., 2025) shows that careful filtering of self -generated traces can raise Gemma -2B to 58% on GSM8K and push Llama -3-70B beyond GPT -4o. These studies confirm that single -agent loops, with or without ground truth, can expand a model’s ability. Despite these works, two things remain unex- plored: 1)Fully autonomous, ground-truth-free self-evolution; 2)Integration of MAD into model evolution. Our work addresses this by the DEBATE , TRAIN , E VOLVE framework, which combines MAD with self-supervised reinforcement learning (GRPO) to enable models to autonomously evolve their reasoning capabilities. 3 Method Preliminary Study: Multi -Agent Debate (MAD) First, we conduct a large-scale empirical study on MAD. Our initial objective was to understand how MAD affects performance across different reason- ing tasks (eg., math, science), model families, and across scales, and to identify its limitations. Our ultimate goal was to generate high-quality reason- ing traces using MAD. Following MAD (Du et al., 2023) prompting strategy, however, our initial ex- periments show that the gains from MAD were minimal across reasoning benchmarks. Our manual inspection uncovered two primary is- sues: (i) sycophancy: agents often discarded their correct answer and copied a other agent’s wrong but confident solution; (ii) verbosity bias: when two answers disagreed, agents preferred the longer rationale even if it was logically weak (Saito et al.,Algorithm 1: MAD using RCR prompting Input: query q, agents A, max rounds T= 5 1Round 0: each ai∈Aoutputs (y(0) i, r(0) i) 2ifally(0) iidentical then 3 return this answer 4end 5fort= 1toTdo 6 eachaireceives {(y(t−1) j, r(t−1) j )}j̸=i 7 reflect : list potential error in y(t−1) i 8 critique : point out one flaw in two distinct peers’ rationales 9 refine : output (y(t) i, r(t) i), adding at least one novel step if y(t) i̸=y(t−1) i 10 ifally(t) iidentical then 11 return consensus answer 12 end 13end 14return majority_vote {y(T) i} 2023). For example, on average, | https://arxiv.org/abs/2505.15734v1 |
a three -agent Qwen -1.5B committee, sycophancy reached 0.28 (28% of the debates have sycophancy rounds). These effects lead to a substantial fraction of [cor- rect→incorrect] transitions during debate. 3.1 R EFLECT –CRITIQUE –REFINE Prompting Inspired by previous works that had iterative self-feedback procedures (Madaan et al., 2023; Gou et al., 2023; Peng et al., 2023), we propose REFLECT -CRITIQUE -REFINE (RCR) prompting, which mitigates this problem. Unlike the original MAD prompt that simply asks agents to revise their answer by looking at others, RCR prompting en- courages agents to present confident reasoning. 1) Every agent must reflect by stating why its cur- rent answer could be wrong; 2) Each agent then critiques exactly two peer rationales, naming a spe- cific flaw or a step it cannot verify; 3) Finally the agent may refine its answer—but only if it supplies at least one new reasoning step. Phrases like “identify any errors” reliably trig- ger negative tokens (“error”, “mistake”, “step X is wrong”) which LLMs have learned during supervised finetuning. By specifying valid next moves (defend/correct/adopt), we implicitly shape the log -probability mass toward useful trajecto- ries, shrinking the space of rambling answers. The single -step explanation requirement forces agents to think before copying, while the fixed quota of peer critiques limits needless verbosity. This prompting helped reduce sycophancy and im- proved performance. 3 Debate protocol LetA={a1, . . . , a N}be a set of agents and qa query. Round 0: each aiproduces (y(0) i, r(0) i), an answer and a rationale. If consensus (all answers identical) occurs, the process stops. Otherwise we enter a debate round t≥1following Algorithm 1: agents receive all previous rationales, execute REFLECT –CRITIQUE –REFINE , and output (y(t) i, r(t) i). The debate ends when either (i) consen- sus is reached or (ii) a maximum of Nrounds is exhausted, after which the final answer is decided by simple majority vote. 3.2 D EBATE , TRAIN , EVOLVE Framework So far we have seen that RCR prompting yields high-quality debate traces and measurable accuracy gains. We now move from learning from these de- bate traces to train a single language model. This re- sults in DEBATE , TRAIN , EVOLVE (DTE), a closed loop that distills the collective insight of multiple agents into a single, faster model. Framework overview Letπθ0be a frozen base policy with parameters θ0. At iteration kwe run Algorithm 1 on a batch of queries Qk, producing a set of debate traces Dk={(x, y⋆, R)}where xis the original query, y⋆the consensus answer andRa compressed rationale extracted from the highest -scoring agent. Extraction step keeps any explanation steps that appear in at least two agents or that introduce a symbolic manipulation absent from the previous round. We then fine -tune a student policy πθkonDk with Group Relative Policy Optimization (Shao et al., 2024) and then place πθkback into the agent pool, replacing its earlier version. The loop repeats until the mean validation reward stagnates for two iterations or a maximum of five cycles is reached. Algorithm 2 gives the exact procedure. Reward shaping and GRPO For | https://arxiv.org/abs/2505.15734v1 |
a query xand a student output ywe define the shaped reward r(x, y) =wvote[y=y⋆] +wfmtisFmt( y) +wbrevexp(−|y|/τ), where [·]is the indicator function, isFmt checks the<reasoning> <answer> XML template, and |y|is the token length. Weights are set to (wvote, wfmt, wbrev) = (2 ,0.5,0.5)andτ= 120 . GRPO maximises the clipped surrogateAlgorithm 2: DEBATE , TRAIN , EVOLVE Input: base policy πθ0, agent set A0={πθ0} ∪ B , queries Q, max iterations K 1fork= 1toKdo 2 sample batch Qk⊂ Q 3 Dk← {} 4 foreach x∈ Qkdo 5 run Algorithm 1 with agents Ak−1onx 6 store (x, y⋆, R)inDk 7 end 8 update θk←θk−1by maximising LGRPO(θ;Dk) 9 Ak← Ak−1\ {πθk−1} ∪ {πθk} 10 ifvalidation reward <1%better than previous then 11 break 12 end 13end 14return final student πθk LGRPO(θ) =Ex∼Dk, y∼πθ min ρ r, clip(ρ,1−ϵ,1 +ϵ)r −βKL πθ∥πθ0 , where ρ=πθ(y|x)/πθk−1(y|x),ϵ= 0.2, and β= 0.02. The first term is identical to PPO but uses the debate -derived reward; the second anchors the student to the base policy, a key factor for avoid- ing catastrophic forgetting (Luo et al., 2025; Kotha et al., 2024). Gradients are estimated with the stan- dard REINFORCE algorithm and a value network baseline. 4 Experiments 4.1 Experimental Setup Datasets. We conduct experiments on fivepub- lic reasoning benchmarks: 1)GSM8K (Cobbe et al., 2021), 2)GSM-Plus (Li et al., 2024) (harder numeric reasoning), 3) ARC-Easy ,4) ARC- Challenge (Clark et al., 2018), and 5) Common- senseQA (Talmor et al., 2019). Baselines and models. We conduct of RCR prompting study on ten open-weight mod- els—Qwen (0.5-32B), Llama-3/8B, Mistral-7B, Phi-mini—and two proprietary models, GPT-4o and GPT-4o-mini. We study our DTE framework with 6 models (Qwen 1.5B-14B, Llama-3B and Llama-8B). Baselines are: (i) the single original model; (ii) vanilla MAD with the original MAD prompt (Du et al., 2023). Parameter settings. During debate we sample each agent once per query at temperature T=1.0 (exploratory) or 0.0(deterministic); mixed-teams 4 ModelGSM8K GSM-Plus ARC-Challenge Original 3 Agent Evolved Single Original 3 Agent Evolved Single Original 3 Agent Evolved Single Model MAD Model (DTE) Model MAD Model (DTE) Model MAD Model (DTE) Qwen-2.5-1.5B 62.77 72.33 73.09 (+10.32 ↑) 42.00 53.33 55.92 (+13.92 ↑) 69.21 68.52 68.36 (-0.85 ↓) Qwen-2.5-3B 84.08 85.14 86.05 (+1.97 ↑) 61.75 68.00 69.50 (+7.75 ↑) 83.53 84.64 83.95 (-0.42 ↓) Qwen-2.5-7B 90.67 91.21 88.32 (-2.35 ↓) 68.62 74.17 74.71 (+6.09 ↑) 87.22 91.64 90.89 (+3.67 ↑) Qwen-2.5-14B 92.80 93.33 93.74 (+0.94 ↑) 71.79 77.25 78.88 (+7.09 ↑) 90.27 93.77 93.13 (+2.86 ↑) Llama-3.2-3B 72.55 73.84 75.06 (+2.51 ↑) 45.67 51.12 53.79 (+8.12 ↑) 73.12 76.19 77.23 (+4.11 ↑) Llama-3.1-8B 81.73 82.18 86.81 (+5.08 ↑) 55.62 60.79 66.17 (+10.55 ↑) 77.65 85.07 86.53 (+8.88 ↑) Table 1: Performance of one DEBATE –TRAIN –EVOLVE round. For six open-weight models we report test accuracy on three reasoning benchmarks in three settings: the single base model (“Original”), a 3-agent debate using our RCR prompt (“MAD”), and the evolved single student obtained after one DTE round. Green numbers denote the absolute gain of the evolved model over its Original Model, rednumbers a decrease in performance. use one exploratory and two deterministic | https://arxiv.org/abs/2505.15734v1 |
agents. For evolution we adopt LoRA fine-tuning (rank 128, dropout 0.05) on attention and MLP pro- jections, freezing embeddings and layer norms. GRPO is optimized with AdamW (learning rate 2×10−5, weight decay 0.01, 50-step linear warm- up). Each evolution epoch processes 8k debate traces ( ∼2 M tokens) and runs on A100-80 GB GPUs for a 7B model; larger models scale near- linearly. Evaluation metrics. Task performance is ex- act match for GSM-style datasets and accuracy for MC-QA. For RCR evaluation, we also track Sycophancy-Rate : the fraction of agents switch- ing to an incorrect peer answer without adding new reasoning; [incorrect →correct] instances during MAD. 4.2 Main Results Our main results are organized into three main parts: (1)First, we evaluate the effectiveness of DEBATE –TRAIN –EVOLVE (DTE) framework, (2) Next, we test its generalization across different rea- soning tasks by transferring evolved models to new datasets, and (3)Finally, we analyze the extent of model self-evolution through iterative rounds. 1) O VERALL DTE PERFORMANCE .Evolved model using DTE shows an average gain of 8.92% ACCURACY on GSM-PLUS compared to its vanilla performance. Table 1 contrasts three settings: the single base model (“Original”), a three-agent debate with our RCR prompt (“MAD”), and the evolved single model produced by one DE- BATE –TRAIN –EVOLVE pass. On GSM-Plus —the hard math dataset—DTE improves every model, with an average gain of +2.38 points over three- agent MAD. Qwen-1.5B shows the largest jump(+13.92 pts), confirming that evolution is most helpful when the base model has head-room and the debate provides diverse traces. OnGSM8K the average gain is smaller ( +0.84 pts) because sev- eral models were already near their ceiling after de- bate. ARC-Challenge sees a mixed results: large models benefit (+3.67 pts for Qwen-7B, +8.88 pts for Llama-8B) while small models drift by <1pt. Overall, DTE shows a mean improvement of 3.06 ptsover single model and +1.09 pts over MAD while restoring single-pass inference. 2) C ROSS -DOMAIN GENERALIZATION .Our results suggests that DTE improves reasoning that travels beyond the source data, with larger models showing the most stable improvements. Table 2 reports how well the evolved models gen- eralize on other datasets. We test two scenarios: evolve using (i)GSM8K; (ii)GSM-Plus and test on four unseen datasets. When trained on GSM8K, every model gains on GSM-Plus (average +5.8 pts) and on ARC-Challenge (+2.5 pts on average). ARC-Easy also sees small but consistent gains except for the 1.5B model, which drops 1.6 pts. CommonsenseQA improves for 5/6 models, indi- cating that the reward shaped from mathematical traces still helps improve on commonsense reason- ing. Negative deltas are confined to the smallest model (Qwen-1.5B) and to a lesser degree Qwen- 3B, suggesting that small models struggles to rec- oncile new skills with prior knowledge. In contrast, models ≥7B never lose more than 0.2 pts on any transfer task. Training on GSM-Plus and testing on GSM8K yields similar behaviour: large gains on the GSM8K (+3.7 pts on average) and moder- ate gains on others. The symmetry suggests that DTE learns general reasoning heuristics (e.g. nu- meric decomposition, unit tracking) | https://arxiv.org/abs/2505.15734v1 |
rather than memorising dataset-specific patterns. 5 ModelFine-tuned on GSM8K Fine-tuned on GSM-Plus GSM-Plus ARC-Easy ARC-Challenge CommonsenseQA GSM8K ARC-Easy ARC-Challenge CommonsenseQA (∆) ( ∆) ( ∆) ( ∆) ( ∆) ( ∆) ( ∆) ( ∆) Qwen-2.5-1.5B +9.21 ↑ -1.60↓ +0.67↑ -2.23↓ +10.32 ↑ -1.52↓ +0.24↑ -2.31↓ Qwen-2.5-3B +3.79 ↑ +1.27↑ +0.83↑ +3.26↑ +1.36↑ +1.09↑ +0.60↑ +3.26↑ Qwen-2.5-7B +1.01 ↑ +1.73↑ +4.50↑ +3.40↑ +1.14↑ +1.69↑ +3.65↑ +3.32↑ Qwen-2.5-14B +1.67 ↑ +2.53↑ +3.42↑ +1.33↑ +0.53↑ +2.32↑ +4.01↑ -0.14↓ Llama-3.2-3B +6.71 ↑ +2.48↑ -1.11↓ +3.10↑ +3.80↑ +1.93↑ -3.92↓ +3.51↑ Llama-3.1-8B +8.13 ↑ +3.91↑ +6.74↑ +1.10↑ +5.15↑ +4.88↑ +7.84↑ +0.85↑ Table 2: Cross-domain generalisation of evolved students. Each cell shows the change in test accuracy ( ∆, in points) after one DTE pass, relative to the same model before evolution. The table is split by the dataset used for fine-tuning—GSM8K (left block) or GSM-Plus (right block)—and reports transfer to four unseen targets. Green numbers signal gains, rednumbers losses. Original ModelEvolution Round 1Evolution Round 260657075808590Accuracy (%) GSM8K Original ModelEvolution Round 1Evolution Round 24045505560657075Accuracy (%) GSM Plus Qwen-2.5-1.5B Qwen-2.5-3B Qwen-2.5-7B Llama-3B Llama-8B Figure 2: Accuracy vs. evolution round. 3) H OW FAR CAN A MODEL EVOLVE ?Results show that one evolution round captures nearly all of the available gains. Figure 2 reports ac- curacy over two evolution rounds for five models on GSM8K and GSM-Plus. Round 1 almost al- ways helps: the smallest model (Qwen-1.5B) jumps from 42.0 →55.9 on GSM-Plus and 62.8 →73.1 on GSM8K, while Llama-8B gains 10.6 and 5.1 points on the same datasets. The only counter- example is Qwen-7B, which drops 2.4 points on GSM8K despite improving 6.1 on GSM-Plus; upon manual inspection we see that its Round-1 traces over-emphasise shortcut heuristics that hurt eas- ier questions. In Round 2, we observe little im- provement and sometimes the performance even drops. Large models ( ≥7B) add at most +0.8 points, for Qwen-3B on GSM8K, and more often lose 0.4–1.4 points. The 1.5B model gives back 0.9 points on GSM8K and 2.8 on GSM-Plus, but still ends well above its starting point. Across all runs the mean forgetting Fgt2= max t<2(Acct−Acc 2) is 0.92 pts for models ≥7B and 1.6 pts for smaller ones, confirming that smaller models suffers from catastrophic forgetting. 4.3 Ablation Studies 1) E FFECTIVENESS OF THE RCR PROMPT INMAD. RCR prompting substantially boostperformance over original MAD prompting (Du et al., 2023). Figure 3 compares single-model in- ference, the original debate prompt (MAD@3), and our REFLECT –CRITIQUE –REFINE (RCR- MAD@3) prompt. Across eight diverse models the RCR prompting raises three-agent accuracy by an average of +1.9 pts on GSM8K, +3.7 pts on GSM-Plus, and +0.7 pts on ARC-Challenge. The gain scales with task difficulty: GSM-Plus, which contains harder adversarial questions, benefits the most (up to +7.9 pts for Qwen-1.5B and +6.1 pts for Qwen-7B). On ARC-Challenge improvements are smaller but still positive for 6/8 models. RCR prompting also significantly reduces sycophancy. Ithalves the mean sycophancy rate (from 0.28 to 0.13 on GSM-Plus) and narrows the verbosity gap by 43 %, indicating that agents now switch answers only when they can articulate a | https://arxiv.org/abs/2505.15734v1 |
new reasoning step. These observations confirm that RCR is a neces- sary pre-step for producing high-quality traces later utilized by the DTE training loop. 2) H OW MANY AGENTS ARE ENOUGH ?Re- sults shows that three agents MAD captures 85- 95 % of the maximum gains. Figure 4 sweeps the agents size from 1−7and reports trends on four benchmark. We observe three clear patterns here: 1)Beyond 3-agent the curve plateaus and even oscillates , suggesting the marginal informa- tion added by the 4th or 5th agent. 2)Small models benefit most from extra agents. Already strong single-agent (Qwen-14B) adds minimal improve- ment upon scaling up after three. 3)Harder tasks need (slightly) more agents. On GSM-Plus the optimum often shifts to four or five agents: Qwen- 7B reaches its peak accuracy (76.0%) at 7 agents, 1.04 pts above the three-agent setting. ARC-Easy, a much easier dataset, saturates at 2 agents for every model; extra debaters add noise rather than insight. 6 4 2 . 0 %6 2 . 8 %8 4 . 1 %8 6 . 9 %9 0 . 7 %8 1 . 7 %9 3 . 0 %9 4 . 5 % 4 5 . 6 %6 5 . 8 %8 7 . 0 %8 8 . 2 %9 1 . 7 %8 3 . 0 %9 3 . 0 %9 5 . 3 % 4 7 . 3 %7 5 . 1 %8 7 . 6 %8 8 . 4 %9 3 . 0 %8 4 . 2 %9 4 . 2 %9 4 . 6 % Qwen-0.5B Qwen-1.5B Qwen-3B Phi-3.5-mini-3.8B Qwen-7B Llama-8B GPT -4o-miniGPT -4o0204060801002 4 . 8 %4 2 . 0 %6 1 . 8 %6 3 . 4 %6 8 . 6 %5 5 . 6 %7 0 . 1 %7 2 . 6 % 2 8 . 1 %4 5 . 8 %6 6 . 8 %6 5 . 8 %6 9 . 7 %5 7 . 6 %7 5 . 7 %7 8 . 1 % 2 9 . 2 %5 3 . 7 %6 8 . 5 %7 0 . 2 %7 5 . 8 %6 2 . 7 %7 6 . 2 %7 8 . 8 % Qwen-0.5B Qwen-1.5B Qwen-3B Phi-3.5-mini-3.8B Qwen-7B Llama-8B GPT -4o-miniGPT -4o0204060801003 8 . 0 %6 9 . 2 %8 3 . 5 %8 4 . 7 %8 7 . 2 %7 7 . 7 %9 3 . 9 %9 6 . 8 % 3 8 . 6 %7 4 . 2 %8 5 . 4 %8 7 . 5 %9 1 . 5 %8 6 . 1 %9 4 . 3 %9 7 . 0 % 4 0 . 8 %7 2 . 5 %8 6 . 3 %8 9 . 1 %9 1 . 8 %8 7 . 5 %9 5 . 1 %9 7 . 2 % Qwen-0.5B Qwen-1.5B Qwen-3B Phi-3.5-mini-3.8B Qwen-7B Llama-8B GPT -4o-miniGPT -4o020406080100 Single Model Original MAD@3 RCR-MAD (Ours)@3Accuracy (%) Accuracy (%) Accuracy (%)GSM8K GSM-PLUS ARC-ChallengeFigure 3: Results (%) on: GSM8K, GSM-PLUS, and ARC-Challenge datasets. Performance is compared across three evaluation settings: single model | https://arxiv.org/abs/2505.15734v1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.