sampled token $t_1$ as a one-hot vector $t_1 \in \{0,1\}^{|V|}$ with $\sum_k [t_1]_k = 1$, its expectation under the multinomial distribution is the first concept token:

$$ ct_1 = \mathbb{E}[t_1] = \sum_{t_1} p(t_1 \mid x)\, t_1 = p(\cdot \mid x) \in \Delta^{|V|-1}, \quad (9) $$

which aligns with the concept token in Definition 1. By a linear approximation of $p(y \mid x, \cdot)$ around the mean, we obtain

$$ p(y \mid x) = \sum_{t_1} p(t_1 \mid x)\, p(y \mid x, t_1) \approx p\Big(y \,\Big|\, x, \sum_{t_1} p(t_1 \mid x)\, t_1\Big) = p(y \mid x, ct_1). \quad (10) $$

Thus, the full outer summation is replaced by a single evaluation at the concept token $ct_1$.

Recursive Approximation. We now apply the same linearization recursively. Given $ct_1$, the conditional probability expands as

$$ p(y \mid x, ct_1) = \sum_{t_2} p(t_2 \mid x, ct_1)\, p(y \mid x, ct_1, t_2) \approx p(y \mid x, ct_1, ct_2), \quad (11) $$

where $ct_2 = \sum_{t_2} p(t_2 \mid x, ct_1)\, t_2$. Repeating this process for all $m$ steps yields the continuous expansion:

$$ p(y \mid x) \approx p(y \mid x, ct_1, ct_2, \ldots, ct_m). \quad (12) $$

Comparison to Standard CoT. In contrast, discrete CoT replaces each summation $\sum_{t_j} p(t_j \mid \cdot)$ with a single sampled token, thereby discarding the probability mass of all other paths. Soft Thinking preserves the full probability distribution at each step through concept tokens, collapsing the exponential path summation in Eq. 7 into a single forward pass under a sequence of linear approximations.

4 Experiments & Results

4.1 Experiment Setup

Benchmarks. We conduct a comprehensive evaluation of our method on seven benchmark tasks: Math500 [31], AIME 2024 [32], GSM8K [33], and GPQA-Diamond [34] in the mathematics domain, as well as HumanEval [35], MBPP [36], and LiveCodeBench [37] in the programming domain. Detailed descriptions of these benchmarks are provided in Appendix A.2.

Models. We select three widely used open-source LLMs: QwQ-32B [13], DeepSeek-R1-Distill-Qwen-32B [38], and DeepSeek-R1-Distill-Llama-70B [38].
This diverse selection is designed to demonstrate the effectiveness and generalizability of the Soft Thinking approach across different model scales (32B and 70B), model architectures (Qwen and LLaMA), and training paradigms (QwQ-32B is trained with reinforcement learning, while the DeepSeek models are trained via supervised distillation).

Baseline Methods. We evaluate the performance of Soft Thinking against two representative baselines: Standard CoT Thinking, which performs explicit step-by-step reasoning with sampling, and Standard Greedy CoT Thinking, which uses greedy decoding at each step of the reasoning process.

Metrics. We use the Pass@1 metric to evaluate the accuracy of the model's generated answers. The formula for computing Pass@k is:

$$ \text{Pass@}k = 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}}, \quad (13) $$

where $n$ is the total number of samples (e.g., 16), $c$ is the number of correct samples, and $k$ is the number of samples selected (we set $k = 1$); therefore, $\text{Pass@}1 = c/n$. In addition, we evaluate the model's reasoning efficiency by reporting the number of tokens generated for correct solutions. Together, these two metrics allow us to comprehensively evaluate the trade-off between computational cost and performance across different methods.

Implementation Details. We reuse the model's existing embedding matrix as concept tokens, without any extra parameters or layers, and the Cold Stop controller monitors decoder entropy and emits an end-of-thinking marker when triggered. No model weight updates, architecture changes, or additional training procedures are required.
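The Pass@k estimator in Eq. 13 can be sketched in a few lines (a minimal illustration of the formula; the helper name `pass_at_k` is ours, not from the paper's codebase):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k = 1 - C(n-c, k) / C(n, k), where n is the total number
    of samples, c the number of correct samples, and k the number selected."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 the estimator reduces to the empirical accuracy c / n.
print(pass_at_k(16, 12, 1))  # 0.75
```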
Soft Thinking can be plugged into the CoT pipeline of any LLM with minimal engineering effort. We implement Soft Thinking on SGLang [39], enabling fast inference (see Appendix A.3 for implementation details). We evaluate our method on a server equipped with eight NVIDIA H100 80GB GPUs.

| Method | MATH 500 Acc ↑ | AIME 2024 Acc ↑ | GSM8K Acc ↑ | GPQA-D Acc ↑ | Avg. Acc ↑ | MATH 500 Len ↓ | AIME 2024 Len ↓ | GSM8K Len ↓ | GPQA-D Len ↓ | Avg. Len ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| QwQ-32B [13] | | | | | | | | | | |
| CoT Thinking | 97.66 | 76.88 | 96.67 | 64.17 | 83.84 | 4156 | 12080 | 1556 | 8095 | 6472 |
| CoT Thinking (Greedy) | 97.00 | 80.00 | 96.57 | 65.15 | 84.68 (↑0.84) | 3827 | 11086 | 1536 | 7417 | 5967 (↓7.8%) |
| Soft Thinking | 98.00 | 83.33 | 96.81 | 67.17 | 86.32 (↑2.48) | 3644 | 10627 | 1391 | 7213 | 5719 (↓11.6%) |
| DeepSeek-R1-Distill-Qwen-32B [38] | | | | | | | | | | |
| CoT Thinking | 94.50 | 72.08 | 95.61 | 63.10 | 81.32 | 3543 | 9347 | 875 | 6218 | 4995 |
| CoT Thinking (Greedy) | 93.00 | 63.33 | 95.30 | 59.09 | 77.68 (↓3.64) | 3651 | 8050 | 1048 | 8395 | 5286 (↑5.8%) |
| Soft Thinking | 95.00 | 76.66 | 95.83 | 64.64 | 83.03 (↑1.71) | 3373 | 6620 | 785 | 4722 | 3875 (↓22.4%) |
| DeepSeek-R1-Distill-Llama-70B [38] | | | | | | | | | | |
| CoT Thinking | 94.70 | 70.40 | 94.82 | 65.34 | 81.31 | 3141 | 8684 | 620 | 5500 | 4486 |
| CoT Thinking (Greedy) | 94.61 | 73.33 | 93.60 | 66.16 | 81.92 (↑0.61) | 2877 | 9457 | 606 | 4443 | 4345 (↓3.1%) |
| Soft Thinking | 94.80 | 73.33 | 94.90 | 66.66 | 82.42 (↑1.11) | 3021 | 6644 | 597 | 4470 | 3683 (↓17.9%) |

Table 1: Comparison of Soft Thinking and various baseline methods on accuracy and generation length across mathematical datasets.

| Method | HumanEval Acc ↑ | MBPP Acc ↑ | LiveCodeBench Acc ↑ | Avg. Acc ↑ | HumanEval Len ↓ | MBPP Len ↓ | LiveCodeBench Len ↓ | Avg. Len ↓ |
|---|---|---|---|---|---|---|---|---|
| QwQ-32B [13] | | | | | | | | |
| CoT Thinking | 97.63 | 97.49 | 62.00 | 85.70 | 2557 | 2154 | 9986 | 4899 |
| CoT Thinking (Greedy) | 95.73 | 96.50 | 57.35 | 83.19 (↓2.51) | 2396 | 2069 | 7034 | 3833 (↓21.8%) |
| Soft Thinking | 98.17 | 97.66 | 62.72 | 86.18 (↑0.48) | 2638 | 2157 | 7535 | 4110 (↓16.1%) |
| DeepSeek-R1-Distill-Qwen-32B [38] | | | | | | | | |
| CoT Thinking | 97.25 | 95.13 | 57.33 | 83.23 | 3095 | 2761 | 8376 | 4744 |
| CoT Thinking (Greedy) | 87.19 | 87.54 | 43.36 | 72.70 (↓10.53) | 2294 | 1703 | 4702 | 2900 (↓38.9%) |
| Soft Thinking | 97.56 | 95.33 | 59.50 | 84.13 (↑0.90) | 2713 | 2534 | 6255 | 3834 (↓19.1%) |
| DeepSeek-R1-Distill-Llama-70B [38] | | | | | | | | |
| CoT Thinking | 97.71 | 94.77 | 56.94 | 83.14 | 2711 | 2386 | 8319 | 4472 |
| CoT Thinking (Greedy) | 92.07 | 91.82 | 48.02 | 77.30 (↓5.84) | 2192 | 1979 | 5438 | 3203 (↓28.3%) |
| Soft Thinking | 98.17 | 94.94 | 58.42 | 83.84 (↑0.70) | 2498 | 2214 | 6512 | 3741 (↓16.3%) |

Table 2: Comparison of Soft Thinking and various baseline methods on accuracy and generation length across three coding datasets.

4.2 Hyper-parameter Settings

For all experiments, the maximum generation length was set to 32,768, the temperature to 0.6, top-k to 30, and top-p to 0.95, unless specified otherwise. The Standard CoT baseline was evaluated using 16 samples per problem to calculate Pass@1 accuracy, whereas the greedy CoT approach used a temperature of 0 with a single sample. For Soft Thinking, the concept token was determined using the top-n tokens, where n ∈ {5, 10, 15, 20, 30}, along with an entropy threshold τ chosen from {0.01, 0.05, 0.1, 0.2} and a length threshold k selected from {128, 256, 512, 1024}. All other settings were kept consistent. We find that n = 15 yields the best performance for QwQ-32B [13], while n = 10 is optimal for the DeepSeek-R1 models [38]. Results are reported based on the best-performing combinations of τ and k.

4.3 Results and Analysis

We present the quantitative evaluation results of
Soft Thinking and other baseline methods on mathematical and coding datasets in Table 1 and Table 2, respectively.

Improved Pass@1 Accuracy. Our proposed Soft Thinking consistently enhances Pass@1 accuracy across all evaluated math and coding benchmarks, demonstrating its broad effectiveness and generalization ability. For instance, on mathematical reasoning tasks, the QwQ-32B model's average Pass@1 improves from 83.84% (CoT Thinking) to 86.32% (Soft Thinking), a notable gain of 2.48 percentage points. On the challenging AIME 2024 dataset, the improvement reaches 6.45 percentage points. Similarly, for DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Llama-70B, Pass@1 increases by 1.71 and 1.11 percentage points, respectively. On coding benchmarks, Soft Thinking also achieves consistent improvements: QwQ-32B sees a 0.48-point increase in average Pass@1, while DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Llama-70B improve by 0.90 and 0.70 points, respectively. These results demonstrate that Soft Thinking provides robust accuracy gains across both math and code domains.

Question: 43 * 34 = ?
CoT Thinking (157 tokens): "First, I recognize that multiplying 43 by 34 can be simplified by breaking down the numbers into more manageable parts. I'll decompose 34 into 30 and 4. Next, I'll multiply 43 by 30. Since 30 is three times 10, I can first multiply 43 by 3 to get 129, and then add a zero to account for the 10, resulting in 1,290. Then, I'll multiply 43 by 4, which equals 172. Finally, I'll add the two results together: 1,290 plus 172 gives me the final answer of 1,462."
Soft Thinking (96 tokens): "To solve 43 multiplied by 34, I'll start by breaking down the numbers into their place values. First, I'll multiply 43 by 4, which gives me 172. Next, I'll multiply 43 by 30, resulting in 1,290. Finally, I'll add these two products together: 172 plus 1,290 equals 1,462."

Figure 3: A comparison between standard CoT and Soft Thinking on a multiplication problem. We select the token with the highest probability at each step of Soft Thinking for readability and interpretability. The full distribution is visualized in Figure 4. Red text denotes repetitive, useless words.

Token Efficiency. A key advantage of Soft Thinking is its significant reduction in generation length, leading to improved token efficiency. On mathematical reasoning benchmarks, Soft Thinking reduces token usage for QwQ-32B by 11.6%, DeepSeek-R1-Distill-Qwen-32B by 22.4%, and DeepSeek-R1-Distill-Llama-70B by 17.9% compared to standard CoT Thinking. Similar trends are observed on coding tasks: for QwQ-32B, DeepSeek-R1-Distill-Qwen-32B, and DeepSeek-R1-Distill-Llama-70B, token usage is reduced by 16.1%, 19.1%, and 16.3%, respectively. This substantial reduction in token usage not only translates to lower computational and inference costs, but also indicates that the model can reach correct answers through more concise and efficient reasoning. Such token efficiency is particularly valuable for real-world applications, where cost, speed, and scalability are crucial.

Analysis. Our experimental results demonstrate that Soft Thinking achieves simultaneous improvements in both reasoning performance and token efficiency across a diverse set of mathematical and coding benchmarks. This dual gain highlights a key advantage of our approach: by leveraging concept tokens that encode richer semantic information at each reasoning step, the model is able to represent and process more abstract or composite ideas within a single token. As a result, fewer reasoning steps are required to reach the
correct solution, directly translating to reduced token usage. These results also stand in stark contrast to the greedy decoding baseline, which, while also reducing token usage, suffers from substantial drops in accuracy, particularly on complex code generation tasks, where the loss of diversity in reasoning paths leads to premature convergence on suboptimal solutions. These findings suggest that the superior efficiency of Soft Thinking is not simply a result of more aggressive pruning or shortcutting, but rather reflects a fundamental enhancement of the model's reasoning process. By maintaining the full probability distribution over possible next tokens at each step, our method allows a "soft" aggregation of multiple reasoning trajectories, effectively broadening the model's exploration space without incurring the combinatorial explosion of explicit enumeration. This enables the model to make more informed and confident decisions earlier in the reasoning chain, reducing unnecessary detours and redundant steps. Overall, the results provide strong evidence that Soft Thinking breaks the traditional trade-off between performance and efficiency in large language model reasoning. Instead of sacrificing one for the other, our approach inherently boosts both, offering a more powerful and concise reasoning framework that is readily applicable to a wide range of tasks and model architectures.

4.4 Qualitative Results

Visualization of Shortened Examples. Figure 3 demonstrates the comparison between Standard CoT and Soft Thinking. We select the token with the highest probability at each step of Soft Thinking for visualization; the result remains highly readable and interpretable. While both methods arrive at the correct answer (1,462), Soft Thinking produces a significantly more concise explanation (96 tokens vs. 157 tokens). This demonstrates Soft Thinking's ability to preserve logical structure while improving token efficiency.
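The "soft" aggregation described above can be made concrete with a toy example (ours, with an assumed 5-token vocabulary and 4-dimensional embeddings): discrete decoding keeps a single row of the embedding matrix, while a concept token feeds the probability-weighted mixture of all rows forward.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 5, 4                                 # toy vocabulary size / embedding dim
E = rng.normal(size=(V, D))                 # embedding matrix
p = np.array([0.5, 0.3, 0.1, 0.07, 0.03])  # next-token distribution p(. | x)

# Discrete CoT: commit to one token and keep only its embedding row.
hard_input = E[np.argmax(p)]

# Soft Thinking: the concept token is the full distribution p, and the
# next input embedding is the probability-weighted sum p @ E.
soft_input = p @ E

assert hard_input.shape == soft_input.shape == (D,)
```

The hard input discards the half of the probability mass assigned to the other four tokens, whereas the soft input retains it; this is the mechanism behind the broader exploration discussed above.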
Figure 4: An example illustrating the probability distribution of our proposed Soft Thinking method. At each step, the top-k token candidates and their probabilities are shown. Red boxes indicate the selected tokens that form the final generated sequence for readability and interpretability. (The step-by-step token-probability grid is omitted from this text rendering.)

| Method | AIME 2024 Acc ↑ | LiveCodeBench Acc ↑ | AIME 2024 Len (All) ↓ | LiveCodeBench Len (All) ↓ | AIME 2024 Len (Correct) ↓ | LiveCodeBench Len (Correct) ↓ |
|---|---|---|---|---|---|---|
| COCONUT-TF | 0.0 | 0.0 | 32,768 | 32,768 | – | – |
| Average Embedding | 6.66 | 7.49 | 30,802 | 30,474 | 6,556 | 2,141 |
| Soft Thinking w/o Cold Stop | 73.33 | 56.98 | 12,991 | 13,705 | 9,457 | 6,877 |
| Soft Thinking w/ Cold Stop | 83.33 | 62.72 | 11,445 | 12,537 | 10,627 | 7,535 |

Table 3: Ablation study of Soft Thinking with QwQ-32B on AIME 2024 and LiveCodeBench. COCONUT-TF represents training-free COCONUT [27].

Visualization of Embedding Weights. In Figure 4, we present the token probability distributions at each intermediate reasoning step of Soft Thinking. For demonstration, we highlight the top three tokens. During exploratory reasoning phases, such as steps 1–3, 13–14, and 18–20, the token distribution is relatively uniform, reflecting the presence of multiple viable paths. In contrast, during precise calculations, the token distribution becomes nearly one-hot, indicating that textual elements facilitate path exploration, while numerical components handle exact computations. Notably, at steps 36–37, the model evaluates whether to multiply by 4 or 30, ultimately assigning a higher probability to 4. As a result, at step 42, the model selects multiplication by 4.
This demonstrates how Soft Thinking integrates path exploration across consecutive concept tokens, thereby enhancing both reasoning flexibility and depth.

4.5 Ablation Study

To comprehensively assess the effectiveness of Soft Thinking, we conducted ablation studies on different strategies for constructing concept tokens and on the impact of the Cold Stop mechanism.

Different strategies for concept tokens. Specifically, we compared (1) the training-free COCONUT approach [27], which directly feeds the previous hidden state as the next input embedding; (2) a simple average-embedding strategy that takes the unweighted mean of the top-n token embeddings (we use n = 5 for the ablation); and (3) our Soft Thinking, which computes a probability-weighted sum over token embeddings. As shown in Table 3, the training-free COCONUT fails entirely, producing no correct answers and always reaching the maximum generation length. The average-embedding method performs marginally better, yielding a small number of correct solutions but still suffering from extremely long outputs. In contrast, our Soft Thinking substantially improves both accuracy and token efficiency.

Impact of Cold Stop. We further analyze Cold Stop by comparing Soft Thinking with and without it. Without Cold Stop, the model is prone to generation collapse due to out-of-distribution (OOD) issues, since the models have never been trained on concept tokens. It will begin to
repeat until it hits the maximum generation length. This OOD-induced collapse significantly increases the average generation length across all problems. However, for problems that are solved correctly (typically those requiring shorter reasoning chains), the average length remains relatively low, as these cases are less likely to trigger collapse. When Cold Stop is activated, generation collapse is effectively mitigated, and the model avoids unnecessary exploration along overconfident paths, resulting in a significant reduction in average generation length across all problems. Interestingly, because Cold Stop allows the model to correctly solve more challenging problems that require longer reasoning chains, the average length for correct solutions increases. Nevertheless, as demonstrated in Tables 1 and 2, Soft Thinking with Cold Stop not only solves more problems than standard CoT but also achieves greater overall efficiency, confirming the effectiveness of our approach.

5 Conclusion

In this work, we present Soft Thinking, a novel, training-free framework that enables large language models to reason in a continuous concept space by leveraging probabilistically weighted concept tokens instead of traditional discrete tokens. By aggregating information across the entire vocabulary at each reasoning step, our method allows the model to implicitly explore multiple reasoning paths in parallel, leading to both higher accuracy and greater token efficiency. Experiments on mathematical and coding benchmarks demonstrate that Soft Thinking consistently improves Pass@1 accuracy and reduces generation length, all without any additional training or architectural modifications. Qualitative analyses further show that the reasoning process remains interpretable and concise. Future work may explore integrating training-based approaches to adapt the concept tokens, with the goal of further improving performance and stability when faced with out-of-distribution inputs.
6 Acknowledgments

We would like to express our sincere gratitude to Yue Fan, Saaket Agashe, Liliang Ren, Hao Cheng, Baolin Peng, and Yiping Wang for their valuable feedback and constructive discussions. We thank Orby AI for generously providing the computational resources. Additionally, we appreciate the SGLang Team's assistance during development.

References

[1] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[2] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. arXiv preprint arXiv:2210.02406, 2022.
[3] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.
[4] Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. Towards revealing the mystery behind chain of thought: a theoretical perspective. Advances in Neural Information Processing Systems, 36:70757–70798, 2023.
[5] R Quian Quiroga, Leila Reddy, Gabriel Kreiman, Christof Koch, and Itzhak Fried. Invariant visual representation by single neurons in the human brain. Nature, 435(7045):1102–1107, 2005.
[6] Evelina Fedorenko and Rosemary Varley. Language and thought are not the same thing: evidence from neuroimaging and neurological patients. Annals of the New York Academy of Sciences, 1369(1):132–153,
2016.
[7] AA Ivanova, Z Mineroff, V Zimmerer, N Kanwisher, R Varley, and E Fedorenko. The language network is recruited but not required for non-verbal semantic processing. bioRxiv, 696484, 2019.
[8] Yael Benn, Anna A Ivanova, Oliver Clark, Zachary Mineroff, Chloe Seikus, Jack Santos Silva, Rosemary Varley, and Evelina Fedorenko. The language network is not engaged in object categorization. Cerebral Cortex, 33(19):10380–10400, 2023.
[9] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.
[10] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[11] Lifan Yuan, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Fangyuan Zou, Xingyi Cheng, Heng Ji, Zhiyuan Liu, and Maosong Sun. Revisiting out-of-distribution robustness in nlp: Benchmark, analysis, and llms evaluations. arXiv preprint arXiv:2306.04618, 2023.
[12] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[13] Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025.
[14] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023.
[15] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
[16] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426–9439, 2024.
[17] Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, and Roberta Raileanu. Teaching large language models to reason with reinforcement learning. arXiv preprint arXiv:2403.04642, 2024.
[18] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[19] Fangxu Yu, Lai Jiang, Haoqiang Kang, Shibo Hao, and Lianhui Qin. Flow of reasoning: Efficient training of llm policy with divergent thinking. arXiv preprint arXiv:2406.05673, 2024.
[20] Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, and Yiming Yang. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models. arXiv preprint arXiv:2408.00724, 2024.
[21] Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, and Sebastian Riedel. Do large language models latently perform multi-hop reasoning? arXiv preprint arXiv:2402.16837, 2024.
[22] Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva, and Amir Globerson. Hopping too late: Exploring the limitations of large language models on multi-hop queries. arXiv
preprint arXiv:2406.12775, 2024.
[23] Yuval Shalev, Amir Feder, and Ariel Goldstein. Distributional reasoning in llms: Parallel reasoning processes in multi-hop reasoning. arXiv preprint arXiv:2406.13858, 2024.
[24] Xinyi Wang, Lucas Caccia, Oleksiy Ostapenko, Xingdi Yuan, William Yang Wang, and Alessandro Sordoni. Guiding language model reasoning with planning tokens. arXiv preprint arXiv:2310.05707, 2023.
[25] Loïc Barrault, Paul-Ambroise Duquenne, Maha Elbayad, Artyom Kozhevnikov, Belen Alastruey, Pierre Andrews, Mariano Coria, Guillaume Couairon, Marta R Costa-jussà, David Dale, et al. Large concept models: Language modeling in a sentence representation space. arXiv preprint arXiv:2412.08821, 2024.
[26] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. Improving text embeddings with large language models. arXiv preprint arXiv:2401.00368, 2023.
[27] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769, 2024.
[28] Yige Xu, Xu Guo, Zhiwei Zeng, and Chunyan Miao. Softcot: Soft chain-of-thought for efficient reasoning with llms. arXiv preprint arXiv:2502.12134, 2025.
[29] Mengting Hu, Zhen Zhang, Shiwan Zhao, Minlie Huang, and Bingzhe Wu. Uncertainty in natural language processing: Sources, quantification, and applications. arXiv preprint arXiv:2306.04459, 2023.
[30] Alfréd Rényi. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, volume 4, pages 547–562. University of California Press, 1961.
[31] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
[32] AMC.
American invitational mathematics examination. https://artofproblemsolving.com/wiki/index.php/American_Invitational_Mathematics_Examination, 2025.
[33] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[34] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.
[35] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 2021.
[36] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large
language models. arXiv preprint arXiv:2108.07732, 2021.
[37] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.
[38] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[39] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Livia Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al. Sglang: Efficient execution of structured language model programs. Advances in Neural Information Processing Systems, 37:62557–62583, 2024.
[40] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.

A Appendix

A.1 Limitation

While Soft Thinking demonstrates significant improvements in both reasoning accuracy and token efficiency without requiring any additional training, its training-free nature also introduces certain limitations. Specifically, since current large language models are trained exclusively on discrete token sequences, they have never encountered concept tokens, the probability-weighted mixtures of token embeddings, during pre-training or fine-tuning. As a result, feeding such continuous concept tokens into the model during inference places it in an out-of-distribution (OOD) regime. This can lead to instability or generation collapse, especially when the reasoning chain becomes long or the input distribution diverges from the model's training data.
Although our Cold Stop mechanism helps mitigate these issues by terminating reasoning when the model is sufficiently confident, it does not fundamentally resolve the OOD problem. Future work should explore training strategies that explicitly expose the model to concept tokens, thereby aligning its internal representations with the continuous concept space and improving robustness and generalization under soft thinking paradigms.

A.2 Benchmarks

The evaluation covers four mathematical benchmark tasks and three programming benchmark tasks.

• Mathematical Benchmarks: Math500 [40] is a diverse subset of 500 problems selected from the MATH dataset [31], covering seven mathematical disciplines. AIME 2024 [32] is drawn from the American Invitational Mathematics Examination; its challenging problems serve as a rigorous benchmark for both accuracy and token efficiency. GSM8K [33] is a benchmark comprising 1,319 grade-school-level math word problems designed to assess multi-step arithmetic reasoning. GPQA-Diamond [34] is a subset of GPQA focusing on high-difficulty problems requiring deep, multi-step reasoning.

• Coding Benchmarks: HumanEval [35] is a widely used benchmark for evaluating functional correctness in code generation. Each problem consists of a Python function signature and a natural language description, with correctness measured by executing unit tests. MBPP [36] contains introductory-level coding tasks paired with simple specifications and measures a model's ability to understand and execute basic programming logic. LiveCodeBench [37] is a dynamic, contamination-free code generation benchmark. For our study, we select 279 problems released between August 2024 and January 2025 to ensure evaluation on unseen
tasks.

A.3 Implementation of Soft Thinking on SGLang

In this appendix, we describe the engineering modifications made to the SGLang inference engine (v0.4.6.post1) to support our proposed Soft Thinking method. We highlight the core code changes and the affected files, and provide high-level code snippets to clarify the new reasoning flow.

A.3.1 Overview of Modifications

Our implementation introduces a new inference mode, soft thinking, in which intermediate reasoning steps use "concept tokens" (i.e., full token probability distributions) rather than discrete token ids. This required changes to the input/output interface, sampling, embedding, and state management in SGLang. The main modifications are summarized as follows:

• Configuration: New flags and parameters to enable soft thinking and control its behavior.
• Sampler: Modified to output top-k probability distributions (concept tokens) instead of a single sampled token.
• Embedding Layer: Added support for probability-weighted interpolation of token embeddings.
• Forward Pipeline: Adapted to accept and propagate concept tokens through the model.
• Cold Stop: Entropy-based early stopping logic for intermediate reasoning steps.

A.3.2 Key Files and Logic Changes

1. model_config.py & server_args.py
• Purpose: Add configuration options for soft thinking.
• Key additions:

```python
enable_soft_thinking: bool
max_topk: int
# Command-line flags: --enable-soft-thinking, --max-topk, --think-end-str
```

2. sampler.py (Sampling Logic)
• Purpose: Output concept tokens (top-k probabilities and indices) instead of only discrete token ids.
• Key changes:

```python
if enable_soft_thinking:
    # Compute top-k probabilities and indices
    topk_probs, topk_indices = torch.topk(probs, k=max_topk, dim=-1)
    # Normalize
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    logits_output.topk_probs = topk_probs
    logits_output.topk_indices = topk_indices
    # For next token id, use argmax or sample from topk as needed
    batch_next_token_ids = topk_indices[:, 0]
else:
    # Standard discrete sampling
    batch_next_token_ids = torch.argmax(probs, -1)

# Entropy calculation for Cold Stop
entropy = -torch.sum(probs * torch.log(probs.clamp(min=1e-12)), dim=-1)
logits_output.entropy = entropy
```

3. vocab_parallel_embedding.py (Embedding Layer)
• Purpose: Support probability-weighted embedding computation.
• Key changes:

```python
def weighted_forward(self, topk_probs, topk_indices):
    # Compute weighted sum of embeddings
    topk_embeddings = self.quant_method.embedding(self, topk_indices.long())  # [B, K, D]
    # Normalize probabilities
    topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)
    new_embedding = torch.sum(topk_probs.unsqueeze(-1) * topk_embeddings, dim=1)
    return new_embedding
```

4. models/llama.py, models/qwen2.py (Model Forward Pass)
• Purpose: Accept and process concept tokens as input.
• Key changes:

```python
# In model.forward:
if forward_batch.topk_probs is not None and forward_batch.topk_indices is not None:
    if self.tp_size > 1:
        hidden_states = self.embed_tokens.weighted_forward_tp(
            forward_batch.topk_probs, forward_batch.topk_indices)
    else:
        hidden_states = self.embed_tokens.weighted_forward(
            forward_batch.topk_probs, forward_batch.topk_indices)
elif input_embeds is None:
    hidden_states = self.embed_tokens(input_ids)
```

5. schedule_batch.py, scheduler.py, scheduler_output_processor_mixin.py
• Purpose: State management and output tracking for soft thinking.
• Key changes:

```python
# Pseudocode for Cold Stop
if entropy < entropy_threshold:
    low_entropy_steps += 1
else:
    low_entropy_steps = 0
if low_entropy_steps >= length_threshold:
    # Insert end-of-thinking token, switch to answer mode
    self.output_ids[-1] = self.sampling_params.think_end_str_id
```

6. sampling_params.py, sampling_batch_info.py
• Purpose: Add soft-thinking-specific parameters and per-batch flags.
• Key changes:

```python
# Parameters for after-thinking sampling, entropy thresholds, etc.
early_stopping_entropy_threshold: float
early_stopping_length_threshold: int
soft_thinking_mode: Optional[torch.Tensor]
```

A.3.3 High-Level Soft Thinking Inference Flow

1. Initialization: If enable_soft_thinking is set, the model enters soft thinking mode for reasoning steps.
2. At Each Reasoning Step:
• The sampler outputs a concept token: top-k probabilities and indices (not just a single token id).
• The embedding layer computes the next input embedding as a weighted sum over token embeddings, using these probabilities.
• The model forward pass consumes this weighted embedding.
3. Cold Stop (Early Termination):
• At each step, compute the entropy of the concept token.
• If the entropy stays below a threshold for several consecutive steps, insert the end-of-thinking token to terminate reasoning.
4. Answer Generation: After reasoning, the model switches back to standard discrete decoding for the answer.

The above changes enable SGLang to support soft thinking as described in our paper, allowing for continuous, distributional reasoning and entropy-based early stopping, all with minimal overhead to the standard inference pipeline.
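The per-step flow above can be condensed into a toy, self-contained sketch (numpy stand-ins for the engine tensors; the function name and shapes mirror the snippets above but are illustrative, not SGLang's actual API): build a normalized top-k concept token from the output distribution, form the probability-weighted input embedding, and report the entropy used by Cold Stop.

```python
import numpy as np

def soft_thinking_step(logits, embedding, k):
    """One soft-thinking step: top-k concept token -> weighted embedding.
    Returns the next input embedding and the full-distribution entropy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                                   # softmax over the vocabulary
    topk = np.argsort(p)[-k:]                      # top-k token ids
    w = p[topk] / p[topk].sum()                    # renormalized top-k probs
    concept_embed = w @ embedding[topk]            # [D] weighted embedding
    entropy = -np.sum(p * np.log(np.clip(p, 1e-12, None)))
    return concept_embed, entropy

rng = np.random.default_rng(0)
vocab, dim = 50, 8
embedding = rng.standard_normal((vocab, dim))
embed, ent = soft_thinking_step(rng.standard_normal(vocab), embedding, k=5)
```

Cold Stop then simply counts consecutive steps whose entropy stays below a threshold before emitting the end-of-thinking token.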
arXiv:2505.15781v1 [cs.CL] 21 May 2025

dKV-Cache: The Cache for Diffusion Language Models

Xinyin Ma, Runpeng Yu, Gongfan Fang, Xinchao Wang∗
National University of Singapore
maxinyin@u.nus.edu, xinchao@nus.edu.sg

Abstract

Diffusion Language Models (DLMs) have been seen as a promising competitor to autoregressive language models (ARs). However, diffusion language models have long been constrained by slow inference. A core challenge is that their non-autoregressive architecture and bidirectional attention preclude the key–value cache that accelerates decoding. We address this bottleneck by proposing a KV-cache-like mechanism, delayed KV-Cache, for the denoising process of DLMs. Our approach is motivated by the observation that different tokens have distinct representation dynamics throughout the diffusion process. Accordingly, we propose a delayed and conditioned caching strategy for key and value states. We design two complementary variants to cache keys and values step by step: (1) dKV-Cache-Decode, which provides almost lossless acceleration and even improves performance on long sequences, suggesting that existing DLMs may under-utilise contextual information during inference; (2) dKV-Cache-Greedy, which caches aggressively with a reduced cache lifespan, achieving higher speed-ups with quadratic time complexity at the cost of some performance degradation. dKV-Cache ultimately achieves a 2-10× speedup in inference, largely narrowing the gap between ARs and DLMs. We evaluate dKV-Cache on several benchmarks, delivering acceleration across general language understanding, mathematical, and code-generation tasks. Experiments demonstrate that caching can also be used in DLMs, even in a training-free manner with current DLMs.
The code is available at https://github.com/horseee/dKV-Cache

∗Corresponding author
Preprint. Under review.

1 Introduction

Diffusion language models (DLMs) [3,25] have recently emerged as an alternative paradigm for text generation, inspired by the success of diffusion models in continuous domains such as images [15,48] and videos [28,5]. Unlike autoregressive transformers (ARs) [14,43,12], which generate text left-to-right, one token at a time, a diffusion language model produces text by gradually refining a sequence of initially noisy or masked tokens into a coherent output [24,29]. Recent advances in diffusion-based language models have underscored their versatility across both continuous [21] and discrete formulations [40,3]. In particular, discrete diffusion models have shown competitive performance in language modeling [37], attracting growing interest for their potential to achieve faster decoding than traditional ARs while maintaining comparable generation quality.

One notable advantage of diffusion language models is their potential to decode an arbitrary number of tokens in parallel [49], whereas ARs require one forward pass per generated token. This parallel decoding paradigm offers the potential for improved wall-clock inference time and higher throughput. However, despite their theoretical efficiency, current DLMs remain substantially slower than ARs in practice. This inefficiency is primarily due to two factors: the incompatibility of DLMs with the KV-Cache mechanism [33,13], and the large number of network evaluations required for denoising [17]. Specifically, generating a sequence of length L typically entails L denoising steps, each involving a full bidirectional attention pass, leading to a cubic time complexity of O(L^3). In contrast, AR models use the KV-Cache to reduce per-step complexity to O(L^2), achieving much faster inference overall. In this paper, we tackle
the challenge of integrating the KV-Cache mechanism into diffusion language models, operating without autoregressive or semi-autoregressive structures [2]. We identify two core reasons that prevent the direct use of KV-Cache in DLMs. (1) KV-Cache hinges on the assumption that the key and value states of previously generated tokens remain fixed during subsequent decoding steps. This property is preserved in autoregressive models through the use of a causal attention mask, which restricts each token to attend only to earlier positions [11]. However, DLMs adopt a bidirectional attention mechanism, similar to non-autoregressive models [20], allowing every token to attend to all others in the sequence and making it nontrivial to reuse previously cached keys and values. (2) KV-Cache presupposes a fixed, left-to-right sequential decoding order, where the position of the next token is deterministic. This assumption enables ARs to compute QKV states selectively, only at the current decoding position. However, DLMs break this paradigm by supporting flexible generation orders. At each denoising step, any token position may be selected for update, which is a key advantage of DLMs for tasks involving long-range dependencies and holistic planning [51].

To solve this problem, we propose the delayed KV-Cache (dKV-Cache), a KV-Cache for diffusion language models. The core design of dKV-Cache centers on how to enable caching of key and value states across denoising steps in diffusion language models. A key insight motivating this design is that, although DLMs employ bidirectional attention, which is intuitively incompatible with caching, the key and value representations are not fundamentally unreusable; rather, they require delayed and conditioned reuse. In particular, we observe that the evolution of key and value states is strongly influenced by whether a token has been decoded.
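The first incompatibility can be made concrete with a toy single-layer attention sketch (numpy, with identity Q/K/V projections as a simplifying assumption): under a causal mask, editing a later token leaves the outputs of earlier positions, and hence the K/V states a deeper layer would derive from them, unchanged, whereas bidirectional attention propagates the edit to every position.

```python
import numpy as np

def self_attn(H, causal):
    """Single attention layer with identity projections (toy assumption)."""
    d = H.shape[1]
    S = H @ H.T / np.sqrt(d)
    if causal:
        # Causal mask: position i attends only to positions <= i
        S = np.where(np.tril(np.ones_like(S)) > 0, S, -1e9)
    W = np.exp(S - S.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)
    return W @ H

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
H2 = H.copy()
H2[3] += 1.0  # perturb only the *last* token

# Earlier positions' outputs are unaffected under a causal mask (cacheable)...
causal_same = np.allclose(self_attn(H, True)[:3], self_attn(H2, True)[:3])
# ...but change under bidirectional attention, invalidating any cached K/V.
bidir_same = np.allclose(self_attn(H, False)[:3], self_attn(H2, False)[:3])
```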
This behavior motivates a delayed caching strategy, wherein only the key and value states of decoded tokens are cached, in contrast to ARs, where caching occurs immediately when a token enters the input. We further propose a one-step delayed caching mechanism, in which the caching of key/value states is postponed by one additional denoising step. This intentional delay substantially boosts performance and also reduces the memory overhead of KV storage. Beyond delayed caching, we propose a more aggressive decoding strategy that reduces the computational complexity of diffusion language models from O(L^3) to O(L^2) by restricting per-step recomputation to a compact subset of tokens: the current to-be-decoded tokens, the delayed tokens from the previous step, and a local window of tokens.

Our experiments demonstrate that the proposed method achieves a 2-10× speedup on existing 7B-scale diffusion language models, including LLaDA [37] and Dream [52], across a broad range of benchmarks covering general language understanding, code generation, and mathematical problem solving. These efficiency gains come with only minor, often negligible, performance degradation, highlighting the practical value of our approach for accelerating DLM inference without training. Furthermore, we demonstrate that dKV-Cache is robust to variations in prefill length, output length, and the number of sampling steps.

Contributions. (1) We propose the first KV-Cache mechanism for diffusion language models by leveraging the evolving dynamics of token representations, introducing a delayed caching strategy compatible with bidirectional
attention. (2) We propose two practical variants of our method: dKV-Cache-Decode, which enables long-term cache reuse, and dKV-Cache-Greedy, which reduces the per-step time complexity for faster decoding. (3) Extensive experiments on DLMs demonstrate that our approach achieves a 2-10× inference speedup with minimal or negligible performance loss.

2 Related Work

2.1 Diffusion Language Models

Diffusion models [24,41] model data generation as the inversion of a forward noising process and have demonstrated impressive generation quality in image [38], video [5], and audio generation [16]. For diffusion models on language generation, [25,3] extend DDPM to categorical data and define the transition matrices for corruption and denoising. [29] introduces continuous-time diffusion over the continuous word-embedding space, and [21] brings generation quality on par with GPT-2 via a simplex defined over the vocabulary. Besides diffusion in continuous space [47,53] for discrete distributions, another line of work pursues discrete language diffusion models. [22] trains BERT to learn the reverse process of a discrete diffusion process with an absorbing state in D3PM. SEDD [33] introduces score entropy, a novel loss extending score matching to discrete data, and MDLM [40] shows that simple masked discrete diffusion is competitive with all previous kinds of DLMs. Block diffusion [2] extends current non-autoregressive [20] discrete language diffusion models into semi-autoregressive ones [21,55], making it feasible to generate sequences of arbitrary length. [37,19] scale masked diffusion language models to billions of parameters, achieving performance comparable to leading autoregressive LLMs.

2.2 Cache in Generative Models

Cache [45] is a small, fast memory that stores frequently accessed data, reducing the time the CPU needs to fetch data from slower memory.
Caching was first introduced to deep neural networks in transformers [44], where the KV-Cache stores previous tokens' key and value tensors. The KV-Cache has become a fundamental technique in transformers, and several improved techniques have been proposed [18,32,26] to reduce its memory consumption for long-context generation. Beyond this, caches have also been explored in diffusion models for image generation [36,46]. These works leverage the temporal similarity of high-level features [35,9] and attention maps [54,31] to achieve faster diffusion inference. Caching has also been explored in 3D generative modeling [50] and video generation [56,34]. However, caching for diffusion language models is less explored, especially a KV-Cache for diffusion language models. [2] explores the KV-Cache in semi-autoregressive diffusion language models; this requires accounting for the KV-Cache during training, doubling the forward computation in training, and its form is still constrained to the autoregressive formulation. [40] also considers caching, but under the strict condition that no new tokens have been decoded.

3 Methods

3.1 Preliminary

We primarily focus on continuous-time discrete language models, with particular attention to masked diffusion language models [40,37], which have shown strong scalability to billion-parameter scales with high generation quality. Consider a text sequence with L tokens, $x_0^{1:L}$, sampled from the target
distribution $p_{\text{data}}(x_0)$. Each token is represented by a one-hot vector with V categories, where V is the vocabulary size. The forward process adds noise to the original sequence $x_0$, which, in the discrete diffusion models considered here, takes the form of randomly masking some of the tokens. The masking process is controlled by a transition matrix $U_t$, where each element $[U_t]_{ij}$ represents the probability of transitioning from token i to token j at step t [3]. The denoising process is discretized into T steps, and we define the continuous timestep $c(t) = t/T$, where $t \in \{0, 1, \ldots, T\}$. We use timestep for the continuous temporal space of the denoising process and step for the discrete space. The forward diffusion can be modelled as:

$$q\big(x_{c(t)} \mid x_0\big) = \mathrm{Cat}\big(x_{c(t)};\, p = x_0 \overline{U}_t\big), \quad \text{where } \overline{U}_t = \prod_{i=1}^{t} U_i,$$

from which we obtain the corrupted $x_0$. The absorbing form of the transition matrix is used here, where each token either remains the same or transitions to the special [MASK] token with probability $\beta_t$. The cumulative transition matrix $\overline{U}_t$, as defined in masked diffusion models, can be formulated as:

$$[\overline{U}_t]_{ij} = \begin{cases} 1 & \text{if } i = j = \text{[MASK]}, \\ \bar{\alpha}_t & \text{if } i = j \neq \text{[MASK]}, \\ 1 - \bar{\alpha}_t & \text{if } j = \text{[MASK]},\, i \neq \text{[MASK]}, \end{cases} \quad \text{with } \bar{\alpha}_t = \prod_{i=1}^{t} (1 - \beta_i),$$

where $\bar{\alpha}_t$ decreases linearly to 0 as t approaches T. The reverse process is learned by a model with parameters $\theta$, where $p_\theta(x_{c(t-1)} \mid x_{c(t)})$ is trained to approximate $q(x_{c(t-1)} \mid x_{c(t)}, x_0)$; the neural network is optimized to predict the clean tokens $x_0$ given $x_{c(t)}$.

Figure 1: Illustration of dKV-Cache. At step t, no prior cache is activated even though the token $D_{t-1}$ has been decoded. K and V are delayed to the next step, where they are reordered and reused.

Sampling Process of DLMs.
Given the noisy sequence $x_1^{1:L}$, which consists only of masked tokens, the denoising model $p_\theta(x_{c(t-1)} \mid x_{c(t)})$ is called at each timestep, first predicting $x_0$ and then remasking part of the sequence via $q(x_{c(t-1)} \mid x_0, x_{c(t)})$. Unmasked tokens remain unchanged during the later denoising process. Several strategies are used in the remasking stage, e.g., random remasking [3], keeping the most confident tokens [7], or selecting the top-k positions with the largest margin between the two most probable values [27].

Formulation of KV-Cache. Since diffusion language models still use the transformer architecture (with or without GQA [1]; this does not affect our method), we first recall the formulation of the KV-Cache in ARs. In a transformer decoder, each layer projects the current hidden states $h_t$ into a query-key-value triplet $(Q_t, K_t, V_t)$ via learned projections $W_Q$, $W_K$, and $W_V$. At step t, only the hidden state of the t-th token, $h_t^{[t]}$, is calculated. The recursive KV-Cache update appends the new key-value pair to the running KV-Cache buffer:

$$z_t = \mathrm{softmax}\!\left(\frac{Q_t^{[t]}\, \big(K_t^{[1:t]}\big)^\top}{\sqrt{d_k}}\right) V_t^{[1:t]}, \quad \text{with } \begin{cases} K_t^{[1:t]} = \mathrm{concat}\big(K_{t-1}^{[1:t-1]},\, K_t^{[t]}\big), \\ V_t^{[1:t]} = \mathrm{concat}\big(V_{t-1}^{[1:t-1]},\, V_t^{[t]}\big), \end{cases} \quad (1)$$

where $z_t$ is the output of the attention head at step t and the dot products are scaled down by $\sqrt{d_k}$.

3.2 Why KV-Cache Cannot be Used in DLMs?

The effectiveness of the KV-Cache can be attributed to the reuse of previously computed K and V states, and
the targeted computation only for the current decoding token. We conclude that the standard KV-Cache is fundamentally incompatible with diffusion language models for two reasons:

• Timestep-variant key and value states. In the autoregressive setting, every time step shares a single, causally growing set of key and value tensors: $K_m^{[1:t-1]}$ and $V_m^{[1:t-1]}$ are the same at every step m from t-1 onwards, thanks to the causal attention mask. By contrast, DLMs employ a bidirectional attention mask; consequently, the key and value representations that each token can attend to at timestep m differ from those at timestep n. Put differently, $K_m^{[1:t-1]} \neq K_n^{[1:t-1]}$ and $V_m^{[1:t-1]} \neq V_n^{[1:t-1]}$ if $n \neq m$. The bidirectional attention introduced by diffusion language models therefore breaks the global reuse assumption that underpins the conventional KV-Cache.

• Non-sequential decoding order. Generation in DLMs does not follow a strictly left-to-right order. Instead, the model dynamically fills masked positions based on probabilities computed at each denoising step. As a result, the positions of decoded tokens are only revealed after the model forward pass, and the subsequent update may target any position in the sequence, rather than progressing sequentially. This uncertainty prevents us from pre-determining which token i will require the computation of its hidden states $h^{[i]}$ and its $Q^{[i]}$, $K^{[i]}$, and $V^{[i]}$.

Figure 2: (a) A heatmap illustrating the pairwise similarities among the key states across different timesteps, using LLaDA with L = 128, T = 128 and a block size of 64. We compute the Euclidean distance between consecutive steps t and t+1 to analyze the dynamics of intermediate representations. (b) The average distance measured before and after the decoding of each token. (c) The top-2 steps exhibiting the largest and smallest changes in the key and value states for each token, in decoded order.

Representation Dynamics of Tokens in Diffusion Sampling. We investigate whether K and V can be reused in DLMs. We focus on the dynamics of K and V for each token; the results are shown in Figure 2. Interestingly, we observe several noteworthy patterns in the dynamics of the QKV states: (1) Despite step-to-step differences, the key and value embeddings K and V exhibit consistently high similarity across timesteps, as shown in Figure 2(a). (2) Once a token is decoded, its representation becomes relatively stable in subsequent steps, whereas the representations of still-masked tokens continue to fluctuate significantly. This phenomenon is evident in Figure 2(b), where QKV fluctuations are more pronounced before decoding than after. (3) The most substantial changes in K and V occur at the decoding step of each token and then in
the early stages of the denoising process. This is reflected by prominent changes along the diagonal in Figure 2(c), where the i-th decoding step decodes the i-th token. These observations provide key insights into the temporal structure of discrete diffusion models and motivate the design of our KV-Cache mechanism for diffusion language modeling.

3.3 Delayed KV-Cache for Masked Diffusion Language Models

We first present a more general, non-sequential KV-Cache formulation that replaces the contiguous slice $K^{[1:t-1]}$ used in Eq. 1 with an arbitrary-order index set $S_t \subseteq \mathcal{I} = \{1, \ldots, L\}$, and changes the next token position from the fixed t to $D_t$. The cached keys and values gathered from previous steps are now $K_t^{S_t}$, which retrieves cached states at the positions specified by the indices in $S_t$:

$$z_t = \mathrm{softmax}\!\left(\frac{Q_t^{D_t}\, \big(K_t^{S_t \cup \{D_t\}}\big)^\top}{\sqrt{d_k}}\right) V_t^{S_t \cup \{D_t\}}, \quad \text{with } \begin{cases} K_t^{S_t \cup \{D_t\}} = \mathrm{concat\_reorder}\big(K_{t-1}^{S_t},\, K_t^{D_t}\big), \\ V_t^{S_t \cup \{D_t\}} = \mathrm{concat\_reorder}\big(V_{t-1}^{S_t},\, V_t^{D_t}\big), \end{cases} \quad (2)$$

where $D_t$ is the token denoised at step t. If random sampling is used for diffusion, the decoding order of the sequence is known before any inference; we use the notation $D_t$ here, and the later formulations will not depend on knowing the decoding order. The operator concat_reorder is proposed to make the indexing and gathering of K and V efficient. In the appendix, we explain this operator, how it works with RoPE [42], and how it accelerates inference. We first extend the formulation in Eq. 2 by generalizing the query input $Q_t^{D_t}$ from single-token decoding to decoding multiple, arbitrary tokens. Specifically, at decoding step t, we construct a dynamic set $M_t$, denoting the subset of tokens that are not yet finalized during generation. With $h_t^{M_t}$ calculated, we also obtain the corresponding $Q_t^{M_t}$, $K_t^{M_t}$, and $V_t^{M_t}$.
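The gathering step in Eq. 2 can be sketched as follows (an illustrative numpy version; the paper's actual concat_reorder, described in its appendix, is an optimized variant that also accounts for RoPE): cached K/V rows from step t-1 are merged with the freshly computed rows and returned in sequence-position order.

```python
import numpy as np

def concat_reorder(cached_kv, cached_pos, fresh_kv, fresh_pos):
    """Merge cached K (or V) rows, indexed by sequence position, with the
    freshly computed rows for this step; return rows sorted by position."""
    pos = np.concatenate([cached_pos, fresh_pos])
    rows = np.vstack([cached_kv, fresh_kv])
    order = np.argsort(pos)
    return rows[order], pos[order]

# Cached positions S_t = {0, 3}; freshly computed position D_t = 1.
cached = np.array([[0.0, 0.0], [3.0, 3.0]])
fresh = np.array([[1.0, 1.0]])
K, pos = concat_reorder(cached, np.array([0, 3]), fresh, np.array([1]))
```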
We delay the caching of each token: not to the time it is appended to the input, but to the step at which it is decoded:

$$\mathrm{concat\_reorder}\Big(\underbrace{K_{t-1}^{S_t}}_{\text{cache from } t-1},\, \underbrace{K_t^{D_t}}_{\text{calculate at } t}\Big) \;\Longrightarrow\; \mathrm{concat\_reorder}\Big(\underbrace{K_{t-1}^{\mathcal{I} \setminus M_t}}_{\text{cache from } t-1},\, \underbrace{K_t^{M_t}}_{\text{calculate at } t}\Big) \quad (3)$$

where $\mathcal{I}$ denotes the set of all tokens involved in the denoising process. This design reflects our core observation from the previous section: only the decoded tokens are eligible for caching, while the remaining masked tokens must be re-encoded at each step. Moreover, this method removes the need to predefine or predict the denoising order, as $M_t$ is explicitly known at each step. Cached keys and values corresponding to $\mathcal{I} \setminus M_t$ are reused across steps, while non-finalized positions are recomputed at each step to ensure correctness under bidirectional attention masking.

One-step Delayed Caching. As shown in our analysis in Figure 2(c), the most significant change in K and V occurs exactly at the step where a token transitions from [MASK] to its decoded form. As formulated above, $K_t^{D_t}$ would be cached, since $D_t$ is no longer in the masked set $M_t$. However, $K_t^{D_t}$ can differ substantially from $K_{t+1}^{D_t}$, and prematurely reusing it leads to severe performance degradation. To address this, we
introduce one-step delayed caching. At timestep t, we use the masking state from the previous step, $M_{t-1}$, to determine which tokens are cacheable. The method, named dKV-Cache-Decode, is finally formalized as:

$$z_t = \mathrm{softmax}\!\left(\frac{Q_t^{M_{t-1}}\, \big(K_t^{\mathcal{I}}\big)^\top}{\sqrt{d_k}}\right) V_t^{\mathcal{I}}, \quad \text{with } \begin{cases} K_t^{\mathcal{I}} = \mathrm{concat\_reorder}\big(K_{t-1}^{\mathcal{I} \setminus M_{t-1}},\, K_t^{M_{t-1}}\big), \\ V_t^{\mathcal{I}} = \mathrm{concat\_reorder}\big(V_{t-1}^{\mathcal{I} \setminus M_{t-1}},\, V_t^{M_{t-1}}\big). \end{cases} \quad (4)$$

While this slightly reduces efficiency, we find it to be critical for maintaining accuracy and stability in the proposed dKV-Cache mechanism for diffusion language models.

Cache Refreshing Mechanism. While it is possible to cache each token once it is decoded and reuse the cache throughout the denoising process, in practice, when the sequence is sufficiently long, occasionally recomputing the cache incurs only a small computational overhead. To maintain consistency and improve correctness during decoding, we add a cache refreshing mechanism: every N steps, the stored cache is discarded and refreshed. The computation reverts to the normal, cache-free calculation, i.e., the cached index set $\mathcal{I} \setminus M_{t-1}$ in Eq. 4 is replaced by the empty set $\emptyset$ for the refresh step.

dKV-Cache-Prefill and dKV-Cache-PD. The cached set $\mathcal{I} \setminus M_{t-1}$ can be further divided into two subsets: tokens decoded during generation and always-decoded tokens, i.e., prefill tokens. Our experiments show that prefill tokens primarily attend to each other, indicating limited influence from later tokens. Based on this, we adopt a special strategy, dKV-Cache-Prefill, that caches prefill tokens without ever refreshing them. This design aligns with the disaggregation of the prefill and decoding phases [57] for serving DLMs. Building on this, we introduce another variant, dKV-Cache-PD, which intermittently refreshes only the newly decoded tokens, while keeping the keys and values of prefill tokens without any recomputation.

3.4 dKV-Cache-Greedy: Greedy Formulation of dKV-Cache

However, the above method still incurs O(L^3) complexity, which is less efficient than the O(L^2) complexity of ARs.
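The dKV-Cache-Decode bookkeeping from Section 3.3 reduces to a small helper (a pure-Python sketch with our own toy naming, not the paper's implementation): at step t, only positions outside $M_{t-1}$ may reuse their cached K/V, and on refresh steps the cache is dropped entirely.

```python
def reusable_positions(masked_history, t, refresh_every):
    """Positions whose cached K/V may be reused at step t under one-step
    delayed caching: tokens already decoded as of step t-1 (the complement
    of M_{t-1}). On refresh steps everything is recomputed."""
    if t == 0 or t % refresh_every == 0:
        return set()  # refresh: recompute K/V for the whole sequence
    masked_prev = masked_history[t - 1]  # M_{t-1}: True = still masked
    return {i for i, masked in enumerate(masked_prev) if not masked}

# Toy trace over a length-4 sequence (True = masked at that step).
history = [
    [True, True, True, True],    # t=0: all masked
    [True, False, True, True],   # t=1: position 1 decoded
    [True, False, False, True],  # t=2: position 2 decoded
]
reuse_t2 = reusable_positions(history, 2, refresh_every=8)
```

With a confidence-based remasking schedule, $M_t$ shrinks over time, so the reusable set grows as decoding proceeds.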
This is primarily because $M_t$ initially consists of the entire sequence of L tokens and only narrows to a single token at the end. To improve efficiency, it is essential to decouple $|M_t|$ at each step from the sequence length L. Instead of the minimally refreshed caches proposed above, we adopt a more relaxed cache mechanism that refreshes more often, so as to mitigate the performance degradation caused by stale K and V. Building on our earlier observation that token representations undergo significant changes at their decoding step, we define $M_t$ to include only three components: the token decoded at the current step, $D_t$; the token from the previous step, $D_{t-1}$ (motivated by one-step delayed caching); and a local window $W_t$. For this local window, we include the token itself and its neighboring tokens within a fixed-size window, $W_t = \{x_i \mid i \in [D_t - \frac{w}{2},\, D_t + \frac{w}{2}]\}$, where w is the window size. We evaluated local windows centered at both $D_t$ and $D_{t-1}$, and found that the latter yields better performance. Since the window size $|W_t|$ is fixed (set to at most 6 in our experiments), this strategy introduces some additional computation but retains an overall time complexity of O(L^2).

4 Experiments

4.1 Experimental Setup

We tested our method under the original evaluation benchmarks of LLaDA [37] and Dream [52]. Datasets: We conduct comprehensive evaluations across a diverse set of benchmarks that
assess

Table 1: Benchmark results on LLaDA-8B-Instruct (zero-shot evaluation; detailed configuration in the Appendix). The cache refresh step is 8 for dKV-Cache-Decode and 2 for dKV-Cache-Greedy; the window size of dKV-Cache-Greedy is given in brackets. Each cell reports accuracy / tokens-per-second (speedup).

| Remasking | Base (random) | Few-Steps (random) | dKV-Cache-Greedy (random) | dKV-Cache-Greedy w/ Window (random) | Base (confidence) | Half-Steps (confidence) | dKV-Cache-Decode (confidence) |
|---|---|---|---|---|---|---|---|
| MMLU | 51.79 / 30.20 | 43.19 / 47.49 (1.67×) | 45.77 / 50.56 (1.57×) | 47.70 (4) / 45.72 (1.51×) | 51.11 / 28.27 | 51.11 / 55.80 (1.97×) | 51.00 / 66.52 (2.35×) |
| GSM8K | 72.25 / 15.16 | 65.58 / 24.08 (1.59×) | 67.93 / 25.47 (1.68×) | 68.23 (4) / 24.76 (1.63×) | 77.56 / 14.31 | 77.91 / 28.71 (2.00×) | 78.85 / 27.50 (1.92×) |
| Math500 | 27.4 / 12.00 | 21.8 / 19.36 (1.61×) | 26.0 / 20.34 (1.70×) | 27.0 (4) / 19.86 (1.66×) | 36.6 / 11.53 | 34.2 / 23.10 (2.00×) | 36.8 / 24.46 (2.12×) |
| GPQA | 27.46 / 11.40 | 24.78 / 18.59 (1.63×) | 26.79 / 19.27 (1.69×) | 28.35 (4) / 18.26 (1.60×) | 30.80 / 11.86 | 27.68 / 23.88 (2.01×) | 28.13 / 28.73 (2.42×) |
| HumanEval | 19.88 / 7.50 | 15.61 / 12.50 (1.67×) | 15.13 / 12.31 (1.64×) | 15.37 (4) / 12.13 (1.62×) | 39.63 / 7.08 | 33.54 / 14.18 (2.00×) | 46.34 / 13.76 (1.83×) |
| MBPP | 21.4 / 7.51 | 15.6 / 12.97 (1.73×) | 17.8 / 12.55 (1.67×) | 20.4 (2) / 12.44 (1.66×) | 40.4 / 7.50 | 33.8 / 15.01 (2.00×) | 40.4 / 13.93 (1.86×) |

Table 2: Benchmark results on Dream-Base-7B (few-shot ICL; configuration in the Appendix). The cache refresh interval for dKV-Cache-Decode and dKV-Cache-PD is 4. Each cell reports accuracy / tokens-per-second (speedup).
| | Dream-7B | Half-Steps | dKV-Cache-Decode | dKV-Cache-Prefill | dKV-Cache-PD |
|---|---|---|---|---|---|
| GSM8K (8-shot), L = 256, T = 256 | 76.88 / 15.1 (1.00×) | 68.08 / 30.3 (2.00×) | 76.57 / 31.6 (2.09×) | 75.66 / 53.6 (3.55×) | 74.07 / 50.2 (3.32×) |
| GSM8K (8-shot), L = 256, T = 128 | 68.81 / 30.3 (2.01×) | 46.63 / 60.5 (4.01×) | 65.35 / 62.31 (4.13×) | 65.96 / 107.4 (7.11×) | 63.31 / 99.5 (6.6×) |
| MBPP (3-shot), L = 512, T = 512 | 55.8 / 5.4 (1.00×) | 45.2 / 10.8 (2.00×) | 53.4 / 10.4 (1.93×) | 55.2 / 13.6 (2.52×) | 51.0 / 14.5 (2.69×) |
| MBPP (3-shot), L = 512, T = 256 | 45.2 / 10.8 (2.00×) | 26.2 / 21.5 (3.98×) | 43.4 / 20.6 (3.81×) | 41.8 / 27.1 (5.02×) | 42.6 / 28.9 (5.35×) |
| HumanEval (0-shot), L = 512, T = 512 | 57.93 / 10.3 (1.00×) | 37.20 / 20.5 (1.99×) | 57.32 / 15.5 (1.50×) | 56.10 / 14.4 (1.40×) | 59.76 / 17.4 (1.69×) |
| HumanEval (0-shot), L = 512, T = 256 | 37.20 / 20.5 (1.99×) | 18.29 / 40.9 (3.97×) | 31.70 / 31.1 (3.02×) | 33.54 / 28.7 (2.79×) | 31.70 / 34.8 (3.38×) |

general language understanding [23], mathematical reasoning [10,30,39], and code generation [8,4]. Since multiple-choice evaluation based on token likelihood does not require more than one inference step, we instead ask the models to generate the answer letter and match the generated answer against the ground truth. Evaluation: We follow the prompts in simple-evals² for LLaDA, making the model reason step by step. On Dream, we follow Dream's evaluation setting and conduct few-shot in-context learning [6]³. Other implementation details are listed in the Appendix. Baseline: We choose few-step sampling as our baseline (50% of steps for Half-Steps and 62.5% of steps for Few-Steps), selecting the number of steps such that the baseline's sampling speed is comparable to or slower than ours, and showing that our method achieves better performance. Metric: We report accuracy for performance and tokens/s for speed. Speed was tested on an A6000 (for LLaDA) and an H20 (for Dream). In addition, we report one more metric, the cache ratio, calculated as $\frac{1}{T} \sum_{i=1}^{T} |T_i^{\mathrm{cache}}| \,/\, |T_i|$
, where T is the total number of decoding steps, $T_i$ denotes the set of tokens processed at step i (normally the whole sequence), and $T_i^{\mathrm{cache}}$ is the subset of tokens whose KV pairs are reused from cache at step i, which is $|M_i|$.

4.2 Performance and Speed with dKV-Cache

We begin by addressing the central question: what performance trade-offs and speedups does dKV-Cache introduce? For LLaDA, we evaluate two variants, dKV-Cache-Greedy and dKV-Cache-Decode, against baselines that accelerate generation by halving or reducing the number of denoising steps. As shown in Table 1, dKV-Cache-Greedy consistently outperforms few-step

²https://github.com/openai/simple-evals
³https://github.com/EleutherAI/lm-evaluation-harness

Table 3: Results on long-prefill settings. dKV-Cache-Decode uses a refresh step of 4; dKV-Cache-Prefill never refreshes. Each cell reports accuracy / tokens-per-second (speedup).

| | Dream-7B | Half-Steps | dKV-Cache-Decode | dKV-Cache-Prefill |
|---|---|---|---|---|
| MMLU (5-shot), L = 8, T = 8 | 72.19 / 9.1 (1.00×) | 72.21 / 18.1 (1.99×) | 71.74 / 25.2 (2.77×) | 71.76 / 57.6 (6.33×) |
| MMLU (5-shot), L = 8, T = 4 | 72.21 / 18.1 (1.99×) | 71.63 / 36.1 (3.97×) | 71.69 / 49.2 (5.41×) | 71.71 / 67.3 (7.40×) |
| GPQA (5-shot), L = 128, T = 128 | 36.83 / 7.4 (1.00×) | 35.49 / 14.7 (1.99×) | 35.71 / 18.2 (2.46×) | 35.27 / 75.40 (10.19×) |
| GPQA (5-shot), L = 128, T = 64 | 35.49 / 14.7 (1.99×) | 35.27 / 29.4 (3.97×) | 34.15 / 36.8 (4.97×) | 35.27 / 139.9 (18.91×) |

Figure 3: Effect of one-step delayed caching.

baselines across most benchmarks, except for HumanEval. Notably, integrating a lightweight cache window yields substantial gains with negligible computational overhead. dKV-Cache-Decode achieves near-lossless performance with a high cache ratio and only a few refresh steps. Among all strategies, dKV-Cache-Decode delivers the best trade-off, outperforming both dKV-Cache-Greedy and the baselines.
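The cache-ratio metric from Section 4.1 is straightforward to compute from per-step counts (a pure-Python sketch over a hypothetical toy trace, not numbers from the paper):

```python
def cache_ratio(cached_counts, processed_counts):
    """Cache ratio (1/T) * sum_i |T_i^cache| / |T_i|: the average fraction
    of processed tokens per step whose KV pairs were served from cache."""
    steps = len(processed_counts)
    return sum(c / p for c, p in zip(cached_counts, processed_counts)) / steps

# Toy trace: 4 decoding steps over a length-8 sequence, with the cached
# set growing as more tokens are decoded.
ratio = cache_ratio(cached_counts=[0, 2, 4, 6], processed_counts=[8, 8, 8, 8])
```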
Crucially, it maintains accuracy nearly indistinguishable from the full model, demonstrating that KV-Cache can be applied in diffusion language models without sacrificing performance. Because dKV-Cache-Greedy relies on a predefined (e.g., random) decoding order, which causes a marked accuracy drop relative to low-confidence remasking, we concentrate our experiments on dKV-Cache-Decode. We provide several case studies in the Appendix comparing the generated text before and after applying dKV-Cache.

The results on Dream are shown in Table 2 and Table 3. There is a small difference in the position of the decoded tokens, since Dream is adapted from autoregressive models and therefore shifts the token positions; we provide a detailed illustration of this in the appendix. Due to the use of few-shot in-context learning, the model requires a long input context, leading to significant overhead from repeatedly encoding those tokens. In this setting, dKV-Cache-Prefill provides substantial speed improvements; for instance, on MMLU and GPQA it achieves up to a 10× acceleration. Across all tested datasets, we observe that dKV-Cache largely outperforms the baseline under different prefilling and decoding lengths. We further evaluate the impact of applying dKV-Cache to few-step diffusion models and observe consistent trends: as the number of diffusion steps increases, our method yields even larger gains over the baseline. For example, on GSM8K with a decoding length of 256, the baseline model with 64 steps achieves 46.63 Pass@1 with a 4× speedup, whereas dKV-Cache attains a 6.6× speedup while significantly improving performance to 63.31 (+16.68).

4.3 Analysis

The one-step delay in dKV-Cache. Figure 3
illustrates the impact of applying a one-step delay to the cache mechanism. Without the delay, performance remains acceptable at low cache ratios but degrades rapidly as the cache ratio increases, ultimately collapsing to near-zero accuracy. In contrast, introducing a one-step delay stabilizes generation quality, enabling the model to maintain nearly lossless performance even under high cache ratios.

Performance with different decoding lengths, denoising steps, and refresh intervals. Figure 4 presents the results of applying dKV-Cache-Decode and dKV-Cache-Greedy under various configurations, including different decoding lengths, refresh intervals, sampling steps, and window sizes. Overall, our method consistently achieves performance comparable to the original model without dKV-Cache, effectively pushing the Pareto front forward. We observe the following findings: (1) Decoding robustness: the performance impact of our method is largely insensitive to the number of decoding steps, indicating strong robustness across varying generation lengths and sampling steps. (2) Enhanced long-form generation: in tasks involving longer outputs (e.g., L = 512), our method outperforms the baseline, even improving generation quality from 80.97% to 83.13% on GSM8K and from 39.63% to 46.34% on HumanEval. These results imply a potential inefficiency in how bidirectional attention aggregates contextual signals, pointing to redundancy or underutilization in long-context modeling. (3) Effectiveness with infrequent refreshing: even with infrequent refreshes (e.g., every 16 steps), the performance degradation remains small.
(4) Effectiveness of local windows: incorporating a local window notably enhances the performance of dKV-Cache-Greedy with minimal additional computational cost.

Figure 4: dKV-Cache-Decode (left) and dKV-Cache-Greedy (right) on GSM8K with different settings: decoding length L, sampling steps S, refresh intervals, and window size.

Figure 5: Speed and memory for dKV-Cache-Decode and dKV-Cache-Greedy. The number (2 or 8) means that the cache is refreshed every n steps.

Memory and speed analysis. We analyze the speed and memory footprint of dKV-Cache-Decode and dKV-Cache-Greedy across varying decoding and prefill lengths.
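The delayed caching and periodic refresh analyzed above can be made concrete with a small simulation of which positions are reusable at each step. This is an illustrative sketch, not the paper's implementation; the schedule, the function names, and the convention that a refresh step recomputes every token are our assumptions:

```python
def reused_positions(schedule, step, delay=1, refresh=None):
    """Positions whose K/V can be reused from cache at `step`.

    schedule: list of sets; schedule[i] holds positions decoded at step i.
    delay=1 models the one-step delay: a token decoded at step i is
    recomputed once more at step i+1 and only reused from step i+2 on.
    refresh: if set, every `refresh`-th step recomputes all tokens.
    """
    if refresh is not None and step % refresh == 0:
        return set()  # refresh step: discard the cache, recompute everything
    cached = set()
    for decoded in schedule[:max(0, step - delay)]:
        cached |= decoded
    return cached

# A 4-step decoding schedule over a 4-token sequence.
schedule = [{0}, {3}, {1}, {2}]
```

With `delay=1` the reusable set always lags one step behind what a naive cache would reuse, trading a little recomputation for the stability shown in Figure 3; larger refresh intervals keep the cache ratio high at the cost of staler entries.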
Our method achieves substantial inference acceleration, ranging from 1.75× to 3.3×, while introducing only a modest increase in memory usage. Notably, dKV-Cache-Greedy shows greater potential for accelerating inference, whereas the achievable speedup of dKV-Cache-Decode is capped. In our main experiments, we observed that setting the refresh interval larger than 2 for dKV-Cache-Greedy
may degrade performance. However, under the same refresh interval, dKV-Cache-Greedy consistently achieves higher speedups than dKV-Cache-Decode, highlighting its potential advantage when the refresh frequency is relaxed.

5 Conclusions and Limitations

In this work, we explore the feasibility of incorporating the caching mechanism into diffusion language models. Specifically, we propose a delayed KV-Cache for DLMs, motivated by our empirical observations of the dynamics of token representations throughout the diffusion process. Building on this insight, we introduce two cache variants, dKV-Cache-Decode and dKV-Cache-Greedy, each designed to leverage delayed caching for improved compatibility with diffusion-based generation. Our analysis reveals that introducing a delay is crucial for the cache to function effectively in this setting. Extensive experiments demonstrate that our approach substantially accelerates inference while maintaining model performance.

One primary limitation of this work lies in its focus on algorithmic design in isolation. While our proposed method introduces an effective caching mechanism from a purely algorithmic perspective, diffusion language models also exhibit substantial room for improvement at the system level. We believe that future research integrating algorithmic innovations with system-level optimizations, such as memory management, parallelism, and hardware-aware execution, could unlock further efficiency gains and performance improvements for DLMs.

References

[1] Joshua Ainslie, James Lee-Thorp, Michiel De Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245, 2023.

[2] Marianne Arriola, Aaron Gokaslan, Justin T Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, Subham Sekhar Sahoo, and Volodymyr Kuleshov. Block diffusion: Interpolating between autoregressive and diffusion language models.
arXiv preprint arXiv:2503.09573, 2025.

[3] Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne Van Den Berg. Structured denoising diffusion models in discrete state-spaces. Advances in neural information processing systems, 34:17981–17993, 2021.

[4] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

[5] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024.

[6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.

[7] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. MaskGIT: Masked generative image transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11315–11325, 2022.

[8] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji,
Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 2021.

[9] Pengtao Chen, Mingzhu Shen, Peng Ye, Jianjian Cao, Chongjun Tu, Christos-Savvas Bouganis, Yiren Zhao, and Tao Chen. ∆-DiT: A training-free acceleration method tailored for diffusion transformers, 2024.

[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[11] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. ArXiv, abs/1901.02860, 2019.

[12] DeepSeek-AI. DeepSeek-V3 technical report. ArXiv, abs/2412.19437, 2024.

[13] Justin Deschenaux and Caglar Gulcehre. Promises, outlooks and challenges of diffusion language modeling. arXiv preprint arXiv:2406.11473, 2024.

[14] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony S. Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, and Ava Spataru et al. The Llama 3 herd of models. ArXiv, abs/2407.21783, 2024.

[15] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning, 2024.
[16] Zach Evans, CJ Carr, Josiah Taylor, Scott H. Hawley, and Jordi Pons. Fast timing-conditioned latent audio diffusion. ArXiv, abs/2402.04825, 2024.

[17] Guhao Feng, Yihan Geng, Jian Guan, Wei Wu, Liwei Wang, and Di He. Theoretical benefit and limitation of diffusion language model. ArXiv, abs/2502.09622, 2025.

[18] Suyu Ge, Yunan Zhang, Liyuan Liu, Minjia Zhang, Jiawei Han, and Jianfeng Gao. Model tells you what to discard: Adaptive KV cache compression for LLMs. In The Twelfth International Conference on Learning Representations, 2024.

[19] Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, et al. Scaling diffusion language models via adaptation from autoregressive models. arXiv preprint arXiv:2410.17891, 2024.

[20] Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281, 2017.

[21] Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov. SSD-LM: Semi-autoregressive simplex-based diffusion language model for text generation and modular control. arXiv preprint arXiv:2210.17432, 2022.

[22] Zhengfu He, Tianxiang Sun, Kuanning Wang, Xuanjing Huang, and Xipeng Qiu. DiffusionBERT: Improving generative masked language models with diffusion models. arXiv preprint arXiv:2211.15029, 2022.

[23] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR), 2021.

[24] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information
processing systems, 33:6840–6851, 2020.

[25] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in neural information processing systems, 34:12454–12465, 2021.

[26] Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W Mahoney, Sophia Shao, Kurt Keutzer, and Amir Gholami. KVQuant: Towards 10 million context length LLM inference with KV cache quantization. Advances in Neural Information Processing Systems, 37:1270–1303, 2024.

[27] Jaeyeon Kim, Kulin Shah, Vasilis Kontonis, Sham Kakade, and Sitan Chen. Train for the worst, plan for the best: Understanding token ordering in masked diffusions. arXiv preprint arXiv:2502.06768, 2025.

[28] Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, et al. HunyuanVideo: A systematic framework for large video generative models. arXiv preprint arXiv:2412.03603, 2024.

[29] Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. Diffusion-LM improves controllable text generation. Advances in neural information processing systems, 35:4328–4343, 2022.

[30] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.

[31] Haozhe Liu, Wentian Zhang, Jinheng Xie, Francesco Faccio, Mengmeng Xu, Tao Xiang, Mike Zheng Shou, Juan-Manuel Perez-Rua, and Jürgen Schmidhuber. Faster diffusion via temporal attention decomposition. arXiv preprint arXiv:2404.02747, 2024.

[32] Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, and Anshumali Shrivastava. Scissorhands: Exploiting the persistence of importance hypothesis for LLM KV cache compression at test time. Advances in Neural Information Processing Systems, 36:52342–52364, 2023.
[33] Aaron Lou, Chenlin Meng, and Stefano Ermon. Discrete diffusion modeling by estimating the ratios of the data distribution. arXiv preprint arXiv:2310.16834, 2023.

[34] Zhengyao Lv, Chenyang Si, Junhao Song, Zhenyu Yang, Yu Qiao, Ziwei Liu, and Kwan-Yee K Wong. FasterCache: Training-free video diffusion model acceleration with high quality. arXiv preprint arXiv:2410.19355, 2024.

[35] Xinyin Ma, Gongfan Fang, Michael Bi Mi, and Xinchao Wang. Learning-to-cache: Accelerating diffusion transformer via layer caching. Advances in Neural Information Processing Systems, 37:133282–133304, 2024.

[36] Xinyin Ma, Gongfan Fang, and Xinchao Wang. DeepCache: Accelerating diffusion models for free. arXiv preprint arXiv:2312.00858, 2023.

[37] Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. Large language diffusion models. arXiv preprint arXiv:2502.09992, 2025.

[38] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023.

[39] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling, 2024.

[40] Subham Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin Chiu, Alexander Rush, and Volodymyr Kuleshov. Simple and effective masked diffusion language models. Advances in Neural Information Processing Systems, 37:130136–130184, 2024.

[41] Yang Song and Stefano Ermon. Generative modeling by
estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.

[42] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.

[43] Qwen Team. Qwen2.5 technical report. ArXiv, abs/2412.15115, 2024.

[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.

[45] Maurice V Wilkes. Slave memories and dynamic storage allocation. IEEE Transactions on Electronic Computers, (2):270–271, 2006.

[46] Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, et al. Cache me if you can: Accelerating diffusion models through block caching. arXiv preprint arXiv:2312.03209, 2023.

[47] Tong Wu, Zhihao Fan, Xiao Liu, Hai-Tao Zheng, Yeyun Gong, Jian Jiao, Juntao Li, Jian Guo, Nan Duan, Weizhu Chen, et al. AR-Diffusion: Auto-regressive diffusion model for text generation. Advances in Neural Information Processing Systems, 36:39957–39974, 2023.

[48] Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, and Song Han. SANA: Efficient high-resolution text-to-image synthesis with linear diffusion transformers. In The Thirteenth International Conference on Learning Representations, 2025.

[49] Minkai Xu, Tomas Geffner, Karsten Kreis, Weili Nie, Yilun Xu, Jure Leskovec, Stefano Ermon, and Arash Vahdat. Energy-based diffusion language models for text generation. In The Thirteenth International Conference on Learning Representations, 2025.

[50] Xingyi Yang and Xinchao Wang. Hash3D: Training-free acceleration for 3D generation. arXiv preprint arXiv:2404.06091, 2024.
[51] Jiacheng Ye, Zhenyu Wu, Jiahui Gao, Zhiyong Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. Implicit search via discrete diffusion: A study on chess. ArXiv, abs/2502.19805, 2025.

[52] Jiacheng Ye, Zhihui Xie, Lin Zheng, Jiahui Gao, Zirui Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. Dream 7B, 2025.

[53] Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, and Mingxuan Wang. DiNoiSer: Diffused conditional sequence learning by manipulating noises. arXiv preprint arXiv:2302.10025, 2023.

[54] Zhihang Yuan, Hanling Zhang, Lu Pu, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, and Yu Wang. DiTFastAttn: Attention compression for diffusion transformer models. Advances in Neural Information Processing Systems, 37:1196–1219, 2024.

[55] Lingxiao Zhao, Xueying Ding, and Leman Akoglu. Pard: Permutation-invariant autoregressive diffusion for graph generation. arXiv preprint arXiv:2402.03687, 2024.

[56] Xuanlei Zhao, Xiaolong Jin, Kai Wang, and Yang You. Real-time video generation with pyramid attention broadcast. arXiv preprint arXiv:2408.12588, 2024.

[57] Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, and Hao Zhang. DistServe: Disaggregating prefill and decoding for goodput-optimized large language model serving. In USENIX Symposium on Operating Systems Design and Implementation, 2024.

A Design for concat_reorder

concat_reorder is our implementation of dKV-Cache, designed to improve its speed in diffusion language models. Unlike the standard KV-Cache used in autoregressive models, dKV-Cache requires gathering and scattering keys and values from arbitrary positions, introducing indexing operations
that are less efficient than the simple contiguous concatenation used in ARs. In dKV-Cache, the caching process involves two additional indexing operations: (1) at the cache step, after computing keys and values, we must gather the states of cached tokens at non-contiguous positions; (2) at the reuse step, to obtain the whole key and value matrices, we must scatter these vectors back to their original positions in the sequence. In contrast, KV-Cache in ARs requires only matrix slicing and concatenation, making it significantly more efficient. To minimize the overhead of gathering and scattering, we propose an algorithm similar to standard KV-Cache that avoids excessive indexing operations. The key idea is to reorder token positions during the forward pass of the Transformer, placing all cached tokens contiguously on one side (e.g., the left) and newly decoded tokens on the other. This allows us to move part of the indexing to the token level (matrices of shape [B, L]) instead of the intermediate states (matrices of shape [B, L, D]):

• At step t−1: gather the cached key $K^{I \setminus M_{t-1}}_{t-1}$ and value $V^{I \setminus M_{t-1}}_{t-1}$ based on the position index $I \setminus M_{t-1}$ with one indexing operation.
• At step t: reorder the sequence, placing the cached tokens (at positions $I \setminus M_{t-1}$) on the left and the uncached tokens (at positions $M_{t-1}$) on the right.
• At step t: apply concat_reorder to $K^{I \setminus M_{t-1}}_{t-1}, K^{M_{t-1}}_t$ and to $V^{I \setminus M_{t-1}}_{t-1}, V^{M_{t-1}}_t$: first, concatenate the cached and current key/value states directly without further gathering/scattering (concat, to obtain all K and V for computing attention), then reorder the whole KV matrices to get the cached states for the next step (reorder, to obtain the cache).

The reorder operation amounts to knowing the position mapping from $[I \setminus M_{t-1}; M_{t-1}]$ to $[I \setminus M_t; M_t]$. For example, suppose the unmasked positions at step t−1 are [2, 4, 5] in a sequence of 8 tokens, and at step t they are [2, 4, 5, 7].
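The index bookkeeping in this example can be reproduced in a few lines of pure Python (a sketch; the function name is ours, and the real implementation operates on batched key/value tensors):

```python
def reorder_index(unmasked_prev, unmasked_next, seq_len):
    """Map rows of [K_cached; K_new] (laid out as [I\\M_{t-1}; M_{t-1}])
    to the layout [I\\M_t; M_t] needed at the next step."""
    masked_prev = [p for p in range(seq_len) if p not in set(unmasked_prev)]
    masked_next = [p for p in range(seq_len) if p not in set(unmasked_next)]
    old_order = list(unmasked_prev) + masked_prev   # e.g. [2, 4, 5, 0, 1, 3, 6, 7]
    new_order = list(unmasked_next) + masked_next   # e.g. [2, 4, 5, 7, 0, 1, 3, 6]
    row_of = {p: i for i, p in enumerate(old_order)}  # position -> row in old layout
    return [row_of[p] for p in new_order]

mapping = reorder_index([2, 4, 5], [2, 4, 5, 7], seq_len=8)
# mapping -> [0, 1, 2, 7, 3, 4, 5, 6]; its first four entries, [0, 1, 2, 7],
# are the rows to keep as the cache for the next step.
```

Because the mapping is computed over position indices of shape [B, L], only a single gather over the [B, L, D] key/value states is needed per step.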
Then $[I \setminus M_{t-1}; M_{t-1}]$ would be [2, 4, 5, 0, 1, 3, 6, 7], and $[I \setminus M_t; M_t]$ would be [2, 4, 5, 7, 0, 1, 3, 6]. The mapping is therefore [0, 1, 2, 7, 3, 4, 5, 6], and we only need to take the corresponding entries [0, 1, 2, 7] from $[K^{I \setminus M_{t-1}}_{t-1}; K^{M_{t-1}}_t]$ and $[V^{I \setminus M_{t-1}}_{t-1}; V^{M_{t-1}}_t]$. The only remaining issue is that changing token positions would affect the positional encoding. However, this is easy to solve: we can also reorder the positional embeddings. Reordering positional embeddings is required only once per model evaluation and can be shared across layers, so it does not cost much time. Furthermore, since our method introduces a one-step shift in caching, the positions of cached tokens at step t correspond to the token positions decoded at step t−1. This alignment allows us to track which key and value entries need to be cached without storing the entire key/value
matrices; without this alignment, which tokens to cache would only be known after the decoding results of step t. We present the pseudo-algorithm of our approach in Algorithm 1. While it largely improves inference speed over the naive implementation, the concat and reorder operations still introduce some overhead, and we believe there is substantial potential for further optimization.

Algorithm 1 Pseudo-code for dKV-Cache-Decode (step t as an example).
Require: sequence $x^{1:L}_c(t)$ at step $t$ (simplified as $x$), position index of masked tokens $M_t$, cached key $K^{I \setminus M_{t-1}}_{t-1}$ and value $V^{I \setminus M_{t-1}}_{t-1}$
1: $x' \leftarrow x[M_{t-1}]$ ▷ $M_{t-1}$: one-step shift
2: $PE' \leftarrow [PE[I \setminus M_{t-1}]; PE[M_{t-1}]]$ ▷ positional embeddings: cached on the left, uncached on the right
3: $Q^{M_{t-1}}_t, K^{M_{t-1}}_t, V^{M_{t-1}}_t \leftarrow \mathcal{T}(x')$ ▷ $\mathcal{T}$: Transformer computation of Q, K, and V
4: $K^{I}_t \leftarrow \mathrm{Concat}(K^{I \setminus M_{t-1}}_{t-1}, K^{M_{t-1}}_t)$, $V^{I}_t \leftarrow \mathrm{Concat}(V^{I \setminus M_{t-1}}_{t-1}, V^{M_{t-1}}_t)$ ▷ get all K and V
5: $K^{I \setminus M_t}_t \leftarrow \mathrm{Reorder}(K^{I}_t, I')$, $V^{I \setminus M_t}_t \leftarrow \mathrm{Reorder}(V^{I}_t, I')$ ▷ $I'$: the index of $I \setminus M_t$ in $[x[I \setminus M_{t-1}]; x[M_{t-1}]]$
6: $p' \leftarrow \mathcal{A}(Q^{M_{t-1}}_t, K^{I}_t, V^{I}_t)$ ▷ $\mathcal{A}$: attention
7: $p \leftarrow \mathrm{Scatter}(p', M_{t-1})$ ▷ put the token logits back at their original positions
8: Return $p$, $K^{I \setminus M_t}_t$, $V^{I \setminus M_t}_t$

B Design for Dream

Dream has a different caching strategy from LLaDA. The main reason is that Dream is adapted from pre-trained autoregressive models, so the output at each position is aligned with the probability of the next token rather than the current token, as in the traditional setting of masked diffusion models. This changes the caching strategy, and we investigate several designs:

Figure 6: Three variants of the caching strategy for diffusion language models adapted from autoregressive language models, which have shifted output positions.

• Un-Shift, Figure 6(a): We cache the unshifted token representations. Specifically, for the t-th token, we store its key and value as $K_t$ and $V_t$ at position t.
• Right-Shift, Figure 6(b): Given that the hidden state is highly sensitive to changes in input, we also explore a right-shifted variant. Here, for the t-th token, we cache $K_{t+1}$ and $V_{t+1}$.
• Un&Right-Shift, Figure 6(c): We introduce a stricter variant where caching is conditioned on both input stability and decoding completion. For the t-th token, we cache its features only after its input is fixed and it has been decoded. The one-step shift is still used here. For example, in the right-shift variant, the t-th token is fed into the model at position t+1 in the next step, and we cache its outputs $K_{t+1}$ and $V_{t+1}$ then.

The results are shown in Table 4: Un&Right-Shift achieves the best performance, while Right-Shift largely harms model performance. However, we use Un-Shift in our main experiments, since Un&Right-Shift is incompatible with the concat_reorder scheme above.

Table 4: Comparison between different types of caching strategy
for Dream-Base-7B.

Un-Shift: MMLU 71.78, GSM8K 76.34; Right-Shift: MMLU 64.60, GSM8K 32.68; Un&Right-Shift: MMLU 71.73, GSM8K 77.71

C Evaluation Details

C.1 For LLaDA

We re-implemented the evaluation of LLaDA on the reported datasets. For multiple-choice questions, we generate and extract the final answer instead of comparing log-probabilities. As a result, the MMLU and GPQA results are lower than reported, since the model sometimes cannot produce the answer in the given format or does not produce an answer at all. We show the configuration of each experiment in Table 5.

Table 5: Configurations of experiments on LLaDA-Instruct. Remasking: Base (random/confidence); Few-Steps (random); dKV-Cache-Greedy (random); Half-Steps (confidence); dKV-Cache-Decode (confidence).

MMLU: Base L=32, T=32, B=16; Few-Steps T=20; dKV-Cache-Greedy cache interval 2, window size 4; Half-Steps T=16; dKV-Cache-Decode cache interval 8
GSM8K: Base L=256, T=256, B=32; Few-Steps T=160; dKV-Cache-Greedy cache interval 2, window size 4; Half-Steps T=128; dKV-Cache-Decode cache interval 8
Math500: Base L=256, T=256, B=64; Few-Steps T=160; dKV-Cache-Greedy cache interval 2, window size 4; Half-Steps T=128; dKV-Cache-Decode cache interval 8
GPQA: Base L=128, T=128, B=64; Few-Steps T=80; dKV-Cache-Greedy cache interval 2, window size 4; Half-Steps T=64; dKV-Cache-Decode cache interval 8
HumanEval: Base L=512, T=512, B=32; Few-Steps T=320; dKV-Cache-Greedy cache interval 2, window size 4; Half-Steps T=256; dKV-Cache-Decode cache interval 8
MBPP: Base L=512, T=512, B=32; Few-Steps T=320; dKV-Cache-Greedy cache interval 2, window size 2; Half-Steps T=256; dKV-Cache-Decode cache interval 8

C.2 For Dream

We follow the original evaluation pipeline for Dream^4 and, on MMLU and GPQA, likewise generate the answer instead of comparing probabilities. We follow all hyperparameters set in the evaluation script, including the temperature, the remasking strategy, top_p, and the number of shots for in-context learning.

D Impact of batch size on speed

Figure 7: Impact of batch size on decoding speed. Evaluated on LLaDA with a single NVIDIA H20; prefill length fixed at 100 tokens.
Our inference pipeline relies heavily on indexing operations (gathers and scatters) that generate a stream of small, non-contiguous memory accesses. At a batch size of one, these uncoalesced reads make inference memory-bound; as a result, the GPU's compute units sit idle waiting for data. In some cases at batch size 1, inference with caching can actually underperform the unaccelerated baseline. By contrast, scaling up the batch size alleviates this problem and delivers far greater, more stable speedup ratios over the baseline.

E Case Study

We provide case studies of our method in Table 6, Table 7, and Table 8.

^4 https://github.com/HKUNLP/Dream/blob/main/eval/eval_dream_gen.sh

Table 6: Case study on LLaDA-8B-Instruct. Question: Lily can run 12 kilometers per hour for 4 hours. After that, she runs 6 kilometers per hour. How many kilometers can she run in 8 hours?

Model | Generated Text (L=128, T=128, B=64)

LLaDA-8B-Instruct (low confidence) | 1. Calculate the distance Lily runs in the first 4 hours: - Speed: 12 kilometers per hour - Time: 4 hours - Distance: 12 km/h * 4 h = 48 km 2. Calculate the distance Lily runs in the next 4 hours: - Speed: 6 kilometers per hour - Time: 4 hours - Distance: 6 km/h * 4 h = 24 km 3. Add the distances together: - Total
distance: \boxed{72} km

dKV-Cache-Decode, Refresh = 4 | 1. Calculate the distance Lily runs in the first 4 hours: - Distance = Speed × Time = 12 km/h × 4 h = 48 km 2. Calculate the distance Lily runs in the next 4 hours: - Distance = Speed × Time = 6 km/h × 4 h = 24 km 3. Add the distances from both periods to find the total distance Lily runs in 8 hours: - Total Distance = 48 km + 24 km = 72 km \boxed{72 km}

dKV-Cache-Decode, Refresh = 16 | 1. Calculate the distance Lily runs in the first 4 hours: 12 km/hour * 4 hours = 48 km 2. Calculate the distance Lily runs in the next 4 hours: 6 km/hour * 4 hours = 24 km 3. Add the distances from the first 4 hours and the next 4 hours to find the total distance she runs: 48 km + 24 km = 72 km Therefore, Lily can run \boxed{72} kilometers in 8 hours.

dKV-Cache-Greedy, Refresh = 2 | To determine how many kilometers Lily can run in 8 hours, we need to break down her running schedule and sum the distances. First, calculate the distance she runs at 12 kilometers per hour for 4 hours: 12 * 4 = 48. After 4 hours, she runs 6 kilometers per hour for the remaining 4 hours: 6 * 4 = 24. Now, add the distances: 48 + 24 = 72. \boxed{72}

dKV-Cache-Greedy, Refresh = 4 (random) | For the first 4 hours Lily runs at 12 kilometers per hour, so she runs 12 * 4 = 48 kilometers. 8 hours - 4 hours are left after the first 4 (8 - 4 = 4 hours). For the next 4 hours, she runs at 6 kilometers per hour, so she runs 6 * 4 = 24 kilometers. Adding the two distances together, Lily can run 48 + 24 = 72 kilometers in 8 hours. Answer: \boxed{72}

Table 7: Case study on Dream-Instruct-7B. Request: Write a story that ends with "Finally, Joey and Rachel get married."

Model | Generated Text (L=256, T=256)

Dream-Instruct-7B (Entropy) | Once upon a time, there was a young man named Joey who lived in a small town in the middle of nowhere. He was a kind and gentle soul who loved to sing and play the guitar.
One day, he met a beautiful girl named Rachel who was also a talented singer and musician. They fell in love and decided to get married. However, there was a problem. Joey’s parents were against the marriage because they didn’t know Rachel well enough. They thought that she was not a good match for their son. Rachel, on the other hand, was not worried about Joey’s parents’ opinion. She knew that she loved Joey and that he was the one for her. She decided to try and win over Joey’s parents by showing them how much she cared about him. She started by cooking for them and inviting them to her house for dinner. She also took them on a trip to the mountains where they | https://arxiv.org/abs/2505.15781v1 |
could enjoy the scenery and the fresh air. After a while, Joey’s parents started to see Rachel in a different light. They realized that she was a good person and that she loved Joey very much. They decided to give their blessing to the marriage. Finally, Joey and Rachel got married and lived happily ever after. dKV-Cache- Decode, Refresh = 16Once upon a time, in a small town called New York, there lived a young man named Joey and a young woman named Rachel. They were both very happy with their lives, but they were also very different from each other. Joey was a very outgoing person who loved to be around people, while Rachel was a very introverted person who loved to be alone. One day, Joey and Rachel met at a coffee shop and fell in love. They spent hours talking and laughing together, and they realized that they were perfect for each other. They decided to be together and build their lives together. As the years went by, Joey and Rachel grew closer and closer. They shared their dreams and their fears, and they supported each other through thick and thin. They were a perfect match, and they knew that they were meant to be together. Finally, Joey and Rachel got married. They had a beautiful wedding with all their friends and family in attendance. They were so happy to be together, and they knew that they would always be together. dKV-Cache- PrefillOnce upon a time, in a small town called New York, there lived a young man named Joey. Joey was a kind and caring person who always tried to help others. He had a beautiful girlfriend named Rachel, who was also kind and caring. They were in love with each other and wanted to spend their lives together. One day, Joey and Rachel decided to get married. They planned their wedding for a long time, and everyone in the town was excited to attend. On the day of the wedding, Joey and Rachel exchanged vows and promised to love each other forever. After the wedding, Joey and Rachel moved into a beautiful house in the countryside. 
They were happy and content with their lives, and they had two children, a boy and a girl. The children grew up to be kind and caring people, just like their parents. As the years went by, Joey and Rachel continued to love each other deeply. They had many ups and downs, but they always found a way to overcome their challenges and stay together. Finally, after many years of marriage, Joey and Rachel decided to retire and live in a small cottage in the countryside. They were happy and content with their lives, and they knew that they had found the love of their lives in each other. 17 Table 8: Case study on Dream-Instruct-7B. Question: Lily can run 12 kilometers per hour for 4 hours. After that, she runs 6 kilometers per hour. How many kilometers can she run in 8 hours? Model Generate Text (L=128, T=128) Dream- Instruct-7B | https://arxiv.org/abs/2505.15781v1 |
(Entropy)In the first 4 hours, Lily runs 12 kilometers per hour, so she runs a total of 12 * 4 = 48 kilometers. In the next 4 hours, Lily runs 6 kilometers per hour, so she runs a total of 6 * 4 = 24 kilometers. Therefore, in 8 hours, Lily can run a total of 48 + 24 = 72 kilometers. #### 72 dKV-Cache- Decode, Refresh = 4In the first 4 hours, Lily can run 12 kilometers per hour, so she can run a total of 12 * 4 = 48 kilometers. In the next 4 hours, Lily can run 6 kilometers per hour, so she can run a total of 6 * 4 = 24 kilometers. Therefore, in 8 hours, Lily can run a total of 48 + 24 = 72 kilometers. #### 72 dKV-Cache- Decode, Refresh = 16In the first 4 hours, Lily runs 12 kilometers per hour, so she runs a total of 12 * 4 = 48 kilometers. In the next 4 hours, Lily runs 6 kilometers per hour, so she runs a total of 6 * 4 = 24 kilometers. Therefore, in 8 hours, Lily can run a total of 48 + 24 = 72 kilometers. The answer is: 72 dKV-Cache- Prefill, No refreshIn the first 4 hours, Lily runs 12 kilometers per hour, so she runs a total of 12×4 = 48 kilometers. In the next 4 hours, she runs 6 kilometers per hour, so she runs a total of 6×4 = 24 kilometers. Therefore, in 8 hours, Lily can run a total of 48 + 24 = \boxed{72}kilometers.The answer is: 72 18 | https://arxiv.org/abs/2505.15781v1 |
arXiv:2505.15784v1 [cs.LG] 21 May 2025

Large Language Models as Computable Approximations to Solomonoff Induction

Jun Wan (UBS AG, jun.wan@ubs.com) and Lingrui Mei (State Key Lab of AI Safety, meilingrui25b@ict.ac.cn)

Abstract

The rapid advancement of large language models (LLMs) calls for a rigorous theoretical framework to explain their empirical success. While significant progress has been made in understanding LLM behaviors, existing theoretical frameworks remain fragmented in explaining emergent phenomena through a unified mathematical lens. We establish the first formal connection between LLM architectures and Algorithmic Information Theory (AIT) by proving two fundamental results: (1) the training process computationally approximates the Solomonoff prior through loss minimization interpreted as program length optimization, and (2) next-token prediction implements approximate Solomonoff induction. We leverage AIT to provide a unified theoretical explanation for in-context learning, few-shot learning, and scaling laws. Furthermore, our theoretical insights lead to a principled method for few-shot example selection that prioritizes samples where models exhibit lower predictive confidence. We demonstrate through experiments on diverse text classification benchmarks that this strategy yields significant performance improvements, particularly for smaller model architectures, when compared to selecting high-confidence examples. Our framework bridges the gap between theoretical foundations and practical LLM behaviors, providing both explanatory power and actionable insights for future model development.
1 Introduction

Large Language Models (LLMs) have recently achieved significant advancements across multiple domains [Brown et al., 2020, OpenAI, 2024, DeepSeek-AI et al., 2025b, Qwen et al., 2025], and notably, their reasoning capabilities have improved substantially: they can now generate intermediate reasoning steps, enhancing performance on complex tasks [Kojima et al., 2022, DeepSeek-AI et al., 2025a, Team, 2024, Team et al., 2025, He et al., 2025a]. This unprecedented advancement has prompted researchers to seek theoretical frameworks that can systematically explain the emergent phenomena observed in these models [Wei et al., 2022, Nanda and Bloom, 2022, Wang et al., 2023, Meng et al., 2023, Delétang et al., 2024, Zheng et al., 2024, Ghandeharioun et al., 2024, Luo and Specia, 2024, Rai et al., 2025], yet providing a unified mathematical account for abilities like in-context learning [Dong et al., 2024], few-shot adaptation [Brown et al., 2020], and empirical scaling laws [Snell et al., 2024] remains a significant challenge for existing theories.

Foundational theories from computability and information theory offer potential avenues for deeper understanding. Notably, Algorithmic Information Theory (AIT) [Blum, 1967b,a] provides principles for universal sequence prediction based on algorithmic probability [Cover et al., 1989]. Key concepts within this framework, such as the Solomonoff prior and Solomonoff induction, formalized by Ray Solomonoff [Solomonoff, 1964a,b], offer a powerful lens for analyzing generative models. Their focus on sequence generation complexity provides a rigorous mathematical basis for universal prediction and inductive inference, thereby serving as the theoretical bedrock for our analysis [Kolmogorov, 1965, Chaitin, 1966, 1977, Li et al., 2008, Downey and Hirschfeldt, 2010].

(Footnote: Both authors contributed equally.)
In this work, we forge a novel and rigorously established theoretical bridge between the operational principles of LLMs and the foundational concepts of AIT. We demonstrate that LLMs can be understood not merely as statistical pattern matchers but as practical, computable instantiations of Solomonoff's idealized
framework for universal induction. Our primary contribution is a constructive mathematical proof establishing that the standard LLM training paradigm, specifically the minimization of prediction loss, serves as a computational approximation to the Solomonoff prior. This is achieved by reinterpreting the optimization process as a search for the shortest programs capable of generating the training data, thereby intrinsically linking learning efficiency to algorithmic compressibility.

[Figure 1 diagram: Kolmogorov complexity / minimal description length informs the Solomonoff prior (universal algorithmic probability), which leads to Solomonoff induction (optimal universal prediction), mirrored by LLM training and inference.]
Figure 1: Conceptual diagram of our theoretical framework linking LLM processes to AIT. LLM process (top): training optimizes parameters (θ′) via loss minimization L(θ; D), while inference uses θ to predict x_{t+1} from x_{1:t} via P_θ(x_{t+1} | x_{1:t}). AIT (bottom): Kolmogorov complexity K(x) informs the Solomonoff prior (approximated as M̃(x)), which underlies Solomonoff induction M(x_{t+1} | x_{1:t}). Crucially, LLM training is shown to approximate the Solomonoff prior, and LLM inference's predictive distribution P_θ approximates Solomonoff induction (via M̃(x_{t+1} | x_{1:t})), bridging LLM operations with AIT.

Building upon this, we develop a formal argument showing that the next-token prediction mechanism inherent in LLMs forms a computable approximation of Solomonoff induction, which provides a robust theoretical underpinning for their remarkable generalization capabilities by casting predictive power as a form of principled inductive inference. Leveraging this established connection, our framework offers a unified theoretical lens through which diverse emergent LLM behaviors can be coherently understood as natural consequences of a system approximating universal induction.
Finally, guided by these AIT-based insights, particularly the convergence properties of Solomonoff induction, we introduce and empirically validate a novel, principled method for selecting few-shot demonstration examples. This strategy posits that, for few-shot learning, data points exposing the model's current predictive weaknesses (i.e., instances where the model exhibits lower confidence in the correct prediction) are more valuable for rapid adaptation than reinforcing already well-learned patterns. Our comprehensive experiments, conducted on text classification benchmarks such as SMS spam detection, emotion recognition, and news categorization, consistently demonstrate that prioritizing these lower-confidence samples yields significant performance improvements over selecting high-confidence examples, demonstrating our framework's practical utility and explanatory power. Collectively, these contributions advance a deeper theoretical understanding of LLMs and offer actionable insights for their continued development and application.

2 Related Works

AIT builds on three foundational contributions: Solomonoff's universal prediction framework [Blum, 1967b,a, Cover et al., 1989], Kolmogorov's complexity metric [Kolmogorov, 1965], and Chaitin's incomputability results [Chaitin, 1966]. Recent advancements in machine learning have explored the integration of Solomonoff induction into neural networks to enhance rapid learning from limited data [Grau-Moya et al., 2024]. This builds on established connections between deep learning generalization and AIT, where minimal description length models exhibit superior generalization [Blier and Ollivier, 2018]. The compression perspective has become central to language modeling research, with studies demonstrating that LLMs implicitly implement compression strategies [Everitt and Hutter, 2018, Lu et al., 2021, Delétang et al., 2024].
Further studies establish formal equivalences between model scaling and approximations of conditional Kolmogorov complexity through increased computational capabilities [Wan, 2025].

3 Preliminaries

3.1 Turing Machines, Neural Networks, and Large Language Models

The Turing machine (TM), introduced by Alan Turing in 1936 [Turing et al., 1936], is a foundational model of computation. Turing machines encompass
both specific Turing machines, denoted T, and universal Turing machines (UTMs), denoted U. A UTM U can simulate any other TM by processing a program p and its input w as arguments, denoted U(p, w).

From a theoretical perspective, large language models (LLMs) can be viewed as specific Turing machines. Given an input context x_{1:t}, an LLM processes this sequence through a deep neural network to produce a conditional probability distribution P(x_{t+1} | x_{1:t}) over the vocabulary V. A decoding strategy (e.g., greedy search, beam search, or temperature sampling) then generates the next token x_{t+1} from this distribution. While the output appears stochastic due to sampling, the process is driven by deterministic pseudo-random number generators. Thus, the LLM as a whole functions as a deterministic Turing machine.

Definition 1 (Language Model Generation Function). Let X be the set of input prompts, S the set of random seeds, and R the set of possible model outputs. The generation function is

g : X × S → X*,   (1)

such that for any x ∈ X and s ∈ S, g(x, s) = x* = x ◦ r, where r ∈ R is the output generated by the model given x and s, and ◦ denotes string concatenation.

According to the above definition, x* is the full generated sequence starting with prompt x and extended by r. The random seed s influences both the semantic content and the length |x*| of the output.

3.2 Prefix Kolmogorov Complexity

A prefix universal Turing machine (prefix UTM) is a fundamental construct in AIT. It processes input programs that form a prefix code, meaning no valid program is a prefix of another, which ensures unambiguous decoding of each input without requiring explicit delimiters. The prefix Kolmogorov complexity [Li et al., 2008] quantifies the intrinsic information content of an object (e.g., a string or number). Formally, it is defined as the length of the shortest program that, when executed on a prefix UTM U, produces the object.
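Definition 1 can be made concrete with a toy generation function. The sketch below is illustrative rather than the paper's implementation: the vocabulary, the fixed conditional distribution, and the length cap are all assumptions. It exhibits the one property the argument relies on: because pseudo-randomness comes only from the seeded generator, g(x, s) is a deterministic function of (x, s), so the sampler as a whole behaves as a deterministic Turing machine.

```python
import random

VOCAB = ["a", "b", "<eos>"]

def sample_next(prefix: str, rng: random.Random) -> str:
    # Stand-in for P(x_{t+1} | x_{1:t}): a fixed, computable conditional
    # distribution; all pseudo-randomness comes from the seeded generator.
    weights = [0.5, 0.3, 0.2] if len(prefix) % 2 == 0 else [0.2, 0.3, 0.5]
    return rng.choices(VOCAB, weights=weights, k=1)[0]

def g(x: str, s: int, max_len: int = 20) -> str:
    """Generation function g(x, s) = x ◦ r from Definition 1."""
    rng = random.Random(s)  # deterministic pseudo-random source, seeded by s
    out = x
    while len(out) - len(x) < max_len:
        tok = sample_next(out, rng)
        if tok == "<eos>":
            break
        out += tok
    return out

# Same (x, s) always yields the same x*; different seeds may differ
# in both the content and the length |x*| of the continuation.
assert g("ab", s=0) == g("ab", s=0)
print(g("ab", s=0), g("ab", s=1))
```

Calling g twice with the same prompt and seed returns an identical x*, while changing only the seed may change both the semantic content and the output length, exactly as noted after Definition 1.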
For a prefix UTM U, the prefix Kolmogorov complexity of a string x is defined as:

K_U(x) = \min\{\ell(p) : U(p) = x\},   (2)

where p represents a binary program, \ell(p) denotes its length in bits, and U(p) = x indicates that U halts and outputs x when given input p. Conceptually, K_U(x) represents the minimal descriptive complexity of x. Strings with inherent patterns or structure can be generated by concise programs, resulting in lower complexity values, whereas algorithmically random strings lack compact descriptions and exhibit higher complexity. Prefix Kolmogorov complexity exhibits two crucial properties: (1) While K_U(x) depends on the specific choice of U, the difference between complexities measured under different prefix UTMs is bounded by a constant independent of x; consequently, the subscript U is frequently omitted (Invariance Theorem). (2) There exists no algorithm that can compute K(x) precisely for all arbitrary strings, making it a non-recursive function (Uncomputability).

3.3 Solomonoff Prior and Solomonoff Induction

The Solomonoff prior [Solomonoff, 1960], introduced by Ray Solomonoff in the 1960s, is a foundational idea in AIT. It formalizes universal induction, a theoretically optimal method for inductive inference. The Solomonoff prior M assigns a probability to any binary string x as:

M(x) := \sum_{p : U(p) = x*} 2^{-\ell(p)},   (3)

where \ell(p) denotes the length (in bits) of program p, x* represents any string with prefix x, and U is a prefix universal Turing machine (prefix UTM). The
summation encompasses all programs p such that U(p) outputs a string beginning with x. This formulation embodies Occam's razor, as shorter programs contribute more significantly to M(x). Two critical design choices warrant explanation: (1) the inclusion of outputs beginning with x facilitates prediction: having observed sequence x, we aim to infer its continuation; (2) the prefix condition ensures that M constitutes a semi-measure, satisfying \sum_x M(x) ≤ 1, which is essential for probabilistic interpretation. Given an observed sequence x_{1:t}, Solomonoff induction defines the predictive probability for the next bit as:

M(x_{t+1} | x_{1:t}) = M(x_{1:t+1}) / M(x_{1:t}).   (4)

This framework has strong theoretical guarantees. Although the Solomonoff prior is uncomputable, it is semi-computable [Hutter, 2005], meaning we can approximate it increasingly well using computable functions.

4 Main Results

4.1 The Training Process of LLMs as a Computable Approximation of the Solomonoff Prior

Theorem 2 (LLM Training Approximates Solomonoff Prior). Let f(x, s) be a program constructed according to Definition 1, and define the approximate Solomonoff prior

M̃(x) := \sum_{s=1}^{∞} 2^{-\ell(f(x,s))},

where \ell(f(x, s)) denotes the length of the program describing f(x, s). Then:
1. Upper bound: M̃(x) ≤ M(x), where M(x) is the Solomonoff prior.
2. Approximation: As the loss of f decreases, M̃(x) increasingly approaches M(x).

As discussed in Section 3.1, large language models (LLMs) can be viewed as specific instances of Turing machines. Consequently, training an LLM can be interpreted as the process of identifying a Turing machine that best explains the observed data. In this section, we present a constructive argument demonstrating that the training process of LLMs is mathematically equivalent to a computable approximation of the Solomonoff prior.

For any given string x, we can construct a program f such that a universal Turing machine U satisfies U(f) = x. This program f comprises several components.
The core model component includes the weight parameters of the LLM, inference logic, and sampling algorithm, with its binary representation denoted as m^{(2)}. Based on theoretical work on language modeling as compression [Delétang et al., 2023], the compression and encoding component uses the LLM in conjunction with arithmetic coding to losslessly compress the string x, resulting in a binary encoding e(x)^{(2)}. Additionally, the decoding control component specifies the number of iterations n(x) needed to decode e(x)^{(2)} back to the original string x, with its binary representation denoted as n(x)^{(2)}. Finally, the random generation component provides a random seed s, with binary representation s^{(2)}, required by the LLM to generate subsequent content based on e(x)^{(2)}.

Takeaway 1: The LLM training process, driven by loss minimization, can be interpreted as an implicit search for programs of minimal algorithmic complexity that generate the training data, directly linking learning efficiency to data compressibility.

In summary, given a string x and random seed s, the program f can be represented as a 4-tuple:

f(x, s) = (m^{(2)}, n(x)^{(2)}, s^{(2)}, e(x)^{(2)}).   (5)

The execution of this program on the universal Turing machine U proceeds as follows: (1) based on the model parameters m^{(2)} and compressed code e(x)^{(2)}, the machine performs n(x) iterations to restore the original string x (decoding phase); (2) using m^{(2)}, the restored x, and random seed s^{(2)}, the machine samples to generate the continuation of the output sequence x* \ x (generation phase). By combining
the decoded x with the generated continuation, the final output is the complete sequence x*. This construction f(x, s) has the following two key properties: (1) for a fixed input x, the number of decoding iterations n(x) is deterministic, while the random seed s can be any natural number; (2) since the model parameters m^{(2)} remain fixed after training, they can, by Lemma 4, be internalized into the universal Turing machine U. As a result, the program can be simplified to f(x, s) = (n(x)^{(2)}, s^{(2)}, e(x)^{(2)}).

Since the Solomonoff prior is defined over a prefix universal Turing machine, we must encode n(x)^{(2)}, s^{(2)}, and e(x)^{(2)} as prefix codes. To achieve this, we apply Elias gamma coding to each component (writing z̄ for the prefix encoding of z), yielding the prefix-encoded representation:

f(x, s) = (n̄(x)^{(2)}, s̄^{(2)}, ē(x)^{(2)}).   (6)

It can be shown that f(x, s) constitutes a valid set of prefix codes. Let F denote the set of all such prefix-encoded programs. Based on this construction and Lemma 5, we define a prefix universal Turing machine U_F with the following properties:

- U_F is a prefix universal Turing machine;
- for any f ∈ F, we have U_F(f) = U(f).

Hence, for all x and s, the following holds:

U_F(f(x, s)) = x*.   (7)

Next, we define the computable prior M̃(x) as:

M̃(x) := \sum_{s=1}^{∞} 2^{-\ell(f(x,s))}.   (8)

Clearly, the programs counted by M̃(x) form a subset of those counted by the Solomonoff prior M(x), implying that M̃(x) ≤ M(x). For any given x, since n(x)^{(2)} is fixed, minimizing the binary encoding length |e(x)^{(2)}| is crucial for making M̃(x) as close as possible to M(x). As established in Delétang et al. [2023] and Wan [2025], this objective aligns with minimizing the training loss of the large language model (LLM). Consequently, the training process of an LLM can be interpreted as a computable approximation of the Solomonoff prior M(x).

4.2 The Inference Process of LLMs as a Computable Approximation of Solomonoff Induction

As discussed in Section 4.1, M̃(x) can be viewed as a computable approximation of the Solomonoff prior M(x).
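The prefix-encoding step in Eq. (6) uses Elias gamma codes. A minimal implementation (our own illustration, not the paper's code) shows the two facts the construction relies on: the codes are self-delimiting, so the components of f(x, s) can be concatenated without separators, and the code for a natural number n costs 2⌊log₂ n⌋ + 1 ≈ 2 log₂ n bits, which is the length estimate used in the next step of the derivation.

```python
import math

def elias_gamma_encode(n: int) -> str:
    """Elias gamma code of a positive integer: ⌊log2 n⌋ zeros followed by
    the binary expansion of n (which always starts with a 1)."""
    assert n >= 1
    b = bin(n)[2:]                 # binary expansion, length ⌊log2 n⌋ + 1
    return "0" * (len(b) - 1) + b  # total length 2⌊log2 n⌋ + 1

def elias_gamma_decode(bits: str) -> tuple[int, str]:
    """Decode one gamma code from the front of `bits`; return (n, rest).
    No delimiter is needed: the zero-run length announces the code length."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    code, rest = bits[: 2 * zeros + 1], bits[2 * zeros + 1:]
    return int(code[zeros:], 2), rest

# Code length matches the ≈ 2 log2 n estimate used in the derivation.
for n in (1, 2, 5, 1000):
    assert len(elias_gamma_encode(n)) == 2 * int(math.log2(n)) + 1

# Prefix property in action: concatenated codes decode unambiguously,
# which is what lets f(x, s) pack (n(x), s, e(x)) into one program string.
stream = elias_gamma_encode(13) + elias_gamma_encode(7)
n1, rest = elias_gamma_decode(stream)
n2, _ = elias_gamma_decode(rest)
assert (n1, n2) == (13, 7)
```

Because no codeword is a prefix of another, a decoder can peel the components of the tuple off the front of the bit stream one by one, which is exactly the property a prefix UTM requires of its inputs.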
Solomonoff induction approximates the next symbol using the following equation:

M(x_{t+1} | x_{1:t}) = M(x_{1:t+1}) / M(x_{1:t}) ≈ M̃(x_{1:t+1}) / M̃(x_{1:t}) := M̃(x_{t+1} | x_{1:t}).   (9)

Given the properties of Elias gamma coding, the code length for encoding a natural number n is ⌊log₂ n⌋ + 1 + ⌊log₂ n⌋ ≈ 2 log₂ n bits. Leveraging this property, we can express the code lengths of the components in Equation 6 as follows:

|s̄^{(2)}| ≈ 2 log₂ s,
|n̄(x)^{(2)}| ≈ 2 log₂ n(x),
|ē(x)^{(2)}| ≈ |e(x)^{(2)}| + 2 log₂ |e(x)^{(2)}|.   (10)

These relationships enable us to derive the following theorem.

Theorem 3 (LLM Inference Approximates Solomonoff Induction). For a sufficiently trained LLM with parameters θ, the conditional next-token probability P_θ(x_{t+1} | x_{1:t}) approximates the Solomonoff inductive inference M(x_{t+1} | x_{1:t}). Asymptotically, for large context length t, the relationship is given by:

M(x_{t+1} | x_{1:t}) ≈ M̃(x_{t+1} | x_{1:t}) ≈ \frac{t^2}{4(t+1)^2} · P_θ(x_{t+1} | x_{1:t}).   (11)

Proof. We begin by computing the prior probability of the sequence x_{1:t} using Equation 6 and Equation 10 (with n(x_{1:t}) = t decoding iterations, one per token):

M̃(x_{1:t}) = \sum_{s=1}^{∞} 2^{-\ell(f(x_{1:t}, s))}   (12)
≈ \sum_{s=1}^{∞} \frac{1}{s^2} · \frac{1}{t^2} · \frac{1}{|e(x_{1:t})^{(2)}|^2} · 2^{-|e(x_{1:t})^{(2)}|}   (13)
= \frac{π^2}{6} · \frac{1}{t^2} · \frac{1}{|e(x_{1:t})^{(2)}|^2} · 2^{-|e(x_{1:t})^{(2)}|}.   (14)

Similarly, for the sequence x_{1:t+1}:

M̃(x_{1:t+1}) = \sum_{s=1}^{∞} 2^{-\ell(f(x_{1:t+1}, s))}   (15)
≈ \frac{π^2}{6} · \frac{1}{(t+1)^2} · \frac{1}{|e(x_{1:t+1})^{(2)}|^2} · 2^{-|e(x_{1:t+1})^{(2)}|}.   (16)

Thus, based on Equation 9, we have:

M̃(x_{t+1} | x_{1:t}) ≈ \frac{t^2}{(t+1)^2} · \frac{|e(x_{1:t})^{(2)}|^2}{|e(x_{1:t+1})^{(2)}|^2} · \frac{2^{|e(x_{1:t})^{(2)}|}}{2^{|e(x_{1:t+1})^{(2)}|}}.   (17)

On one hand, when t is large, |e(x_{1:t})^{(2)}|^2 / |e(x_{1:t+1})^{(2)}|^2 ≈ 1. On the other hand, |e(x_{1:t})^{(2)}| ≈ 2t − \sum_{i=1}^{t} \log_2 P(x_i | x_{1:i−1}), where P(x_i | x_{1:i−1}) is the LLM's predicted probability for the next token. Combining the analysis above, we arrive at the
key approximation:

M̃(x_{t+1} | x_{1:t}) ≈ \frac{t^2}{4(t+1)^2} · P(x_{t+1} | x_{1:t}).   (18)

It should be noted that M̃(x) is a semi-measure and needs to be normalized when converting to a probability. Since t^2 / (4(t+1)^2) is independent of the token, it automatically cancels out during normalization. This result indicates that the prediction probability of the next token by a large language model is essentially a computable approximation to Solomonoff's inductive inference.

4.3 Explaining Various Phenomena in LLMs Using the Solomonoff Prior

Let µ be a computable target probability distribution. We have the following theorem [Hutter, 2005]:

\sum_{t=1}^{∞} \sum_{x_{1:t} ∈ B_t} µ(x_{1:t}) \left( M(0 | x_{1:t}) − µ(0 | x_{1:t}) \right)^2 ≤ \frac{\ln 2}{2} · K(µ) + c < ∞.   (19)

Takeaway 2: An LLM's next-token prediction is not merely statistical pattern matching but an approximation of optimal inductive inference, offering a theoretical basis for its remarkable generalization capabilities on unseen sequences.

Here, µ denotes the target distribution, M(0 | x_{1:t}) is the predictive probability of the next bit being 0 under Solomonoff induction, and µ(0 | x_{1:t}) is the predictive probability under the target distribution given the same conditions. K(µ) denotes the prefix Kolmogorov complexity of µ. Since µ is computable, the term (\ln 2 / 2) · K(µ) + c is finite. B_t denotes the set of all binary strings of length t. For the above infinite series to converge, its terms must approach zero. Specifically, this means that as t → ∞, the prediction error M(0 | x_{1:t}) − µ(0 | x_{1:t}) almost surely (with probability 1 under µ) converges to zero. Therefore, the Solomonoff prior M will eventually converge to the target probability distribution µ. We have previously noted that large language models (LLMs) can be viewed as computable approximations of the Solomonoff prior M.
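To keep these objects concrete, the prior of Eq. (3) and the conditional predictor of Eq. (4) can be computed exactly on a toy machine. The sketch below is ours, not the paper's: in place of a real prefix UTM we hard-code a small prefix-free program table, sum 2^{-ℓ(p)} over programs whose output extends x, and form M(0 | x) as the ratio of two such sums.

```python
from fractions import Fraction

# Toy stand-in for a prefix UTM: a prefix-free set of binary programs,
# each mapped to the (finite) output string it prints.
TOY_MACHINE = {
    "0":   "0101",   # shortest program, weight 2^-1 (Occam's razor)
    "10":  "0100",
    "110": "0111",
    "111": "1000",
}

def M(x: str) -> Fraction:
    """Toy analogue of Eq. (3): sum of 2^{-len(p)} over programs p
    whose output begins with x."""
    return sum((Fraction(1, 2 ** len(p))
                for p, out in TOY_MACHINE.items() if out.startswith(x)),
               Fraction(0))

def M_cond(next_bit: str, x: str) -> Fraction:
    """Toy analogue of Eq. (4): M(next_bit | x) = M(x + next_bit) / M(x)."""
    return M(x + next_bit) / M(x)

print(M("01"))            # total mass of programs printing an extension of "01"
print(M_cond("0", "01"))  # predicted probability that the next bit is 0
# Semi-measure check: the mass over length-1 prefixes does not exceed 1.
assert M("0") + M("1") <= 1
```

In this toy world the short program dominates the prediction, which is the Occam's-razor behavior the convergence theorem above quantifies for the real, uncomputable M.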
Thus, leveraging the above theorem, we can attempt to explain several important phenomena observed in large language models:

1. In-context learning: Since M is a universal prior, for any computable target distribution µ, it is possible to carefully design a context x_{1:t} such that M(0 | x_{1:t}) approximates µ(0 | x_{1:t}), thereby achieving learning effects.
2. Few-shot learning: Adding a few examples (few-shot examples) to the prompt can sometimes significantly improve model performance. These examples increase the value of µ(x_{1:t}), thereby giving them a higher weight in the error term of the theorem and accelerating the convergence of M(0 | x_{1:t}) to the target distribution.
3. Parameter scaling laws: Improving model performance by increasing the number of parameters is essentially a more precise approximation of the Solomonoff prior through higher expressive capacity.
4. Inference scaling laws: Enhancing model performance by allowing more computational steps during inference (e.g., longer context windows or more decoding steps) corresponds to increasing t in the above theorem, enabling M(0 | x_{1:t}) to converge more quickly to the true probability µ(0 | x_{1:t}).

4.4 Few-shot Example Selection Techniques

An excellent theoretical framework should not only provide a reasonable explanation of existing phenomena but also possess the capability to predict unknown scenarios. Based on the theorem discussed in Section 4.3, we introduce a sample selection method for few-shot learning that can significantly enhance model performance.

Takeaway 3: For few-shot learning, the data points exposing the model's current predictive weaknesses (i.e., lower-confidence predictions) may be more valuable for rapid adaptation than reinforcing already well-learned patterns.

Consider a given computational problem for
which there exist multiple computable distributions µ₁, µ₂, . . . that can serve as valid solutions, with their prefix Kolmogorov complexities K(µ) being approximately equal. Suppose we have collected a large number of sample sequences x_{1:t}, where different subsets may originate from different computable distributions. We propose the following sample selection strategy: prioritize sample sequences x_{1:t} that exhibit a larger difference between M(0 | x_{1:t}) and µ(0 | x_{1:t}). This selection criterion can significantly accelerate the convergence of the predictive model M(0 | x_{1:t}).

Taking the task of text classification as an example, our method is implemented as follows: among a large number of samples, we prioritize those where the large language model exhibits lower prediction accuracy (more precisely, samples where the model assigns lower probability to the correct next token), rather than those with high prediction accuracy. This selection strategy allows for a more targeted improvement in the model's few-shot learning performance. In summary, this strategy effectively improves the model's performance in few-shot learning scenarios by selectively choosing informative samples.

5 Experiments

5.1 Setup

We evaluate our few-shot sample selection strategies on three text classification datasets using models from the Qwen2.5 (3B, 7B) [Yang et al., 2024] and Llama (Llama 3.1 8B, Llama 3.2 3B) [Grattafiori et al., 2024] families; all models used in our experiments are instruction-tuned versions. For each task, specific prompt templates were designed (see Appendix F for examples). Our methodology for few-shot example selection involves two phases, using a fixed number of 10 few-shot examples. In the first phase (low-confidence selection), we iterate through available samples for each class, identifying those for which the model, given the current prompt, assigns the lowest confidence to the correct label.
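This greedy low-confidence loop (formalized as Algorithm 1) can be sketched in a few lines of Python. Everything model-specific below is mocked: `confidence` stands in for the LLM's softmax probability of the ground-truth label, and the prompt format and data are invented for illustration. The paper's actual procedure additionally balances selections per class, which this minimal sketch omits.

```python
# Greedy low-confidence few-shot selection (a sketch of Algorithm 1).
def select_low_confidence(data, confidence, base_prompt, k):
    selected = []
    prompt = base_prompt
    for _ in range(k):
        pool = [ex for ex in data if ex not in selected]
        if not pool:
            break
        # Pick the example the current prompt is least confident about.
        x_sel = min(pool, key=lambda ex: confidence(prompt, ex[0], ex[1]))
        selected.append(x_sel)
        prompt += f"\nText: {x_sel[0]}\nLabel: {x_sel[1]}"  # grow the prompt
    return selected

# Mock scorer: confidence grows with how many examples of the same label are
# already in the prompt (a crude stand-in for in-context adaptation).
def mock_confidence(prompt, x, y):
    return 0.5 + 0.1 * prompt.count(f"Label: {y}")

data = [("win a prize now", "spam"), ("lunch at noon?", "ham"),
        ("free gift card", "spam"), ("meeting moved", "ham")]
chosen = select_low_confidence(data, mock_confidence, "Classify:", k=2)
print(chosen)
```

Note that each selection updates the prompt before the next confidence pass, so the procedure adapts: once a "spam" example is in context, the mock model becomes more confident on spam and the next pick tends to come from the still-uncertain class.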
The confidence is the model's softmax output probability for the ground-truth label token. This process continues until K samples are selected per class, forming the low-confidence set E₁. In the second phase (high-confidence selection), serving as a comparative baseline, we similarly select K samples per class with the highest predicted label probabilities to form set E₂. These sets E₁ and E₂ are then used as the few-shot examples, and classification accuracy is evaluated on a held-out test set. Additional details are provided in Appendix D.

Algorithm 1: Confidence-Based Sample Selection for Few-shot Text Classification
Input: Dataset D = {(x_i, y_i)}_{i=1}^{n}, model M, initial prompt p, number of samples K
Output: Low-confidence sample subset D′ for few-shot learning
  Initialize D′ ← ∅              // empty set to store selected samples
  p_current ← p                  // initialize current prompt with base prompt
  for t ← 1 to K do
      min_conf ← 1.0             // initialize minimum confidence score
      x_selected ← null          // sample with minimum confidence
      for (x, y) ∈ D \ D′ do
          p_temp ← p_current ⊕ x         // concatenate prompt with sample
          conf ← M(y | p_temp)           // get probability of correct label
          if conf < min_conf then
              min_conf ← conf
              x_selected ← (x, y)
      D′ ← D′ ∪ {x_selected}             // add selected sample to output set
      p_current ← p_current ⊕ x_selected // update prompt with new sample
  return D′

5.2 Datasets

We evaluate our approach on three benchmark text classification datasets. The first is a binary spam classification dataset [Almeida and Hidalgo, 2011]. The second is a 6-class emotion recognition dataset [Saravia et al., 2018], and the third is a 4-class news
article classification dataset [Zhang et al., 2015]. For the datasets provided by Saravia et al. [2018] and Zhang et al. [2015], which already contain predefined training and test splits, we randomly sampled a subset from the original training set as our selection pool while using the official test set for evaluation. Regarding the Almeida and Hidalgo [2011] dataset, which lacks predefined splits, we first partitioned the data into training and test sets before applying the same sampling strategy.

Table 1: Comparative performance of few-shot example selection strategies on text classification benchmarks. The table displays accuracy (%, Mean@10) for Qwen and Llama model variants on SMS, EMOTION, and AG NEWS when using high-confidence versus low-confidence example selection. Selecting low-confidence examples (Low ↓) consistently yields higher accuracy across models and datasets than high-confidence selection (High ↑).

Model Family | Version/Size | Confidence | SMS | EMOTION | AG NEWS
Qwen 2.5 | 3B | High ↑ | 76.62 | 55.21 | 71.38
Qwen 2.5 | 3B | Low ↓ | 90.07 | 56.03 | 74.67
Qwen 2.5 | 7B | High ↑ | 92.73 | 57.58 | 77.09
Qwen 2.5 | 7B | Low ↓ | 94.60 | 57.68 | 80.35
Llama | 3.2 3B | High ↑ | 64.94 | 36.40 | 45.98
Llama | 3.2 3B | Low ↓ | 73.22 | 41.86 | 47.34
Llama | 3.1 8B | High ↑ | 85.22 | 52.98 | 74.45
Llama | 3.1 8B | Low ↓ | 85.56 | 53.22 | 76.92

5.3 Results and Analysis

The results presented in Table 1 demonstrate that our theoretically grounded strategy of selecting low-confidence samples for few-shot learning consistently yields significant accuracy improvements across all tested models and datasets. This empirical validation aligns with our hypothesis that exposing models to instances where their current predictive understanding is weakest (lower confidence) fosters more rapid and effective adaptation, a principle echoing the error-correction mechanisms inherent in Solomonoff induction. Notably, while the low-confidence strategy remains superior, the magnitude of the performance gain appears to moderate with increasing model scale.
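A quick tabulation of Table 1 (the numbers are transcribed from the table; the aggregation itself is ours) confirms both observations in the text: low-confidence selection wins in every configuration, and the mean gain shrinks with model scale within each family.

```python
# (model, dataset) -> (high_conf_acc, low_conf_acc), copied from Table 1.
results = {
    ("Qwen2.5-3B", "SMS"): (76.62, 90.07),
    ("Qwen2.5-3B", "EMOTION"): (55.21, 56.03),
    ("Qwen2.5-3B", "AG NEWS"): (71.38, 74.67),
    ("Qwen2.5-7B", "SMS"): (92.73, 94.60),
    ("Qwen2.5-7B", "EMOTION"): (57.58, 57.68),
    ("Qwen2.5-7B", "AG NEWS"): (77.09, 80.35),
    ("Llama3.2-3B", "SMS"): (64.94, 73.22),
    ("Llama3.2-3B", "EMOTION"): (36.40, 41.86),
    ("Llama3.2-3B", "AG NEWS"): (45.98, 47.34),
    ("Llama3.1-8B", "SMS"): (85.22, 85.56),
    ("Llama3.1-8B", "EMOTION"): (52.98, 53.22),
    ("Llama3.1-8B", "AG NEWS"): (74.45, 76.92),
}

# Low-confidence selection wins in every single configuration...
assert all(low > high for high, low in results.values())

# ...and the average Low-minus-High gain shrinks with scale in each family.
def mean_gain(model):
    gains = [low - high for (m, _), (high, low) in results.items() if m == model]
    return sum(gains) / len(gains)

for model in ("Qwen2.5-3B", "Qwen2.5-7B", "Llama3.2-3B", "Llama3.1-8B"):
    print(model, round(mean_gain(model), 2))
```

The per-family comparison (3B vs 7B Qwen, 3B vs 8B Llama) is what grounds the claim that larger models benefit less, though still positively, from low-confidence selection.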
This observation might suggest that larger models, possessing greater intrinsic capacity and having learned more robust priors during pre-training, may already have a better initial grasp of the task distribution. Consequently, while they still benefit from the targeted information provided by low-confidence examples, their baseline performance with high-confidence (or even randomly selected) examples is already higher, leading to a less pronounced, though still present, advantage for the low-confidence approach.

6 Conclusion

This paper establishes a formal theoretical link between large language models and Solomonoff’s theory of universal induction. We have proven that LLM training approximates the Solomonoff prior and that their inference mechanism approximates Solomonoff induction. This AIT-grounded framework offers a unified explanation for key LLM phenomena, including in-context learning, few-shot adaptation, and scaling laws, viewing them as outcomes of a system approximating optimal inductive inference. Our theoretical insights directly motivated a novel few-shot example selection strategy: prioritizing samples that expose the model’s predictive weaknesses (lower-confidence predictions) to accelerate adaptation. Experiments across diverse text classification benchmarks confirmed that this approach significantly outperforms conventional high-confidence selection, particularly for smaller models. This result not only validates our theory but also offers a practical method for enhancing LLM efficiency. By bridging empirical LLM success with foundational AIT principles, this work provides both a deeper understanding of these models and actionable strategies for their improvement. We contend that viewing LLMs
as computable approximations of Solomonoff induction paves the way for more principled advancements in their design and application, encouraging a perspective that recognizes them as sophisticated, albeit approximate, universal inductive reasoners.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Tiago Almeida and José Hidalgo. SMS Spam Collection. UCI Machine Learning Repository, 2011. DOI: https://doi.org/10.24432/C5CC84.

Kaikai An, Shuzheng Si, Helan Hu, Haozhe Zhao, Yuchi Wang, Qingyan Guo, and Baobao Chang. Rethinking semantic parsing for large language models: Enhancing llm performance with semantic hints. arXiv preprint arXiv:2409.14469, 2024a.

Kaikai An, Fangkai Yang, Liqun Li, Junting Lu, Sitao Cheng, Shuzheng Si, Lu Wang, Pu Zhao, Lele Cao, Qingwei Lin, et al. Thread: A logic-based data organization paradigm for how-to question answering with retrieval augmented generation. arXiv preprint arXiv:2406.13372, 2024b.

Kaikai An, Li Sheng, Ganqu Cui, Shuzheng Si, Ning Ding, Yu Cheng, and Baobao Chang. Ultraif: Advancing instruction following from the wild. arXiv preprint arXiv:2502.04153, 2025.

Baolong Bi, Shaohan Huang, Yiwei Wang, Tianchi Yang, Zihan Zhang, Haizhen Huang, Lingrui Mei, Junfeng Fang, Zehao Li, Furu Wei, et al. Context-dpo: Aligning language models for context-faithfulness. arXiv preprint arXiv:2412.15280, 2024a.

Baolong Bi, Shenghua Liu, Lingrui Mei, Yiwei Wang, Pengliang Ji, and Xueqi Cheng. Decoding by contrasting knowledge: Enhancing llms’ confidence on edited facts. arXiv preprint arXiv:2405.11613, 2024b.

Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Junfeng Fang, Hongcheng Gao, Shiyu Ni, and Xueqi Cheng. Is factuality enhancement a free lunch for llms? better factuality can lead to worse context-faithfulness.
arXiv preprint arXiv:2404.00216, 2024c.

Baolong Bi, Shenghua Liu, Yiwei Wang, Yilong Xu, Junfeng Fang, Lingrui Mei, and Xueqi Cheng. Parameters vs. context: Fine-grained control of knowledge reliance in language models. arXiv preprint arXiv:2503.15888, 2025.

Léonard Blier and Yann Ollivier. The description length of deep learning models. Advances in Neural Information Processing Systems, 31, 2018.

Manuel Blum. A machine-independent theory of the complexity of recursive functions. J. ACM, 14(2):322–336, April 1967a. ISSN 0004-5411. doi: 10.1145/321386.321395. URL https://doi.org/10.1145/321386.321395.

Manuel Blum. On the size of machines. Information and Control, 11(3):257–265, 1967b. ISSN 0019-9958. doi: https://doi.org/10.1016/S0019-9958(67)90546-3. URL https://www.sciencedirect.com/science/article/pii/S0019995867905463.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165.

Gregory J Chaitin. On the length of programs for computing finite binary sequences. Journal of the ACM (JACM), 13(4):547–569, 1966.

Gregory J Chaitin. Algorithmic information theory. IBM Journal of Research and Development, 21(4):350–359, 1977.

Thomas M Cover, Peter Gacs, and Robert M Gray. Kolmogorov’s contributions to information theory and algorithmic complexity. The Annals of Probability,
17(3):840–865, 1989.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, and et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025a. URL https://arxiv.org/abs/2501.12948.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, and et al. Deepseek-v3 technical report, 2025b. URL https://arxiv.org/abs/2412.19437.

Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, et al. Language modeling is compression. arXiv preprint arXiv:2309.10668, 2023.

Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. Language modeling is compression, 2024. URL https://arxiv.org/abs/2309.10668.

Chao Deng, Jiale Yuan, Pi Bu, Peijie Wang, Zhong-Zhi Li, Jian Xu, Xiao-Hui Li, Yuan Gao, Jun Song, Bo Zheng, et al. Longdocurl: a comprehensive multimodal long document benchmark integrating understanding, reasoning, and locating. arXiv preprint arXiv:2412.18424, 2024.

Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, Baobao Chang, Xu Sun, Lei Li, and Zhifang Sui.
A survey on in-context learning, 2024. URL https://arxiv.org/abs/2301.00234.

Rodney G Downey and Denis R Hirschfeldt. Algorithmic randomness and complexity. Springer Science & Business Media, 2010.

Tom Everitt and Marcus Hutter. Universal artificial intelligence: Practical agents and fundamental challenges. Foundations of Trusted Autonomy, pages 15–46, 2018.

Yuyao Ge, Shenghua Liu, Yiwei Wang, Lingrui Mei, Lizhe Chen, Baolong Bi, and Xueqi Cheng. Innate reasoning is not enough: In-context learning enhances reasoning large language models with less overthinking, 2025. URL https://arxiv.org/abs/2503.19602.

Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, and Mor Geva. Patchscopes: A unifying framework for inspecting hidden representations of language models, 2024. URL https://arxiv.org/abs/2401.06102.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán,
Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, and et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.

Jordi Grau-Moya, Tim Genewein, Marcus Hutter, Laurent Orseau, Grégoire Delétang, Elliot Catt, Anian Ruoss, Li Kevin Wenliang, Christopher Mattern, and Matthew Aitchison. Learning universal predictors. arXiv preprint arXiv:2401.14953, 2024.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. Skywork open reasoner series. https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680, 2025a. Notion Blog.

Minghua He, Fangkai Yang, Pu Zhao, Wenjie Yin, Yu Kang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, and Qi Zhang. Execoder: Empowering large language models with executability representation for code translation. arXiv preprint arXiv:2501.18460, 2025b.

Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt: Meta programming for multi-agent collaborative framework. CoRR, abs/2308.00352, 2023.

Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents, 2022. URL https://arxiv.org/abs/2201.07207.

Marcus Hutter. Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media, 2005.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.

Andrei N Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1(1):1–7, 1965.
Ming Li, Paul Vitányi, et al. An introduction to Kolmogorov complexity and its applications, volume 3. Springer, 2008.

Zhong-Zhi Li, Ming-Liang Zhang, Fei Yin, and Cheng-Lin Liu. Lans: A layout-aware neural solver for plane geometry problem. arXiv preprint arXiv:2311.16476, 2023.

Zhong-Zhi Li, Ming-Liang Zhang, Fei Yin, Zhi-Long Ji, Jin-Feng Bai, Zhen-Ru Pan, Fan-Hu Zeng, Jian Xu, Jia-Xin Zhang, and Cheng-Lin Liu. Cmmath: A chinese multi-modal math skill evaluation benchmark for foundation models. arXiv preprint arXiv:2407.12023, 2024.

Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, Yingying Zhang, Fei Yin, Jiahua Dong, Zhiwei Li, Bao-Long Bi, Ling-Rui Mei, Junfeng Fang, Zhijiang Guo, Le Song, and Cheng-Lin Liu. From system 1 to system 2: A survey of reasoning large language models, 2025a. URL https://arxiv.org/abs/2502.17419.

Zhong-Zhi Li, Duzhen Zhang, Ming-Liang Zhang, Jiaxin Zhang, Zengyan Liu, Yuxuan Yao, Haotian Xu, Junhao Zheng, Pei-Jie Wang, Xiuyi Chen, et al. From system 1 to system 2: A survey of reasoning large language models. arXiv preprint arXiv:2502.17419, 2025b.

Junting Lu, Zhiyang Zhang, Fangkai Yang, Jue Zhang, Lu Wang, Chao Du, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, and Qi Zhang. Axis: Efficient human-agent-computer interaction with api-first llm-based agents, 2025. URL https://arxiv.org/abs/2409.17140.

Ming Lu, Peiyao Guo, Huiqing Shi, Chuntong Cao, and Zhan Ma. Transformer-based image compression. arXiv preprint arXiv:2111.06707, 2021.

Haoyan Luo and Lucia Specia. From understanding to utilization: A survey on explainability for large language models, 2024. URL https://arxiv.org/abs/2401.12874.
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, and Xueqi Cheng. Slang: New concept comprehension of large language models. arXiv preprint arXiv:2401.12585, 2024a.

Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, and Xueqi Cheng. "Not aligned" is not "malicious": Being careful about hallucinations of large language models’ jailbreak. arXiv preprint arXiv:2406.11668, 2024b.

Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Ruibin Yuan, and Xueqi Cheng. Hiddenguard: Fine-grained safe generation with specialized representation router. arXiv preprint arXiv:2410.02684, 2024c.

Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Yuyao Ge, Jun Wan, Yurong Wu, and Xueqi Cheng. a1: Steep test-time scaling law via environment augmented generation, 2025. URL https://arxiv.org/abs/2504.14597.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt, 2023. URL https://arxiv.org/abs/2202.05262.

Neel Nanda and Joseph Bloom. Transformerlens. https://github.com/TransformerLensOrg/TransformerLens, 2022.

OpenAI. Gpt-4 technical report, 2024. URL https://arxiv.org/abs/2303.08774.

OpenAI. Introducing openai o1-preview. https://openai.com/index/introducing-openai-o1-preview/, 2024.

Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115.
Daking Rai, Yilun Zhou, Shi Feng, Abulhair Saparov, and Ziyu Yao. A practical review of mechanistic interpretability for transformer-based language models, 2025. URL https://arxiv.org/abs/2407.02646.

Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. CARER: Contextualized affect representations for emotion recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687–3697, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1404. URL https://www.aclweb.org/anthology/D18-1404.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models, 2024a. URL https://arxiv.org/abs/2402.03300.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024b.

Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.03314.

Ray J Solomonoff. A preliminary report on a general theory of inductive inference. Citeseer, 1960.

Ray J Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7(1):1–22, 1964a.

Ray J Solomonoff. A formal theory of inductive inference. Part II. Information and Control, 7(2):224–254, 1964b.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang,
Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, Chuning Tang, Congcong Wang, Dehao Zhang, Enming Yuan, Enzhe Lu, Fengxiang Tang, Flood Sung, and et al. Kimi k1.5: Scaling reinforcement learning with llms, 2025. URL https://arxiv.org/abs/2501.12599.

Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/.

Alan Mathison Turing et al. On computable numbers, with an application to the entscheidungsproblem. J. of Math, 58(345-363):5, 1936.

Aron Vallinder. Solomonoff induction: A solution to the problem of the priors? 2012.

Jun Wan. Unifying two types of scaling laws from the perspective of conditional kolmogorov complexity. arXiv preprint arXiv:2501.06802, 2025.

Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. Label words are anchors: An information flow perspective for understanding in-context learning, 2023. URL https://arxiv.org/abs/2305.14160.

Lu Wang, Fangkai Yang, Chaoyun Zhang, Junting Lu, Jiaxu Qian, Shilin He, Pu Zhao, Bo Qiao, Ray Huang, Si Qin, Qisheng Su, Jiayi Ye, Yudi Zhang, Jian-Guang Lou, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, and Qi Zhang. Large action models: From inception to implementation, 2025a. URL https://arxiv.org/abs/2412.10047.

Peijie Wang, Zhong-Zhi Li, Fei Yin, Xin Yang, Dekang Ran, and Cheng-Lin Liu. Mv-math: Evaluating multimodal math reasoning in multi-visual contexts. arXiv preprint arXiv:2502.20808, 2025b.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models, 2022. URL https://arxiv.org/abs/2206.07682.

Suhang Wu, Minlong Peng, Yue Chen, Jinsong Su, and Mingming Sun. Eva-kellm: A new benchmark for evaluating knowledge editing of llms, 2023.
URL https://arxiv.org/abs/2308.09954.

Yurong Wu, Fangwen Mu, Qiuhong Zhang, Jinjing Zhao, Xinrun Xu, Lingrui Mei, Yang Wu, Lin Shi, Junjie Wang, Zhiming Ding, et al. Vulnerability of text-to-image models to prompt template stealing: A differential evolution approach. arXiv preprint arXiv:2502.14285, 2025.

Haotian Xu, Xing Wu, Weinong Wang, Zhongzhi Li, Da Zheng, Boyuan Chen, Yi Hu, Shijia Kang, Jiaming Ji, Yingying Zhang, et al. Redstar: Does scaling long-cot data unlock better slow-reasoning systems? arXiv preprint arXiv:2501.11284, 2025.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Yan Zeng, Guoqiang Wei, Jiani Zheng, Jiaxin Zou, Yang Wei, Yuchen Zhang, and Hang Li. Make pixels dance: High-dynamic video generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8850–8860, 2024a.

Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, Tao Kong, and Ruihua Song. What matters in training a
gpt4-style language model with multimodal inputs? In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7930–7957, 2024b.

Jiaxin Zhang, Zhongzhi Li, Mingliang Zhang, Fei Yin, Chenglin Liu, and Yashar Moshfeghi. Geoeval: benchmark for evaluating llms and multi-modal models on geometry problem-solving. arXiv preprint arXiv:2402.10104, 2024a.

Ming-Liang Zhang, Zhong-Zhi Li, Fei Yin, Liang Lin, and Cheng-Lin Liu. Fuse, reason and verify: Geometry problem solving with parsed clauses from diagram. arXiv preprint arXiv:2407.07327, 2024b.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.

Bowen Zheng, Ming Ma, Zhongqiao Lin, and Tianming Yang. Distributed rule vectors is a key mechanism in large language models’ in-context learning, 2024. URL https://arxiv.org/abs/2406.16007.

Jiani Zheng, Lu Wang, Fangkai Yang, Chaoyun Zhang, Lingrui Mei, Wenjie Yin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, and Qi Zhang. Vem: Environment-free exploration for training gui agent with value environment model. arXiv preprint arXiv:2502.18906, 2025.

Huichi Zhou, Kin-Hei Lee, Zhonghao Zhan, Yue Chen, Zhenhao Li, Zhaoyang Wang, Hamed Haddadi, and Emine Yilmaz. Trustrag: Enhancing robustness and trustworthiness in rag, 2025. URL https://arxiv.org/abs/2501.00879.

Appendix

A Limitations

While our work establishes a novel theoretical connection between LLMs and Algorithmic Information Theory, certain limitations should be acknowledged. Firstly, the proposed link between LLM training/inference and the Solomonoff prior/induction is an approximation.
Solomonoff’s framework is uncomputable, and our results demonstrate how LLMs offer a computable approximation, which, while powerful, inherently diverges from the theoretical ideal due to practical constraints such as finite model capacity and optimization heuristics. Secondly, our experimental validation of the few-shot example selection strategy, while promising, was conducted on specific text classification tasks and a subset of LLM architectures. Further research is needed to ascertain the generalizability of these findings across a broader range of tasks, modalities, and model scales. Finally, while our theory provides a unifying lens for phenomena like scaling laws and in-context learning, the precise quantification of factors like the Kolmogorov complexity of target distributions (K(µ)) in real-world LLM scenarios remains a complex endeavor, making direct measurement challenging.

B Impact Statement

This research advances theoretical understanding of LLMs, guiding more principled, efficient development. The AIT connection can inform interpretability, generalization, and data-efficient learning. Our few-shot selection strategy improves LLM performance, especially for smaller models, enhancing accessibility and reducing computational costs. We foresee no direct negative societal impacts from this theoretical work and selection method, as it offers an analytical framework and efficiency gains, not new high-risk capabilities. While any AI advancement could theoretically be misused, we stress that responsible, ethical LLM development is paramount. Our research aims to contribute positively to AI’s scientific understanding and responsible progress.

C Notation and Symbols

This section provides a summary of the key mathematical notations and symbols used throughout the paper.
• T: A specific Turing machine.
• U: A universal Turing machine (UTM).
• p: A program for a Turing machine.
• w: Input for a program p on a TM.
• U(p, w): Output of UTM U with program p and input w.
• U(p): Output of UTM U given program p.
• x1:t: An input sequence of tokens (context) of length t.
• xi: The i-th token in a sequence.
• x∗: A sequence starting with prefix x. In LLM generation (Def. 1), x∗ = x ◦ r, the full sequence generated from prompt x and model output r.
• V: Vocabulary of tokens.
• P(A|B): Conditional probability of A given B.
• Pθ(xt+1|x1:t): Conditional next-token probability of an LLM with parameters θ.
• ℓ(p): Length of program p in bits.
• KU(x): Prefix Kolmogorov complexity of string x with respect to prefix UTM U.
• K(x): Prefix Kolmogorov complexity of x (UTM U implied).
• M(x): The Solomonoff prior probability of string x.
• M(xt+1|x1:t): Solomonoff induction predictive probability for the next token xt+1 given x1:t.
• g: X × S → X∗: Language Model Generation Function (Definition 1).
• X: Set of input prompts for an LLM.
• S: Set of random seeds for an LLM.
• R: Set of possible model outputs (continuations) from an LLM.
• s: A random seed.
• r: Output sequence generated by an LLM.
• ◦: String concatenation.
• f(x, s): A 4-tuple program (m(2), n(x)(2), s(2), e(x)(2)) constructed for an LLM.
• m(2): Binary representation of the core LLM model component.
• e(x)(2): Binary encoding of string x using the LLM and arithmetic coding.
• n(x)(2): Binary representation of the number of decoding iterations for e(x)(2).
• s(2): Binary representation of the random seed s.
• f̄(x, s): Prefix-coded version of the program f(x, s), i.e., (n̄(x)(2), s̄(2), ē(x)(2)).
• n̄(x)(2), s̄(2), ē(x)(2): Elias gamma coded versions of n(x)(2), s(2), e(x)(2) respectively.
• ℓ(f̄(x, s)): Length of the program f̄(x, s).
• M̄(x): Approximate Solomonoff prior defined as Σ_{s=1}^{∞} 2^{−ℓ(f̄(x,s))}.
• F: The set of all prefix-encoded programs f̄(x, s).
• UF: A prefix UTM defined based on F.
• ℓ(e(x)(2)): Length of the binary encoding e(x)(2).
• θ: Parameters of a trained LLM.
• θ′: Parameters of an LLM without training.
• log2: Logarithm to the base 2.
• π: The mathematical constant pi (approx. 3.14159).
• µ: A computable target probability distribution.
• K(µ): Prefix Kolmogorov complexity of the distribution µ.
• Bt: The set of all binary strings of length t.
• c: A generic constant.
• D: A dataset, typically a set of pairs {(xi, yi)}.
• (x, y): A data sample (input x, label y).
• M: A large language model.
• p (in Algorithm 1): Initial prompt.
• K (in Algorithm 1): Number of few-shot samples to select.
• D′: Subset of data selected for few-shot learning.
• p_current: Current prompt being constructed.
• conf: Confidence score (model’s probability for the correct label).
• ⊕: Symbol used in Algorithm 1 to denote prompt concatenation.
• E1: Set of low-confidence few-shot examples.
• E2: Set of high-confidence few-shot examples.

D Detailed Experimental Configuration

All inference tasks for the experiments were conducted on a system equipped with 4 NVIDIA A100 GPUs. The cumulative computation time for running all experiments, encompassing different models, datasets, and both few-shot selection strategies (low-confidence and high-confidence), amounted to approximately 1.5 days. A critical parameter for our experiments was the decoding temperature, which was uniformly set to 0 for all large language model inference steps. This choice is pivotal for ensuring deterministic outputs. By setting the temperature to 0, we effectively select the most probable token at each step
of the generation process, thereby eliminating randomness typically introduced by temperature-based sampling. This determinism is methodologically important for several reasons:

1. Reproducibility: It ensures that results are perfectly reproducible given the same model and input.
2. Alignment with Theoretical Framework: As discussed in Section 3.1, our theoretical framework views LLMs as specific Turing machines, which are inherently deterministic. Setting temperature to 0 makes the practical LLM behavior more closely approximate this deterministic ideal. The model’s output becomes a direct function of its learned parameters and the input sequence, without the confound of stochastic sampling.
3. Fair Comparison: It provides a stable baseline for comparing the efficacy of the few-shot selection strategies (E1 vs. E2). Any observed performance differences can be more confidently attributed to the selection strategy itself, rather than variations due to sampling.

This approach allows for a more rigorous evaluation of how different few-shot examples influence the model’s underlying predictive tendencies, in line with our goal of understanding LLMs as systems approximating Solomonoff induction, which is itself a deterministic (though uncomputable) predictive framework.

E Two Lemmas about Turing Machines

Lemma 4. Let M be a universal Turing machine, and let F be a prefix-free set of programs. Then there exists a universal prefix Turing machine UF such that every program p ∈ F is valid on UF and satisfies:

UF(p) = M(p)

Proof. Since F is prefix-free, there exists a finite string s that is not a prefix of any program in F; such a string s exists because F is prefix-free and hence satisfies the Kraft inequality. We define UF as follows. For any input p:
(a) If p ∈ F, simulate M(p).
(b) Otherwise, check whether p starts with s. If so, write p = s · q and simulate U(q), where U is a standard universal prefix Turing machine. If not, UF diverges (halts with no output).
Hence, UF is a universal prefix Turing machine that satisfies UF(p) = M(p) for all p ∈ F.

Lemma 5. Let U be a universal Turing machine and S be an arbitrary Turing machine. There exists a universal Turing machine U′ that satisfies:
1. U′ embeds S within its structure while maintaining universality.
2. U′ can directly accept inputs intended for S.

Proof. We can construct U′ as follows:
1. U′ first examines its input to determine whether it is intended for direct execution by S or for universal simulation.
2. U′ reserves a special prefix symbol or sequence (let’s call it s) to indicate that the remaining input should be directly processed by S.
3. U′ implements the following algorithm:
   (a) If the input begins with prefix s, strip the prefix and run S directly on the remaining input.
   (b) Otherwise, run U on the input as normal.

F Prompts Used in Experiments

System prompt for SMS:

Classify the following SMS message as either spam or ham. Respond with only one word: "spam" or "ham" (without quotes or any additional text).

Examples:
{examples}

System prompt for EMOTION:

## Emotion Classification Task
Classify the following text into one of these six basic emotions:
- sadness
- joy
- love
- anger
- fear
- surprise

## Response Guidelines:
- Respond with only one word—the most relevant emotion from the list above.
- Do not include quotes, punctuation, or any additional text.
- Choose the emotion that best represents the overall sentiment of the text.
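For concreteness, the SMS system prompt above can be filled in programmatically from a list of selected (message, label) pairs, such as the low-confidence set E1. This is an illustrative sketch; the `Message:`/`Label:` layout of each example is an assumption, since the paper does not show the exact rendering of `{examples}`:

```python
# The SMS system prompt from Appendix F, with its {examples} placeholder.
SMS_PROMPT = (
    "Classify the following SMS message as either spam or ham. "
    'Respond with only one word: "spam" or "ham" '
    "(without quotes or any additional text).\n\n"
    "Examples:\n{examples}"
)

def build_sms_prompt(examples):
    """Render few-shot (message, label) pairs into the SMS system prompt.

    The per-example layout below is an assumed format for illustration.
    """
    rendered = "\n".join(f"Message: {x}\nLabel: {y}" for x, y in examples)
    return SMS_PROMPT.format(examples=rendered)
```

Because examples are appended in selection order, the prompt seen at evaluation time mirrors the prompt built up incrementally during Algorithm 1's selection loop.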
arXiv:2505.15792v1 [cs.CL] 21 May 2025Long-Form Information Alignment Evaluation Beyond Atomic Facts Danna Zheng1, Mirella Lapata1, Jeff Z. Pan1,2 1School of Informatics, University of Edinburgh, UK 2Huawei Edinburgh Research Centre, CSI, UK dzheng@ed.ac.uk, mlap@inf.ed.ac.uk, http://knowledge-representation.org/j.z.pan/ Abstract Information alignment evaluators are vital for various NLG evaluation tasks and trustwor- thy LLM deployment, reducing hallucinations and enhancing user trust. Current fine-grained methods, like FactScore, verify facts individ- ually but neglect inter-fact dependencies, en- abling subtle vulnerabilities. In this work, we introduce MONTAGE LIE, a challenging bench- mark that constructs deceptive narratives by “montaging” truthful statements without intro- ducing explicit hallucinations. We demonstrate that both coarse-grained LLM-based evalua- tors and current fine-grained frameworks are susceptible to this attack, with AUC-ROC scores falling below 65%. To enable more robust fine-grained evaluation, we propose DOVESCORE , a novel framework that jointly verifies factual accuracy and event-order con- sistency. By modeling inter-fact relation- ships, DOVESCORE outperforms existing fine- grained methods by over 8%, providing a more robust solution for long-form text alignment evaluation. Our code and datasets are available at https://github.com/dannalily/DoveScore. 1 Introduction Previous studies (Huang et al., 2025) have identi- fied a critical issue in LLM deployment: factual inaccuracies, commonly known as hallucinations. To address this challenge, recent approaches (Asai et al., 2024; Roy et al., 2024; Ji et al., 2023; Man- akul et al., 2023) introduce reflection mechanisms that perform post-hoc verification by comparing generated texts against retrieved documents or given contexts, and subsequently regenerating er- roneous segments when necessary. 
At the core of these approaches lies the information alignment evaluator, which determines whether a target text accurately aligns with a source text.

[Figure 1: The figure illustrates the limitation of existing fine-grained evaluators such as FactScore and AlignScore, which struggle to detect lies composed of the exact small units that make up the truth.]

Deng et al. (2021) highlight the importance of information alignment evaluation across various NLG evaluation tasks. Consequently, developing robust information alignment evaluators is essential.

Unlike fact-checking tasks (Si et al., 2024; Ma et al., 2024), which typically involve short, sentence-level comparisons, information alignment evaluation often requires reasoning over extended contexts (Zha et al., 2023), where both the source and target may span multiple paragraphs. While recent advances in long-context LLMs (Liu et al., 2025) support coarse-grained alignment evaluation across entire texts, many real-world applications (Min et al., 2023; Zhang et al., 2024; Ye et al., 2025) demand fine-grained evaluators that can assess individual factual units and pinpoint detailed inaccuracies.

Existing fine-grained frameworks (Min et al., 2023; Song et al., 2024b; Wei et al., 2024) typically decompose the target text into atomic facts, verify each against retrieved evidence, and then aggregate the results into an overall judgment. While effective at identifying surface-level
errors, this approach has a critical blind spot: it overlooks relationships and dependencies between facts. Even when all individual statements are accurate, reordering them can reverse implied causal chains and mislead readers. As illustrated in Figure 1, the Lie version suggests that Amy's outing with John triggered Mike's violence, subtly shifting blame onto her. The Truth reveals that the violence preceded the breakup and her subsequent actions. By altering the sequence of accurate statements, the text introduces a discourse-level manipulation that distorts causality without introducing any falsehoods. However, existing fine-grained evaluators are inherently incapable of detecting such manipulations.

To investigate this vulnerability, we introduce MontageLie, a novel benchmark designed to test the limitations of current information alignment evaluators. Drawing inspiration from the cinematic concept of montage¹, which creates new meaning by rearranging real scenes in novel sequences, MontageLie constructs "montage-style lies": deceptive texts composed entirely of truthful statements, deliberately reordered to imply misleading narratives. These manipulations do not introduce fabricated facts but instead distort causal relationships by altering the sequence of events. To systematically assess model robustness, our benchmark includes four levels of difficulty, each reflecting increasing subtlety in the causal distortion.

Such rearranging strategies, while factually accurate at the level of small textual units, exploit deceptive tactics commonly used in human communication and pose a sophisticated challenge that current evaluators are ill-equipped to detect. They also represent a realistic and underexplored attack vector in adversarial prompting (Kim et al., 2024) and misinformation campaigns (Hu et al., 2025; Macko et al., 2025).
Experimental results demonstrate that existing fine-grained frameworks, as well as state-of-the-art long-context LLMs in coarse-grained evaluation settings, struggle to identify these subtle manipulations, achieving AUC-ROC scores consistently below 65%.

In addition, to address the limitations of current fine-grained evaluators, we propose DoveScore (Descriptive and Ordered-Event Verification Score), a fine-grained evaluation framework that explicitly incorporates both atomic factual accuracy and event-ordering consistency. DoveScore decomposes target texts into descriptive and event-based facts, verifies their individual correctness and event sequencing against the source text, and then computes a weighted precision score. Experimental evaluations show that DoveScore outperforms existing fine-grained methods by over 8%.

We anticipate that both our benchmark, MontageLie, and our proposed evaluation method, DoveScore, will offer valuable insights and make a meaningful contribution to the ongoing development of robust and reliable information alignment evaluators.

¹https://en.wikipedia.org/wiki/Montage_(filmmaking)

2 Background and Related Works

2.1 Definition of Information Alignment

The concept of information alignment (also termed factual consistency) was first formally defined by Deng et al. (2021), who proposed a unified evaluation framework for NLG tasks and identified information alignment as a core principle. Initially, it was defined at the token level: each token in the target should be supported by the source. Later, Zha et al. (2023) introduced a more practical sequence-level definition: text b aligns with source a if all information in b appears accurately in a without contradiction. An Information Alignment Evaluator can be formalized as:

f : (a, b) → y   (1)
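In code terms, Eq. (1) is simply a scoring interface. The sketch below is illustrative only (not an API from the paper): it shows the interface, plus a trivial token-overlap instance in the spirit of the token-level definition above.

```python
from typing import Protocol

class AlignmentEvaluator(Protocol):
    """Eq. (1): f maps a (source, target) pair to a score y."""
    def __call__(self, source: str, target: str) -> float: ...

# Trivial instance: fraction of target tokens supported by the source,
# echoing the original token-level definition of alignment.
def token_overlap(source: str, target: str) -> float:
    src_tokens = set(source.lower().split())
    tgt_tokens = target.lower().split()
    if not tgt_tokens:
        return 0.0
    return sum(t in src_tokens for t in tgt_tokens) / len(tgt_tokens)

evaluator: AlignmentEvaluator = token_overlap
assert evaluator("Mike hit Amy. They broke up.", "Mike hit Amy.") == 1.0
```

Any of the evaluators discussed below, from ROUGE to DoveScore, can be wrapped in this shape.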
where higher y indicates stronger alignment of b with respect to a.

2.2 Benchmarks

Early benchmarks mainly addressed sentence-level alignment, such as FEVER (Thorne et al., 2018), FEVEROUS (Aly et al., 2021), and AVERITEC (Schlichtkrull et al., 2023). More recent efforts target longer texts and diverse domains. SummaC (Laban et al., 2022) aggregates six summarization-focused datasets, while TRUE (Honovich et al., 2022) combines 11 datasets across summarization, dialogue, fact verification, and paraphrasing. The latest, LLM-AggreFact (Tang et al., 2024a), curates recent datasets such as AggreFact (Tang et al., 2023), TofuEval (Tang et al., 2024b), and ClaimVerify (Liu et al., 2023a).

Despite broader coverage, these benchmarks mostly focus on unsupported or contradictory claims. Our proposed dataset, MontageLie, introduces a harder case: each individual claim aligns with the source, yet their combination yields a misleading narrative.

2.3 Evaluators

Coarse-grained Evaluators. These methods assess alignment holistically, typically via overlap or semantic similarity. Traditional metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) emphasize lexical overlap, while embedding-based metrics (e.g., BERTScore (Zhang et al., 2020), BARTScore (Yuan et al., 2021)) and fine-tuned models (e.g., BLEURT (Sellam et al., 2020), FactCC (Kryscinski et al., 2020)) capture semantics more effectively. However, most struggle with long-form texts due to model limitations. More recently, LLM-as-Evaluator approaches (Liu et al., 2023b; Luo et al., 2023) prompt LLMs directly for quality scores, enabling more flexible, semantically rich assessments.

Fine-grained Evaluators. Fine-grained methods offer interpretable, diagnostic feedback. QA-based approaches (e.g., QuestEval (Scialom et al., 2021), QAFactEval (Fabbri et al., 2022)) extract entities, generate questions, and verify answers using the source.
These methods are limited by coverage and are computationally intensive. Another strategy segments the target into sentences (Laban et al., 2022; Zha et al., 2023) and verifies them individually, but this ignores cross-sentence dependencies. Recent work leverages LLMs to decompose texts into atomic facts (Min et al., 2023; Song et al., 2024b,a; Wei et al., 2024), offering more precise semantic units and better handling of indirect references.

Yet, all existing methods verify claims independently, failing to detect montage lies—cases where individually correct claims, when combined, form a misleading whole. This independence assumption prevents them from capturing higher-order semantics across claims. To overcome this, we propose DoveScore, a novel framework that explicitly models inter-fact relationships.

3 MontageLie: Information Alignment Evaluation Benchmark

We introduce MontageLie, a benchmark designed to evaluate the ability of evaluators to detect misalignment in text that preserves correct individual claims but disrupts the overall intended narrative. Below we describe (1) the data construction process, (2) dataset statistics and quality checks, and (3) the evaluation metrics.

3.1 Data Construction

The construction process for MontageLie consists of three main stages: seed data sampling, montage-style lie generation, and paraphrasing. The LLM used in the data construction process is gpt-4o-mini-2024-07-18, and prompts are shown in Table 5 in Appendix A.1.

3.1.1 Seed Data Sampling

We start with publicly available long-form summarization datasets and
randomly sample pairs (s, g) from each, where s is the source document and g is its corresponding summary, which we label as the correct target text.

• SummScreen (Chen et al., 2022): TV-series transcripts paired with human-written recaps capturing dialogue-driven narratives and character actions.
• BookSum (Kryscinski et al., 2022): Literary texts paired with long-form summaries, emphasizing long-range causal and temporal dependencies.

These datasets were chosen for their narrative intensity and complementary styles (dialogue versus exposition). We sample uniformly to cover a diverse range of source and target text lengths.

3.1.2 Montage-Style Lie Generation

For each correct target text g, we generate four montage-style lies l_e, l_m, l_h, and l_eh at varying difficulty levels: easy, medium, hard, and extreme hard.

Step 1: Decompose g into Events E. We prompt the LLM to decompose g into a chronological sequence of independent events, denoted as E = [e_1, e_2, ..., e_n]. Figure 6 in Appendix B.2 presents the distribution of the number of decomposed events.

Step 2: Shuffle E with Controlled Difficulty. We define difficulty based on the Shuffle Degree (ShuffleD), which measures how out-of-order a permuted list F = [f_1, f_2, ..., f_n] is relative to the original E. Let π be the unique permutation such that f_i = e_{π(i)}. Then ShuffleD is defined as:

ShuffleD(E, F) = Inv(π) / Inv_max(n) ∈ [0, 1]   (2)

where Inv(π) is the inversion count:

Inv(π) = |{(i, j) | 1 ≤ i < j ≤ n, π(i) > π(j)}|   (3)

and Inv_max(n) = n(n−1)/2 is the maximum possible number of inversions. A lower value of ShuffleD implies a sequence more similar to the original, making the lie harder to detect. We define the difficulty levels as follows: easy when ShuffleD ∈ [0.80, 0.90], medium when ShuffleD ∈ [0.55, 0.65], hard when ShuffleD ∈ [0.30, 0.40], and extreme hard when ShuffleD ∈ [0.05, 0.15]. These intervals are disjoint to ensure clear separation between difficulty levels.
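As a concrete illustration (a sketch, not the authors' released code), the shuffle degree of Eq. (2) reduces to a normalized inversion count, and a permutation with an exact target inversion count can be built by decoding a Lehmer code, mirroring the Algorithm 1 construction used in data generation:

```python
import random

def inversion_count(perm):
    """Inv(pi): number of pairs (i, j) with i < j and perm[i] > perm[j]."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def shuffle_degree(original, shuffled):
    """ShuffleD(E, F) in Eq. (2): normalized inversion count of the
    permutation pi satisfying shuffled[i] == original[pi[i]]."""
    pi = [original.index(f) for f in shuffled]
    return inversion_count(pi) / (len(pi) * (len(pi) - 1) // 2)

def permutation_with_inversions(n, target, rng=random):
    """Build a permutation of range(n) with exactly `target` inversions:
    start from the maximal Lehmer code, randomly decrement entries until
    the code sums to `target`, then decode the code."""
    assert 0 <= target <= n * (n - 1) // 2
    code = [n - 1 - i for i in range(n)]      # maximal Lehmer code
    total = sum(code)
    while total > target:
        i = rng.randrange(n)
        if code[i] > 0:
            code[i] -= 1
            total -= 1
    remaining = list(range(n))
    return [remaining.pop(c) for c in code]   # Lehmer decode

events = ["e1", "e2", "e3", "e4"]
assert shuffle_degree(events, events) == 0.0          # identical order
assert shuffle_degree(events, events[::-1]) == 1.0    # fully reversed
perm = permutation_with_inversions(6, 7)
assert inversion_count(perm) == 7
```

The decode step works because the i-th Lehmer code entry counts exactly the inversions contributed by position i, so the entries sum to Inv(π).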
To generate a permutation corresponding to a given difficulty level, we first determine the appropriate range for Inv(π) and randomly sample a target inversion count within this range. Figure 7 (see Appendix B.2) shows the distribution of the sampled inversion counts in our data construction process. We then use Lehmer codes (Knuth, 1998) to construct a permutation² that has the exact sampled inversion count. As illustrated in Algorithm 1 (see Appendix B.1), the process begins with the maximal Lehmer code and iteratively decrements its entries at random until the total inversion count equals the target. The resulting Lehmer code is then decoded into the corresponding permutation.

²Exhaustively enumerating permutations is computationally infeasible due to the combinatorial explosion of the permutation space.

Table 1: MontageLie Benchmark Statistics
Property | Number
Total Instances | 1303
Instances from BookSum | 637
Instances from SummScreen | 666
Word Lengths of Source Text (Min, Max, Avg) | (312, 9937, 4201.79)
Word Lengths of Target Text (Min, Max, Avg) | (62, 991, 258.61)

Step 3: Incremental Lie Generation. After obtaining F, we generate the lie text incrementally. Giving the full F to the LLM often leads to failure in preserving the order, so instead, we use an incremental generation strategy. We start with the first event in F and sequentially add the rest. At
each step, the LLM is explicitly instructed that the new event occurs after the existing text and should be integrated as the next logical event in the narrative. The LLM is asked to continue the paragraph in a natural and coherent manner, without inserting unnecessary transitional phrases or restructuring earlier content. This approach enables precise control over event order while ensuring that the generated lie remains fluent and lexically close to the original g, yet semantically altered due to the reordering.

3.1.3 Paraphrasing

To test whether alignment evaluators are sensitive to narrative variation, we also generate paraphrases of both the correct target text and the lies. These paraphrases preserve the meaning of the original but present the events in a different narrative order or style. We instruct the LLM to rephrase the text using a different narrative technique (chronological, flashback, interjection, supplementary narration) than the original. For each correct target text g and its lies l_e, l_m, l_h, l_eh, we generate corresponding paraphrases g′, l′_e, l′_m, l′_h, and l′_eh.

3.2 Dataset Summary

3.2.1 Dataset Format and Statistics

Each data instance in the MontageLie benchmark is represented as a tuple:

d = <s, g, l_e, l_m, l_h, l_eh, g′, l′_e, l′_m, l′_h, l′_eh>   (4)

where s is the source text. The texts g and g′ are aligned with the source text and labeled as 1, while l_e, l_m, l_h, l_eh and their paraphrases are not aligned with the source and are labeled as 0.

Table 1 presents detailed statistics for MontageLie, which comprises 1,303 data instances.

Table 2: Human evaluation on the quality of the MontageLie Benchmark.
Difficulty | SemanticS | EventI | Coherence | SemanticF | StructuralV
Easy | 100.00 | 100.00 | 94.00 | 98.00 | 98.00
Medium | 100.00 | 98.00 | 98.00 | 100.00 | 98.00
Hard | 98.00 | 100.00 | 96.00 | 98.00 | 100.00
Extreme | 96.00 | 100.00 | 98.00 | 100.00 | 98.00
SemanticS denotes Semantic Shift, EventI denotes Event Integrity, SemanticF denotes Semantic Fidelity, and StructuralV denotes Structural Variation. The first three criteria apply to generated lies; the last two apply to paraphrases.

Source texts contain up to 9,937 words, while target texts have lengths of up to 991 words. The length distributions are illustrated in Figure 5 (see Appendix B.2).

3.2.2 Data Quality

To verify data quality, we conducted a human evaluation, separately assessing montage-style lies and paraphrases.

Montage-style lies were evaluated on three criteria: Semantic Shift — whether the meaning differs from g; Event Integrity — whether the core events remain unchanged (no additions, deletions, or alterations); and Coherence — whether the text reads smoothly, without awkward transitions or overuse of conjunctions. Paraphrases were assessed on two criteria: Semantic Fidelity — whether the meaning remains faithful to g; and Structural Variation — whether the narrative structure is meaningfully altered.

Annotators labeled each instance as yes or no. To standardize evaluation, two annotators jointly labeled five instances, achieving agreement scores of 100%, 100%, and 87.5% for montage-style lies, and 87.5% and 75.0% for paraphrases. We then randomly sampled 50 instances, yielding 200 lies and 200 paraphrases for human evaluation. Each annotator evaluated 25 instances. As shown in Table 2, most examples meet all criteria, confirming the overall quality of the benchmark.

3.3 Evaluation Metrics

We assess the effectiveness of the alignment evaluator using the AUC-ROC score, which quantifies the evaluator's ability to distinguish correct target texts from deceptive alternatives ("lies") across varying levels of difficulty. For each target text, the evaluator assigns a score indicating how well it aligns with the given source. To compute the AUC-ROC for a specific difficulty level, we compare the scores of correct target texts with those of their corresponding deceptive counterparts. This process is repeated independently for each of the four difficulty levels. Finally, we report the overall effectiveness as the average AUC-ROC across all difficulty levels.

4 MontageLie Challenges Evaluators

4.1 Evaluated Evaluators

Coarse-grained Evaluators. MontageLie comprises long-form source and target texts, which limits the applicability of evaluators that are not designed to handle long contexts. Therefore, we restrict our evaluation to evaluators that support long-form input. Specifically, we report ROUGE-1, ROUGE-2, and ROUGE-L scores, as well as evaluations from LLM-as-Evaluator methods. For the latter, we adopt prompts adapted from G-Eval's consistency evaluation template (see Table 6 in Appendix A.2), and employ long-context LLMs, including gpt-4o-mini-2024-07-18, Qwen-3 models (1.7B, 4B, 8B, 14B, 32B), and various Llama3-instruct models (1B, 3B, 8B, and 70B).

Fine-grained Evaluators. As discussed in Section 2.3, existing fine-grained evaluators inherently struggle to detect montage-lies. To empirically validate this limitation, we report results from four representative methods: SummaC-ZS, SummaC-Conv, AlignScore, and FactScore. SummaC-ZS/Conv and AlignScore decompose the target text into individual sentences. SummaC-ZS/Conv leverages a natural language inference (NLI)-based model to assess the factual consistency of each sentence with the source, whereas AlignScore employs a fine-tuned model for the same purpose.
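For reference, the per-difficulty AUC-ROC protocol of Section 3.3 (correct targets versus their lies) is equivalent to a pairwise win rate. A minimal sketch, with made-up evaluator scores (illustrative only):

```python
def auc_roc(pos_scores, neg_scores):
    """AUC-ROC as a pairwise win rate: the probability that a correct
    target text receives a higher score than a montage-style lie
    (ties count as 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scores from some evaluator on one difficulty level:
correct_scores = [0.91, 0.85, 0.72]
lie_scores = [0.88, 0.60, 0.55]
level_auc = auc_roc(correct_scores, lie_scores)
# The headline numbers in Table 3 are the mean of this quantity
# over the four difficulty levels.
```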
In contrast, FactScore decomposes the target text into atomic facts and verifies each one via LLM prompting. In our experiments, we use gpt-4o-mini-2024-07-18 as the LLM backbone for FactScore.

4.2 Experiment Setup

For all evaluations, we apply greedy decoding with a temperature of 0 to ensure deterministic outputs. All LLM-based evaluations are conducted in a zero-shot setting.

Table 3: AUC-ROC performance of existing evaluators on MontageLie.
Evaluator | Easy | Medium | Hard | Extreme | Avg
Coarse-Grained:
ROUGE-1 | 53.92 | 54.06 | 53.77 | 54.11 | 53.96
ROUGE-2 | 54.57 | 54.93 | 54.71 | 54.63 | 54.71
ROUGE-L | 53.91 | 54.15 | 53.91 | 54.40 | 54.09
qwen-3-1.7b | 51.14 | 51.21 | 50.76 | 50.48 | 50.90
qwen-3-4b | 60.27 | 58.90 | 57.50 | 54.93 | 57.90
qwen-3-8b | 64.43 | 60.98 | 57.83 | 54.20 | 59.36
qwen-3-14b | 62.22 | 61.88 | 60.63 | 57.95 | 60.67
qwen-3-32b | 65.80 | 65.80 | 63.13 | 59.24 | 63.49
llama-3.2-instruct-1b | 50.20 | 49.79 | 49.75 | 49.70 | 49.86
llama-3.2-instruct-3b | 49.44 | 49.30 | 49.84 | 50.08 | 49.66
llama-3.1-instruct-8b | 56.53 | 56.52 | 55.22 | 53.60 | 55.47
llama-3.3-instruct-70b | 61.14 | 60.87 | 58.41 | 54.44 | 58.71
gpt-4o-mini | 68.77 | 66.39 | 63.17 | 58.57 | 64.23
Fine-Grained:
SummaC-ZS | 51.13 | 51.69 | 51.96 | 51.70 | 51.62
SummaC-Conv | 56.54 | 56.02 | 55.17 | 55.68 | 55.85
AlignScore | 56.51 | 56.89 | 56.82 | 56.30 | 56.63
FactScore (gpt-4o-mini) | 50.85 | 51.06 | 50.24 | 49.65 | 50.45

[Figure 2: Violin plots of scores from gpt-4o-mini on MontageLie. The similar distributions for original and rephrased targets indicate robustness to rephrasing. Comparable trends are observed for other evaluators (see Appendix C).]

4.3 Results

Table 3 summarizes the performance of existing evaluators on the MontageLie benchmark.

Upper Bound of Current Evaluators Remains Low. The best-performing model, gpt-4o-mini, achieves an average
AUC-ROC of only 64.23%, reflecting the inherent difficulty of the task and underscoring the pressing need for more effective information alignment evaluators.

Fine-Grained Evaluators Suffer from Inherent Difficulty. Fine-grained evaluation methods struggle to detect montage-style lies, as they often overlook the relationships between decomposed factual units. All evaluated methods achieve AUC-ROC scores below 57%, highlighting their limited effectiveness. SummaC and AlignScore outperform the LLM-based FactScore, likely due to their coarser sentence-level segmentation, whereas FactScore operates at the more granular level of atomic facts. When both methods are based on gpt-4o-mini, the fine-grained FactScore underperforms its coarse-grained counterpart by 13.78%.

Lexical Similarity Limits ROUGE's Effectiveness. The ROUGE score achieves an AUC-ROC of approximately 54%, as the correct target text and the montage lies are lexically similar by design during data construction. As a result, traditional n-gram-based metrics like ROUGE struggle to effectively distinguish between true and false content.

Small LLMs (<4B) Are Ineffective. LLMs with fewer than 4 billion parameters in both the Qwen3 and LLaMA3 families perform poorly, achieving AUC-ROC scores below 51%, which suggests they lack the capacity to reliably detect montage-style lies. As shown in the score distribution in Figure 8 (see Appendix C), these smaller LLMs tend to assign high scores to both accurate target texts and montage-style lies.

Qwen3 Outperforms LLaMA3 Counterparts. Across comparable model sizes, Qwen3 consistently outperforms LLaMA3: Qwen-3-8B outperforms LLaMA-3.1-instruct-8B by ∼4%, and Qwen-3-32B surpasses LLaMA-3.3-70B by ∼5%.

[Figure 3: The illustration of DoveScore, which includes three core components: the Decomposer, the Fact Checker, and the Sorter. In the worked example, all decomposed facts verify against the source (S_D = 1, S_E = 1), but the target reorders the events, giving S_EO = 1 − ShuffleD = 1/3 and DoveScore = ¾·S_E·S_EO + ¼·S_D = 0.5.]

Evaluators Are Robust to Narrative Variations. We find that existing evaluators are generally robust to variations in narrative technique. As detailed in Section 3.1.3, we rephrased each target text to alter the narrative order while preserving its original semantics. As shown in Figure 2, the score distributions for the original and rephrased
target texts are highly similar. This suggests that current models are not easily misled by paraphrasing, a desirable property in factuality evaluation.

5 DoveScore: A Fine-Grained Information Alignment Evaluation Framework

As discussed in Section 4, current fine-grained evaluation methods are insufficiently equipped to detect misinformation tactics like montage-style lies. To address these limitations, we propose DoveScore, a novel fine-grained evaluation framework designed to enable a comprehensive and nuanced assessment of information alignment.

5.1 Method

As illustrated in Figure 3, the DoveScore framework consists of three core components: the Decomposer, the Fact Checker, and the Sorter.

Decomposer. Unlike conventional methods that uniformly segment text into sentences or atomic facts, DoveScore accounts for the inherent heterogeneity among factual elements. Specifically, we distinguish between two categories of facts: descriptive facts and event facts. Descriptive facts convey stable, order-independent attributes (e.g., "Octopuses have three hearts"), while event facts denote temporally ordered actions or states (e.g., "Dr. Lin submitted her resignation"). Based on this taxonomy, the decomposer partitions the target text into two lists: the event facts list (F_E) and the descriptive facts list (F_D).

Fact Checker. The fact checker verifies each fact in F_E and F_D against the source text, resulting in two validated subsets: the set of correct event facts (F^c_E) and the set of correct descriptive facts (F^c_D). The Event Score is then computed as S_E = |F^c_E| / |F_E|, and the Descriptive Score as S_D = |F^c_D| / |F_D|.

Sorter. The sorter reorganizes the verified correct event facts F^c_E into two ordered sequences. The first sequence, denoted Sorted(F^c_E, s), reflects their chronological order in the source text, while the second, Sorted(F^c_E, t), reflects their order in the target text.
The similarity between these sequences is measured by the Event Order Score, defined as:

S_EO = 1 − ShuffleD(Sorted(F^c_E, s), Sorted(F^c_E, t))

[Figure 4: Score distribution comparison of fine-grained evaluators: (a) AlignScore, (b) FactScore, (c) DoveScore (ours). SummaC exhibits a similar pattern to AlignScore, assigning low scores to both correct and wrong target texts (see Appendix C).]

Table 4: ROC-AUC of DoveScore on MontageLie, compared to top scores by existing evaluators at each difficulty level.
Method | Easy | Medium | Hard | Extreme | Avg
Coarse-Grained | 68.77 | 66.39 | 63.17 | 59.24 | 64.23
Fine-Grained | 56.54 | 56.89 | 56.82 | 56.30 | 56.63
DoveScore (Fine-Grained) | 69.06 | 68.27 | 65.80 | 57.87 | 65.25

Score Computation. The final DoveScore is computed by combining S_E, S_EO, and S_D using a frequency-based weighting factor α = |F_E| / (|F_E| + |F_D|), which adjusts the relative importance of event and descriptive facts according to their frequency in the target text:

DoveScore = α · S_E · S_EO + (1 − α) · S_D   (5)

5.2 Experiment Setup

For the decomposer, fact checker, and sorter used in this experiment, we prompt the LLM with the prompts shown in Table 7, Table 8, and Table 9. We use gpt-4o-mini-2024-07-18 as the LLM with temperature 0.

5.3 Results

As shown in Table 3, DoveScore achieves the highest average AUC-ROC score of 65.25%, outperforming existing fine-grained evaluators by over 8%. Compared to FactScore, which uses the same LLM backbone, DoveScore
improves performance by 14.8%. The distribution of its sub-scores (S_E, S_EO, S_D) in Figure 10 (Appendix C) highlights the contribution of S_EO in enhancing the model's ability to distinguish between truthful and deceptive texts.

Further evidence from Figure 4 reveals systematic differences in how evaluators handle complex deception styles. Sentence-split-based methods such as SummaC and AlignScore show limited discrimination, often assigning similarly low scores to both correct and incorrect targets—likely due to inference ambiguity from rigid segmentation. FactScore, which evaluates at the fact level, tends to assign uniformly high scores across targets, ignoring inter-fact coherence. In contrast, DoveScore consistently assigns higher scores to correct targets and lower scores to deceptive ones, reflecting stronger discrimination capabilities and robustness to diverse misinformation strategies.

6 Conclusion

In this work, we present MontageLie, a novel benchmark designed to reveal a critical vulnerability in current information alignment evaluators: their inability to detect misleading narratives composed of reordered yet truthful statements. We show that both coarse-grained and fine-grained methods struggle with such manipulations, with AUC-ROC scores falling below 65% on MontageLie. We propose DoveScore, a fine-grained evaluation framework that jointly considers factual accuracy and event-order consistency, improving performance from 50.45% to 65.25% over FactScore. DoveScore is designed as a modular framework, allowing each component to be independently refined and improved. Among these, the sorter stands out as a critical and currently underexplored component that deserves targeted research efforts. While substantial room for further exploration remains, our work marks an important step toward more robust alignment evaluation for long-form content.
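As a recap of Section 5.1, the scoring of Eq. (5) can be sketched in a few lines. This is an illustrative re-implementation, not the released code, instantiated with the worked example from Figure 3 (three verified event facts, one verified descriptive fact, reordered events), which yields a DoveScore of 0.5:

```python
def inversions(perm):
    """Inversion count used inside ShuffleD."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

def dove_score(event_facts, desc_facts, correct_events, correct_descs, source_order):
    """Eq. (5): alpha * S_E * S_EO + (1 - alpha) * S_D, with
    alpha = |F_E| / (|F_E| + |F_D|). `correct_events` is assumed to be
    in target-text order; `source_order` gives the source-text order."""
    s_e = len(correct_events) / len(event_facts)
    s_d = len(correct_descs) / len(desc_facts)
    # S_EO = 1 - ShuffleD between source order and target order
    pi = [source_order.index(f) for f in correct_events]
    n = len(pi)
    s_eo = 1.0 - inversions(pi) / (n * (n - 1) / 2) if n > 1 else 1.0
    alpha = len(event_facts) / (len(event_facts) + len(desc_facts))
    return alpha * s_e * s_eo + (1 - alpha) * s_d

# Figure 3 example: every fact verifies, but the target reorders events.
events_target_order = ["Amy went to the movies with John.",
                       "Mike hit Amy.",
                       "Mike and Amy broke up."]
events_source_order = ["Mike hit Amy.",
                       "Mike and Amy broke up.",
                       "Amy went to the movies with John."]
descriptive = ["Amy is a beautiful girl."]
score = dove_score(events_target_order, descriptive,
                   events_target_order, descriptive, events_source_order)
assert abs(score - 0.5) < 1e-9  # matches the 0.5 shown in Figure 3
```

With perfect factual accuracy, only the order term S_EO penalizes the montage-style reordering, which is precisely the signal FactScore-style evaluators discard.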
Limitations

MontageLie Benchmark. The MontageLie benchmark used in our study is entirely generated by large language models rather than written by human annotators. While this approach enables scalable and diverse data generation, it may introduce distributional artifacts or stylistic patterns that do not fully reflect real-world human-written misinformation. Moreover, the benchmark currently only covers English. Extending MontageLie to include human-curated data and multilingual variants would improve its generalizability and practical relevance.

DoveScore Framework. In this work, we demonstrate DoveScore using GPT-4o-mini as the backbone model. However, DoveScore is designed as a modular and model-agnostic framework—each component (e.g., evidence extractor, fact scorer, and sorter) can be flexibly instantiated with different language models. Future work could explore alternative backbones to assess the robustness and adaptability of the framework under varying resource constraints and capabilities. Within DoveScore, our current sorter module takes the full list of candidate responses as input and predicts a globally reordered list. This design is chosen over pairwise comparison-based sorting methods to reduce computational complexity. However, the trade-off between efficiency and ranking accuracy remains an open research question. More sophisticated or hybrid sorting strategies may offer better performance while maintaining tractable runtime.

Ethics Statement

All data used in this study are derived from publicly available datasets and do not contain any personally identifiable or sensitive information. The additional data used for the MontageLie benchmark were generated using LLMs. To
ensure the quality of the generated data, we conducted a manual evaluation with two human annotators: one is one of the authors of this paper, and the other is an external contributor who received compensation at the standard hourly rate designated for tutors and demonstrators at our university.

References

Rami Aly, Zhijiang Guo, Michael Sejr Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. The fact extraction and VERification over unstructured and structured information (FEVEROUS) shared task. In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), pages 1–13, Dominican Republic. Association for Computational Linguistics.

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. SummScreen: A dataset for abstractive screenplay summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602–8615, Dublin, Ireland. Association for Computational Linguistics.

Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7580–7605, Online and Punta Cana, Dominican Republic.
Association for Computational Linguistics. Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. QAFactEval: Improved QA- based factual consistency evaluation for summariza- tion. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 2587–2601, Seattle, United States. Asso- ciation for Computational Linguistics. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas 9 Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. In Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering , pages 161– 175, Dublin, Ireland. Association for Computational Linguistics. Beizhe Hu, Qiang Sheng, Juan Cao, Yang Li, and Danding Wang. 2025. Llm-generated fake news in- duces truth decay in news ecosystem: A case study on neural news recommendation. arXiv preprint arXiv:2504.20013 . Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A survey on hallucination in large lan- guage models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. , 43(2). Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. 2023. Towards mitigating LLM hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 1827–1843, Singapore. Associ- ation for Computational Linguistics. Jinhwa Kim, Ali Derakhshan, and Ian Harris. 2024. | https://arxiv.org/abs/2505.15792v1 |
Robust safety classifier against jailbreaking attacks: Adversarial prompt shield. In Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024) , pages 159–170, Mexico City, Mexico. Asso- ciation for Computational Linguistics. Donald E Knuth. 1998. The Art of Computer Program- ming: Sorting and Searching, volume 3 . Addison- Wesley Professional. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 9332–9346, Online. Association for Computa- tional Linguistics. Wojciech Kryscinski, Nazneen Rajani, Divyansh Agar- wal, Caiming Xiong, and Dragomir Radev. 2022. BOOKSUM: A collection of datasets for long-form narrative summarization. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2022 , pages 6536–6558, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI- based models for inconsistency detection in summa- rization. Transactions of the Association for Compu- tational Linguistics , 10:163–177.Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out , pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Jiaheng Liu, Dawei Zhu, Zhiqi Bai, Yancheng He, Huanxuan Liao, Haoran Que, Zekun Wang, Chenchen Zhang, Ge Zhang, Jiebin Zhang, and 1 others. 2025. A comprehensive survey on long context language modeling. arXiv preprint arXiv:2503.17407 . Nelson Liu, Tianyi Zhang, and Percy Liang. 2023a. Evaluating verifiability in generative search engines. InFindings of the Association for Computational Linguistics: EMNLP 2023 , pages 7001–7025, Singa- pore. Association for Computational Linguistics. 
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023b. G-eval: Nlg evaluation using gpt-4 with better human align- ment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 2511–2522. Zheheng Luo, Qianqian Xie, and Sophia Anani- adou. 2023. Chatgpt as a factual inconsistency evaluator for text summarization. arXiv preprint arXiv:2303.15621 . Huanhuan Ma, Weizhi Xu, Yifan Wei, Liuji Chen, Liang Wang, Qiang Liu, Shu Wu, and Liang Wang. 2024. EX-FEVER: A dataset for multi-hop ex- plainable fact verification. In Findings of the As- sociation for Computational Linguistics: ACL 2024 , pages 9340–9353, Bangkok, Thailand. Association for Computational Linguistics. Dominik Macko, Aashish Anantha Ramakrishnan, Ja- son Samuel Lucas, Robert Moro, Ivan Srba, Adaku Uchendu, and Dongwon Lee. 2025. Beyond spec- ulation: Measuring the growing presence of llm- generated texts in multilingual disinformation. arXiv preprint arXiv:2503.23242 . Potsawee Manakul, Adian Liusie, and Mark Gales. 2023. SelfcheckGPT: Zero-resource black-box hal- lucination detection for generative large language models. In The 2023 Conference on Empirical Meth- ods in Natural Language Processing . Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettle- moyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 12076–12100, Singa- pore. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method | https://arxiv.org/abs/2505.15792v1 |
for automatic evalu- ation of machine translation. In Proceedings of the 10 40th Annual Meeting of the Association for Compu- tational Linguistics , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Nirmal Roy, Leonardo F. R. Ribeiro, Rexhina Blloshmi, and Kevin Small. 2024. Learning when to retrieve, what to rewrite, and how to respond in conversa- tional QA. In Findings of the Association for Com- putational Linguistics: EMNLP 2024 , pages 10604– 10625, Miami, Florida, USA. Association for Com- putational Linguistics. Michael Sejr Schlichtkrull, Zhijiang Guo, and Andreas Vlachos. 2023. A Veritec: A dataset for real-world claim verification with evidence from the web. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track . Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summariza- tion asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natu- ral Language Processing , pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text genera- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 7881–7892, Online. Association for Computational Linguistics. Jiasheng Si, Yibo Zhao, Yingjie Zhu, Haiyang Zhu, Wenpeng Lu, and Deyu Zhou. 2024. CHECKWHY: Causal fact verification via argument structure. In Proceedings of the 62nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 15636–15659, Bangkok, Thai- land. Association for Computational Linguistics. Hwanjun Song, Hang Su, Igor Shalyminov, Jason Cai, and Saab Mansour. 2024a. FineSurE: Fine-grained summarization evaluation using LLMs. 
In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers) , pages 906–922, Bangkok, Thailand. Associa- tion for Computational Linguistics. Yixiao Song, Yekyung Kim, and Mohit Iyyer. 2024b. VeriScore: Evaluating the factuality of verifiable claims in long-form text generation. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 9447–9474, Miami, Florida, USA. Association for Computational Linguistics. Liyan Tang, Tanya Goyal, Alex Fabbri, Philippe La- ban, Jiacheng Xu, Semih Yavuz, Wojciech Kryscin- ski, Justin Rousseau, and Greg Durrett. 2023. Un- derstanding factual errors in summarization: Errors,summarizers, datasets, error detectors. In Proceed- ings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 11626–11644, Toronto, Canada. Association for Computational Linguistics. Liyan Tang, Philippe Laban, and Greg Durrett. 2024a. MiniCheck: Efficient fact-checking of LLMs on grounding documents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Lan- guage Processing , pages 8818–8847, Miami, Florida, USA. Association for Computational Linguistics. Liyan Tang, Igor Shalyminov, Amy Wong, Jon Burnsky, Jake Vincent, Yu’an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, and Kathleen McKeown. 2024b. TofuEval: Evaluating hallucinations of LLMs on topic-focused dialogue summarization. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages 4455–4480, Mexico City, Mexico. Association for Computational Linguistics. James | https://arxiv.org/abs/2505.15792v1 |
Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) , pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Zixia Hu, Jie Huang, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, and Quoc V Le. 2024. Long-form factuality in large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems . Junjie Ye, Guanyu Li, SongYang Gao, Caishuang Huang, Yilong Wu, Sixian Li, Xiaoran Fan, Shi- han Dou, Tao Ji, Qi Zhang, Tao Gui, and Xuanjing Huang. 2025. ToolEyes: Fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios. In Proceedings of the 31st International Conference on Computational Linguis- tics, pages 156–187, Abu Dhabi, UAE. Association for Computational Linguistics. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. BARTScore: Evaluating generated text as text gener- ation. In Advances in Neural Information Processing Systems . Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. AlignScore: Evaluating factual consistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , 11 pages 11328–11348, Toronto, Canada. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations . Yue Zhang, Jingxuan Zuo, and Liqiang Jing. 2024. Fine-grained and explainable factuality evaluation for multimodal summarization. arXiv preprint arXiv:2402.11414 . 
A Prompts

A.1 Prompts Used in Data Construction

The prompt used in MontageLie construction is provided in Table 5.

A.2 Prompts Used in LLM-based Evaluators

The prompts used in the coarse-grained LLM-as-evaluator setting are provided in Table 6. The prompts used in the DoveScore decomposer, fact checker, and sorter are provided in Tables 7, 8, and 9, respectively.

B About MontageLie

B.1 Shuffle Algorithm Used in Montage-Style Lie Generation

Algorithm 1 shows how to obtain a randomly shuffled list given a list and a target inversion count. To generate a random shuffled list with a given number of inversions, the algorithm begins by assigning the maximum possible inversions to each position, assuming the list is fully reversed. It then randomly decreases these inversion values until the total number of inversions equals the target. This creates an inversion sequence that still respects the desired count but adds randomness. Finally, it builds the shuffled list by selecting elements from a sorted version of the original list, guided by the inversion values, ensuring the resulting permutation has exactly the specified number of inversions.

B.2 Data Distribution

The distributions of the number of decomposed events during MontageLie construction, the sampled inversion count when shuffling event lists, and the source and target text lengths are shown in Figures 6, 7, and 5.

C Score Distribution on MontageLie

The score distributions of the coarse-grained evaluators and fine-grained evaluators are shown in Figures
8 and 9. The subscore distribution of DoveScore is provided in Figure 10.

Algorithm 1: RandomShuffleWithInversions
Input: A = (a_1, a_2, ..., a_m): original list of length m; K: target inversion count, 0 ≤ K ≤ m(m−1)/2
Output: B: a random permutation of A with Inv(B) = K
/* Step 1: Initialize maximum inversion sequence */
1: m ← |A|
2: for i ← 1 to m do
3:   e_i ← m − i   /* maximum possible inversions at position i */
/* Step 2: Reduce total inversions to desired K */
4: Δ ← m(m−1)/2 − K
5: while Δ > 0 do
6:   i ← UniformRandomInteger(1, m)
7:   if e_i > 0 then
8:     e_i ← e_i − 1
9:     Δ ← Δ − 1
/* Step 3: Decode inversion sequence into permutation */
10: C ← sorted copy of A
11: B ← empty list of size m
12: for i ← 1 to m do
13:   B_i ← C[e_i + 1]   /* choose the (e_i + 1)-th smallest available element */
14:   remove C[e_i + 1] from C
15: return B

Figure 5: Distribution of word lengths in the MontageLie benchmark.

Prompt used in step 1: Decompose $g$ into Events $E$

Break down the following paragraph into a list of independent events, listed in chronological order. Resolve all pronouns and referring expressions to their corresponding specific entities. Output only the event list and nothing else.
{{Paragraph}}

Prompt used in step 3: Incremental Lie Generation

Here is what has happened so far: {{CurrentParagraph}}
The following new fact occurred after the events described above: {{Event}}
Please append this new fact directly to the current paragraph. If the addition feels awkward, make only minimal word adjustments to ensure the paragraph flows smoothly—without adding extra narrative details or transitional phrases such as "next" or "following that." Output only the updated paragraph.

Prompt used in rephrasing with a different narrative technique

# Task: Rephrase
Rephrase a given paragraph by applying a different narrative sequencing technique.
Follow the steps below carefully:

## Step 1: Identify the Original Narrative Technique
Read the original paragraph and determine which of the following sequencing techniques it uses:
• Chronological Order — Events are presented strictly in the order they occurred.
• Flashback — The paragraph begins with a later or climactic moment, then shifts back to earlier events.
• Interjection — The main narrative is interrupted by a relevant insert such as a memory, reflection, or side story.
• Supplementary Narration — Contextual background is added to support understanding, even if the details weren't part of the original sequence.

## Step 2: Rephrase Using a Different Technique
Choose a different narrative sequencing method from the list above and rephrase the paragraph accordingly.

Guidelines for Rephrasing:
• Use as much of the original wording as possible.
• Do not add any new events or fabricate details not present in the original.
• Avoid ambiguous expression.

Please output the result in the following format:
• Original_Narrative_Technique: <original_narrative_technique>
• Choosed_Narrative_Technique: <choosed_narrative_technique>
• Rephrased: <rephrased_paragraph>

{{Paragraph}}

Table 5: Prompts used in the data construction process.

Prompt used in coarse-grained LLM-as-Evaluators

You will be given a source text. You will then be given one target text to be evaluated. Your task is to rate the information alignment of the target text against the source text. Please make sure you read and understand these instructions carefully. Please keep this source text open while reviewing, and refer to
it as needed.

Evaluation Criteria:
Consistency (1-5) - the information alignment between the target text and the source text. A consistent target text contains only statements that are entailed by the source text. Annotators were also asked to penalize target texts that contained hallucinated facts. 1 - worst, 5 - best.

Evaluation Steps:
1. Read the source text carefully and identify the main facts and details it presents.
2. Read the target text and compare it to the source text. Check if the target text contains any factual errors that are not supported by the source text.
3. Assign a score for consistency based on the Evaluation Criteria.

Note: only output the score for consistency, no other text.

Source Text: {{Source}}
Target Text: {{Target}}

Evaluation Form (scores ONLY):
- Consistency:

Table 6: The prompts used in benchmarking coarse-grained LLM-as-Evaluators.

Prompt used in Decomposer

Please analyze the following paragraph and extract all independent factual statements, categorized into two types: Event Facts and Descriptive Facts.

## Definitions:
- Event Facts: Time-dependent facts that describe specific actions, changes, occurrences, or emotional/mental states. These involve entities doing something or experiencing something dynamically at a particular point in time, and can be situated along a timeline. Examples: The spacecraft entered Mars' orbit after a six-month journey. Dr. Lin submitted her resignation. Mary felt happy about her promotion.
- Descriptive Facts: Time-independent facts that define, classify, or describe static attributes or relationships of entities. These do not occur at a specific time, and are considered stable or inherent properties. Examples: Helianthus is a genus in the daisy family Asteraceae. Octopuses have three hearts.

## Instructions:
1. Break down the paragraph into individual, self-contained factual statements.
2. Resolve all pronouns and referring expressions to their full entity names for clarity.
3.
Categorize each fact as either an Event Fact or a Descriptive Fact, according to the definitions above.
4. Output two separate lists:
- Event Facts List: List in chronological order.
- Descriptive Facts List: Order does not matter.

Paragraph: {{Paragraph}}
Event Facts List and Descriptive Facts List:

Table 7: The prompts used in the decomposer of DoveScore.

Prompt used in fact checker

Check if the fact is true based on the given context. Return True or False.
Context: {{Source}}
Fact: {{Fact}}
True or False? Output:

Table 8: The prompts used in the fact checker of DoveScore.

Prompt used in sorter

You are a helpful assistant that determines the correct chronological order of events in a paragraph. Do NOT add, remove, or change any events. Only reorder the exact events from the input list.

Example 1:
Paragraph: Tom woke up early. He brushed his teeth and then had breakfast. After that, he went for a run.
Events:
- Tom had breakfast
- Tom woke up
- Tom went for a run
- Tom brushed his teeth
Ordered Events: [Tom woke up, Tom brushed his teeth, Tom had breakfast, Tom went for a run]

Example 2:
Paragraph: After she went out for lunch, Sarah called her friend. Earlier in the morning, she had replied to a message
right after checking her email.
Events:
- Sarah checked her email
- Sarah went out for lunch
- Sarah called her friend
- Sarah replied to a message
Ordered Events: [Sarah checked her email, Sarah replied to a message, Sarah went out for lunch, Sarah called her friend]

Now sort the following events based on the paragraph below, and return as a list of events:
Paragraph: {{Paragraph}}
Events: {{Events}}
Ordered Events:

Table 9: The prompts used in the sorter of DoveScore.

Figure 6: Distribution of the number of events decomposed in Step 1 of Montage-Style Lie Generation: Decompose g into Events E.

Figure 7: Distribution of the inversion count sampled in Step 2 of Montage-Style Lie Generation: Shuffle E with Controlled Difficulty.

Figure 8: Violin plots of scores obtained by coarse-grained evaluators on MontageLie. Panels: (a) ROUGE-1, (b) ROUGE-2, (c) ROUGE-L, (d) Llama-3.2-instruct-1B, (e) Llama-3.2-instruct-3B, (f) Llama-3.1-instruct-8B, (g) Llama-3.3-instruct-70B, (h) Qwen-3-1.7B, (i) Qwen-3-4B, (j) Qwen-3-8B, (k) Qwen-3-14B, (l) Qwen-3-32B, (m) gpt-4o-mini.

Figure 9: Violin plots of scores obtained by fine-grained evaluators on MontageLie. Panels: (a) SummaC-ZS, (b) SummaC-Conv, (c) AlignScore, (d) FactScore, (e) DoveScore (ours).

Figure 10: Violin plots of the subscores obtained by DoveScore on MontageLie. Panels: (a) Event Score S_E, (b) Event Order Score S_EO, (c) Descriptive Score S_D.
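To make the shuffle procedure of Algorithm 1 (Appendix B.1) concrete, here is a minimal Python sketch. It follows the three steps described in the appendix; the function names are illustrative and not taken from the authors' implementation.

```python
import random

def shuffle_with_inversions(a, k):
    """Return a random permutation of `a` with exactly `k` inversions.

    Step 1: start from the inversion sequence of the fully reversed list.
    Step 2: randomly decrement entries until the total equals k.
    Step 3: decode the inversion sequence (a Lehmer code) into a permutation.
    """
    m = len(a)
    assert 0 <= k <= m * (m - 1) // 2
    # Step 1: maximum possible inversions at each position (fully reversed list).
    e = [m - 1 - i for i in range(m)]
    # Step 2: shed surplus inversions at random positions.
    delta = m * (m - 1) // 2 - k
    while delta > 0:
        i = random.randrange(m)
        if e[i] > 0:
            e[i] -= 1
            delta -= 1
    # Step 3: position i takes the (e[i]+1)-th smallest remaining element,
    # so it forms exactly e[i] inversions with later elements; the total is k.
    remaining = sorted(a)
    return [remaining.pop(e[i]) for i in range(m)]

def count_inversions(b):
    """Number of pairs (i, j) with i < j and b[i] > b[j]."""
    return sum(b[i] > b[j] for i in range(len(b)) for j in range(i + 1, len(b)))
```

Because the decoding step realises the inversion sequence exactly, `count_inversions(shuffle_with_inversions(lst, k))` always equals `k`, which is what lets MontageLie control the "difficulty" of each montage-style lie.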
arXiv:2505.15795v1 [cs.CL] 21 May 2025

Reverse Engineering Human Preferences with Reinforcement Learning

Lisa Alazraki∗ (Imperial College London), Tan Yi-Chern (Cohere), Jon Ander Campos (Cohere), Maximilian Mozes (Cohere), Marek Rei (Imperial College London), Max Bartolo (Cohere)

Abstract

The capabilities of Large Language Models (LLMs) are routinely evaluated by other LLMs trained to predict human preferences. This framework, known as LLM-as-a-judge, is highly scalable and relatively low cost. However, it is also vulnerable to malicious exploitation, as LLM responses can be tuned to overfit the preferences of the judge. Previous work shows that the answers generated by a candidate-LLM can be edited post hoc to maximise the score assigned to them by a judge-LLM. In this study, we adopt a different approach and use the signal provided by judge-LLMs as a reward to adversarially tune models that generate text preambles designed to boost downstream performance. We find that frozen LLMs pipelined with these models attain higher LLM-evaluation scores than existing frameworks. Crucially, unlike other frameworks which intervene directly on the model's response, our method is virtually undetectable. We also demonstrate that the effectiveness of the tuned preamble generator transfers when the candidate-LLM and the judge-LLM are replaced with models that are not used during training. These findings raise important questions about the design of more reliable LLM-as-a-judge evaluation settings. They also demonstrate that human preferences can be reverse engineered effectively, by pipelining LLMs to optimise upstream preambles via reinforcement learning, an approach that could find future applications in diverse tasks and domains beyond adversarial attacks.
∗Work done while at Cohere. Correspondence to lisa.alazraki20@imperial.ac.uk. Preprint. Under review.

1 Introduction

The LLM-as-a-judge framework has to a large extent replaced human evaluation in the large-scale assessment of LLMs [4, 38, 2, 15, 31, 36], as several widely used benchmarks now rely on this form of evaluation to judge model performance across various tasks [24, 17, 18, 26, 35, 36]. Judge-LLMs are trained to predict human preferences, which makes them a scalable, lower-cost alternative to human annotations [24, 36]. On the other hand, previous work has shown that judge-LLMs are vulnerable to adversarial attacks aimed at artificially boosting their scores [28, 30, 34]. In particular, it is possible to find text sequences that, once appended to, or substituted for, a response, maximise the score awarded to it by a judge [28, 30, 37]. This type of attack intervenes post hoc on the text being evaluated and can thus be detected via human inspection or by computing the perplexity (PPL) of the modified response [16, 28, 30].

In this study, we investigate a different, novel approach based on reward modelling w.r.t. the judge-LLM's evaluation scores, testing both its effectiveness and detectability. Specifically, we utilise such evaluation scores to tune an adversarial model that generates textual preambles² (i.e., additional sets of instructions) to be injected into a frozen candidate-LLM, so that the responses of the latter are awarded higher scores by the judge. The loss function that optimises the preamble generator is adapted from Contrastive Policy Gradient [10], but depends on rewards computed solely on the generations of the candidate-LLM and without directly observing the preambles. We refer to this technique of pipelining multiple LLMs to indirectly
optimise upstream preambles in an RL fashion as Reinforcement Learning for Reverse Engineering (RLRE).

There are several advantages to tuning an upstream preamble generator as opposed to directly overfitting the candidate-LLM to the judge-LLM's rewards: (1) the specialised preamble generator can be smaller in size, hence computationally cheaper to train; (2) tuning the preambles while leaving the candidate-LLM frozen retains the original capabilities of the latter and is less likely to result in noticeable stylistic changes in its output, thus potentially making the attack harder to detect; (3) the generated preambles that align the candidate-LLM to the judge are natural language instructions, which can be analysed and interpreted; (4) once trained, the preamble generator has the potential to be used as a plug-and-play component and be pipelined with different candidate-LLMs (we experiment with the ability to transfer generated preambles across candidate-LLMs in Section 5.1).

Indeed, we find that responses produced by candidate-LLMs pipelined with the preamble generator receive from judge-LLMs evaluation scores that are higher, often substantially, than responses attacked with other strategies, while also eluding detection methods. In contrast, existing attacks can be detected using perplexity analysis [16] or human evaluation. Finally, we perform an analysis of the optimal preambles and find high variability in their fluency across different model pipelines. While natural language preambles enhance interpretability, our findings raise important questions about whether constraining conditioning tokens, such as preambles or reasoning tokens, to the manifold of natural language may inadvertently limit model capabilities.

This paper makes the following main contributions:

1. We show that an adversarially tuned preamble generator, pipelined with a frozen LLM, is effective at deceiving judge-LLMs into assigning higher scores.
To the best of our knowledge, this is the first work that optimises preambles to be injected into a frozen LLM for this purpose. In contrast, previous studies focus on finding text sequences to be appended to pre-generated responses.

2. We demonstrate that our adversarial preamble generator can be successfully pipelined with candidate-LLMs and judge-LLMs not seen during training.

3. We show that our attack does not increase PPL scores and is rarely flagged by human evaluators. Hence, it cannot be detected using existing safeguards.

4. We observe variations in optimal preamble style, fluency and naturalness across models, suggesting that conditioning LLMs on human-readable sequences only (for example in preambles or reasoning traces) may be overly restrictive from a performance perspective.

5. Our work highlights intrinsic vulnerabilities in the LLM-as-a-judge paradigm and calls into question its reliability and robustness.

6. More broadly, this work introduces RLRE, a novel approach that pipelines LLMs to optimise upstream textual preambles in a reinforcement learning setting. While here we use RLRE to reverse engineer human preferences with the aim of boosting LLM-as-a-judge evaluation, we postulate that this method could be paired with different downstream rewards to optimise preambles for a variety of applications beyond just adversarial attacks, including but not limited to meaningful tasks such as reducing toxicity or mitigating bias.

2 Related Work

Prior work investigating the robustness of LLM-as-a-judge has found that this approach suffers from multiple inherent biases. Existing research has sought to exploit these biases more or less directly
by crafting adversarial attacks aimed at maximising the scores assigned by a judge-LLM to candidate responses.

²In this context, preambles are also known as system prompts.

Figure 1: Reinforcement Learning for Reverse Engineering (RLRE) pipeline for training a preamble generator. Given a question q from a training set, we prepend to it a general instruction and feed it to the preamble generator π. In order for π to learn the policy, we sample two preambles per question, p and p′. The respective rewards are obtained by appending q to p and p′, respectively, and (i) passing each as input to the candidate-LLM, which generates the responses c and c′, and (ii) having the judge-LLM evaluate each question-response pair and extracting the respective numerical rewards from these evaluations. The loss function that optimises π depends on the delta between the rewards R(q, c) and R(q, c′).

Biases. Judge-LLMs are not unbiased evaluators. [22] and [25] observe that judge-LLMs prefer their own generations to those of other models in the large majority of cases. [32] further show that when asked to choose the best out of multiple responses, GPT models favour the first candidate displayed, regardless of its quality. Additionally, [3] find that LLMs tasked with scoring other models prefer visually appealing generations regardless of content, and generations that are attributed, even falsely, to authority figures. Similarly, [34] conclude that LLM judgment is vulnerable to spurious attributions ('authority bias'). They additionally observe that judge-LLMs tend to prefer longer responses ('verbosity bias'), responses falsely identified as majority beliefs ('bandwagon-effect bias'), and responses presented as the result of a refinement process ('refinement-aware bias').

Adversarial attacks.
[34] show that the biases of judge-LLMs can be manipulated to artificially boost their evaluation scores. They append specific text sequences to candidate responses: a false book citation to exploit authority bias, a sentence stating that most people favour that response to exploit bandwagon-effect bias, or a piece of text suggesting the response has been through a refinement process, to leverage refinement-aware bias. In each case, they find that the resulting responses are evaluated more favourably by all or some of the judge-LLMs. They also show that evaluation is affected when the length of a response is increased, without any improvement in its quality. [28] take this type of adversarial attack further, tuning a universal text sequence that, when appended to a pre-generated response, increases its evaluation score. This sequence is found by searching sequentially through the vocabulary, choosing and adding each time the word that maximises the average reward of the judge on the training set. Their method is successful at inflating LLM judgement on Topical-Chat [12] and SummEval [9], and they show the attack transfers to new judge-LLMs that were not used in the search process. Rather than tuning a universal phrase, [30] train a sample-specific text sequence to be selected more often by a judge in pairwise comparison.
Similarly to [28], they append the tuned sequence to a pre-generated LLM response. Finally, [37] experiment with replacing responses with fixed instructions that invalidate the original LLM-as-a-judge prompt. Note that [28], [30] and [37] all observe that their attack can in large part be detected by measuring the perplexity of the responses. As the attack intervenes directly on the response and alters it, attacked responses tend to display higher PPL. Unlike the above methods, the attack we propose does not modify the generated text post hoc. This makes its detection substantially more difficult.

3 Method

Given a training dataset of questions $\mathcal{D} = \{(q_j)_{1 \leq j \leq N}\}$ and a fixed instruction prompt $i$, we aim to train a preamble generator $\pi_\theta(p_j \mid i, q_j)$ to generate textual preambles $p$ conditioned on $i$ and $q$. We formalize the RL problem as:

$$J(\pi_\theta) = \mathbb{E}_{(i,q)\sim\mathcal{D}}\, \mathbb{E}_{p\sim\pi_\theta(p\mid i,q)}\, \mathbb{E}_{c\sim \mathrm{LLM}_C(c\mid p,q)}\left[R(q, c)\right]$$

where $\mathrm{LLM}_C$ is a frozen LLM, referred to as the candidate-LLM, which takes a preamble $p_j$ and the corresponding question $q_j$ and outputs a candidate response $c_j$. Note that the reward is a function of the preamble because $\mathrm{LLM}_C$ is conditioned on it. Our reward model is a frozen LLM that outputs a verbal critique followed by a numerical score, as in the LLM-as-a-judge framework. We refer to this model as the judge-LLM. In our case, the score output by the judge is discrete on the 1–10 scale, elicited using MT-Bench [36] prompts for single (as opposed to pairwise) evaluation. We use this numerical score as the reward in our training pipeline. Figure 1 illustrates the training pipeline in detail.

In order to optimize the RL problem we adapt Contrastive Policy Gradient (CoPG) [10]. The rationale behind this choice, as well as a comparison between CoPG and other RL algorithms, are further elaborated in Appendix A. For a pair of two sampled preambles $p_j$ and $p'_j$ we introduce the following sampling loss:

$$\mathcal{L}(p_j, p'_j; \pi) = \left( R(q_j, c_j) - R(q_j, c'_j) - \beta\left( \ln\frac{\pi(p_j \mid i, q_j)}{\pi_{\mathrm{ref}}(p_j \mid i, q_j)} - \ln\frac{\pi(p'_j \mid i, q_j)}{\pi_{\mathrm{ref}}(p'_j \mid i, q_j)} \right) \right)^2.$$

$\pi_{\mathrm{ref}}$ is a reference model used for regularising the RL problem, which we set to be the base LLM underlying the preamble generator. As in [10], $\beta$ is a hyperparameter regulating the importance of the sequence log-likelihoods in the overall loss.

As $\pi$ is trained to generate preambles that are question-specific, we pipeline it with the candidate-LLM at test time to dynamically generate responses to new questions. We prompt the preamble generator with the same fixed instruction $i$ used during training, together with a question $q$, to generate the response $c$. A judge-LLM is then prompted to assign a score to $c$.

As an additional consideration, it is worth noting that [10] investigate CoPG solely in an offline manner. However, they hypothesise that the method should also scale to the online setting. To the best of our knowledge, this work is the first to successfully apply a similar method to online learning.

4 Experiments

4.1 Models and Hyperparameters

To test the generalisability of our method, we use LLMs from both the Command³ and the Llama 3.1 [13] model series. We train and test three distinct pipelines, illustrated in Table 1. Note
that all pipelines are trained with the same judge-LLM, i.e. Command R+ prompted as in [36]. We tune the Command R7B preamble generators on a Google Cloud TPU v5e containing 64 chips. We train the Llama 3.1 8B Instruct preamble generator on a single Nvidia H100 GPU. Candidate- and judge-LLMs from the Command family are accessed via API. The Llama 3.1 70B Instruct candidate-LLM is deployed and queried on a local server. Due to compute limitations, we perform all hyperparameter tuning on the Command R7B+R7B pipeline, and apply the same hyperparameters to the other two. As shown in Section 4.4, the Command R7B+R7B pipeline attains the greatest performance improvements over the baselines, and it is therefore likely that additional hyperparameter tuning would further raise the scores obtained by the Command R7B+R and Llama 8B+70B pipelines. This may be especially true for the latter, which comprises a family of models different from those used for hyperparameter tuning. All hyperparameters, API model IDs, and training process details are given in Appendix B.

4.2 Datasets

We test all pipelines on MT-Bench [36], which consists of 160 open-ended questions, split among two conversational turns, grounded in the following domains: writing, roleplay, reasoning, math, coding, extraction, STEM, and humanities. We choose this benchmark since it is established and widely used in the assessment of LLMs. Crucially, MT-Bench includes an evaluation strategy for independent judgement as opposed to pairwise comparison with other models, while other popular benchmarks

3 https://cohere.com/command

Table 1: The training pipelines include models of different size and family as preamble generators and/or candidate-LLMs. Command R+ (104B parameters) is used as the judge-LLM in all pipelines.
Pipeline identifier | Preamble generator | Candidate-LLM
Command R7B+R7B | Command R7B | Command R7B
Command R7B+R | Command R7B | Command R (35B)
Llama 8B+70B | Llama 3.1 8B Instruct | Llama 3.1 70B Instruct

such as [19] and [18] only allow for the latter type of assessment. Additionally, MT-Bench tests a balanced distribution of diverse skills on challenging multi-turn questions [36], while, e.g., [19] and [18] have been found to be less challenging, less diverse and carry a higher response length bias [20, 8]. It is also worth noting that MT-Bench represents the setup we target in our approach, which makes it suitable to demonstrate that it is possible to reverse engineer human feedback in a controlled setting. Since MT-Bench does not comprise a training set, we fine-tune and validate the preamble generators using questions from UltraFeedback [6] (using MT-Bench prompts for single evaluation to elicit the downstream rewards). The questions in UltraFeedback are extracted from ShareGPT [5], FLAN [23], Evol-Instruct [33], UltraChat [7], FalseQA [14], and TruthfulQA [21]. Collectively, these datasets cover a wide range of topics, and the domains in MT-Bench are represented in UltraFeedback. The distribution of the different tasks within the training data is further analysed in Appendix C.2.

4.3 Baselines

We compare the evaluation scores assigned to candidate-LLMs attacked with the preamble generator with those given to candidates that have not been attacked. Additionally, we compare against existing methods that exploit
vulnerabilities in the LLM-as-a-judge framework to artificially boost their evaluation scores. We describe these additional baselines below. Note that all of them modify responses pre-generated using the non-attacked model.

Verbosity bias attack. We ask the candidate-LLM to increase the length of a pre-generated response. To this end, we use the prompt designed by [34] to lengthen the response without necessarily improving its quality.

Bandwagon-effect bias attack. We append to each pre-generated response a text sequence stating that a high percentage of people think that response should be awarded the highest rating. Consistent with [34], we randomly choose percentages between 60% and 90%.

Authority bias attack. Using the same prompting strategy as [34], we ask the candidate-LLM to invent a plausible book source for a pre-generated response, given a citation template. The citation is then appended to the response to increase its perceived authority.

Refinement-aware bias attack. Given a pre-generated response, we have the candidate-LLM polish it using the refinement prompt in [34]. The judge is then presented with the original response, followed by the refinement prompt and the new, polished response.

Universal adversarial attack. We append to each pre-generated response the universal adversarial attack phrase in [28], learned for absolute assessment when attacking the Topical-Chat [12] overall score. Similarly to UltraFeedback and MT-Bench, Topical-Chat encompasses a wide range of topics. Additionally, the questions in Topical-Chat are multi-turn as in MT-Bench. Note that learning a new universal phrase on our UltraFeedback training set is computationally infeasible: for n training samples, the exhaustive search method in [28] computed over the 20k-word vocabulary makes 20000 × 4 × n calls to the judge-LLM. In our case, that would equal 4 × 10^9 inference calls.
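To make the scale of that search concrete, the call count can be computed directly. Note that the training-set size n = 50,000 used below is an assumption implied by the 4 × 10^9 figure, not a number reported explicitly here:

```python
# Cost of the exhaustive universal-phrase search from [28]: every
# candidate word in the 20k vocabulary is scored with 4 judge-LLM
# calls per training sample.
VOCAB_SIZE = 20_000
CALLS_PER_SAMPLE = 4

def judge_calls(n_train: int) -> int:
    """Total judge-LLM inference calls needed for n_train training samples."""
    return VOCAB_SIZE * CALLS_PER_SAMPLE * n_train

# n = 50_000 is an assumed value consistent with the 4e9 total quoted above.
print(f"{judge_calls(50_000):,}")  # prints 4,000,000,000
```

At typical API latencies and prices, a search of this size is indeed out of reach, which motivates the learned, question-conditioned preambles instead.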
4.4 Results

Table 2 shows the average evaluation scores obtained by three candidate-LLMs, in all baseline settings and when pipelined with our adversarial preamble generators.

Table 2: MT-Bench evaluation scores assigned by the Command R+ judge-LLM to candidate-LLMs attacked with different strategies. Each setup is run five times and the scores are averaged (the standard deviation is shown after the ± sign) to account for small variations due to temperature sampling.

Candidate-LLM | Turn | No attack | Verbosity | Bandwagon | Authority | Refinement | Universal | Preambles
Command R7B | Turn 1 | 7.60±0.07 | 7.33±0.08 | 7.47±0.05 | 7.51±0.05 | 7.78±0.05 | 7.58±0.06 | 8.21±0.07
Command R7B | Turn 2 | 6.99±0.08 | 7.29±0.02 | 7.17±0.08 | 7.29±0.10 | 7.45±0.08 | 7.25±0.03 | 7.66±0.09
Command R7B | Overall | 7.29±0.08 | 7.31±0.05 | 7.32±0.06 | 7.40±0.07 | 7.61±0.06 | 7.41±0.04 | 7.93±0.08
Command R | Turn 1 | 8.09±0.08 | 7.99±0.10 | 7.98±0.06 | 8.17±0.08 | 8.10±0.05 | 8.10±0.06 | 8.45±0.07
Command R | Turn 2 | 7.57±0.12 | 7.73±0.08 | 7.72±0.14 | 7.65±0.06 | 7.81±0.05 | 7.75±0.09 | 7.92±0.03
Command R | Overall | 7.83±0.10 | 7.86±0.09 | 7.85±0.10 | 7.91±0.07 | 7.95±0.05 | 7.92±0.07 | 8.18±0.05
Llama 3.1 70B Instruct | Turn 1 | 8.47±0.08 | 8.29±0.06 | 8.39±0.05 | 8.38±0.07 | 8.51±0.07 | 8.50±0.05 | 8.56±0.08
Llama 3.1 70B Instruct | Turn 2 | 7.64±0.06 | 7.49±0.08 | 7.65±0.08 | 7.62±0.07 | 7.75±0.09 | 7.85±0.07 | 7.88±0.08
Llama 3.1 70B Instruct | Overall | 8.06±0.07 | 7.89±0.07 | 8.02±0.07 | 8.00±0.07 | 8.13±0.08 | 8.17±0.06 | 8.22±0.08

Figure 2: Average scores per question type obtained by candidate-LLMs using a refinement-aware bias attack, [28]'s universal adversarial attack, and the adversarial preamble generator. (Radar plots over writing, roleplay, reasoning, math, coding, extraction, STEM, and humanities; panels: (a) Command R7B, (b) Command R, (c) Llama 3.1 70B Instruct.)

Models injected with adversarial preambles consistently obtain higher evaluations from the Command R+ judge at both question turns, with the Command R7B+R7B pipeline (i.e., the one on which we perform hyperparameter tuning) achieving the most substantial improvements. We also find that turn 1 responses benefit the most from the preambles. Note that in MT-Bench, turn 2 questions require adjusting turn 1 answers according to new constraints. This type of task is not well represented in UltraFeedback, and can thus be considered OOD w.r.t. the training data (see also Appendix C.2). Nevertheless, the tuned preamble generators are still effective at raising turn 2 scores at test time.

We observe that the strongest baselines across candidate-LLMs are given by the refinement-aware bias attack and [28]'s universal adversarial attack. On average, the adversarial preambles raise Command R7B's overall score, measured on a 1–10 continuous scale, by 0.32 above the refinement-aware bias attack and 0.52 above the universal attack. They also result in an overall score that is 0.64 higher than the non-attacked model's. For Command R, the corresponding increases are +0.23, +0.26 and +0.35, respectively. Finally, injecting adversarial preambles into Llama 3.1 70B Instruct raises its overall score by 0.16 above the non-attacked model, 0.09 above the refinement-aware attack and 0.05 above the universal attack.

In Figure 2, we illustrate the evaluation scores per question type for the two best-performing baselines (refinement-aware bias and universal adversarial attack) and the adversarial preamble generator.
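The improvement figures quoted above follow directly from the Overall rows of Table 2; a quick arithmetic check (scores copied from the table, names chosen for illustration):

```python
# Overall MT-Bench scores from Table 2 (judge: Command R+).
overall = {
    "Command R7B": {"no_attack": 7.29, "refinement": 7.61, "universal": 7.41, "preambles": 7.93},
    "Command R": {"no_attack": 7.83, "refinement": 7.95, "universal": 7.92, "preambles": 8.18},
    "Llama 3.1 70B Instruct": {"no_attack": 8.06, "refinement": 8.13, "universal": 8.17, "preambles": 8.22},
}

for model, s in overall.items():
    # Gain of the preamble attack over each comparison point.
    gains = {k: round(s["preambles"] - v, 2) for k, v in s.items() if k != "preambles"}
    print(model, gains)
# Command R7B {'no_attack': 0.64, 'refinement': 0.32, 'universal': 0.52}
# Command R {'no_attack': 0.35, 'refinement': 0.23, 'universal': 0.26}
# Llama 3.1 70B Instruct {'no_attack': 0.16, 'refinement': 0.09, 'universal': 0.05}
```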
Overall, the adversarially tuned preambles are most effective at raising the scores of reasoning and math responses, followed by extraction, STEM and roleplay. Standard deviations for these results are shown in Appendix D.

Table 3: Candidate transferability (a) and judge transferability (b) of our preamble-based attack on MT-Bench. All scores are averaged over five runs.

(a) Candidate transferability
Candidate-LLM | Preambles from Command R7B+R7B | Preambles from Command R7B+R | Preambles from Llama 8B+70B
Command R7B | 7.93±0.08 | 7.68±0.08 | 7.40±0.10
Command R (35B) | 8.01±0.09 | 8.18±0.05 | 7.97±0.09
Llama 3.1 70B Instruct | 8.21±0.05 | 8.19±0.08 | 8.22±0.08

(b) Judge transferability
Attack type | GPT-3.5 score
No attack | 7.58±0.08
Verbosity | 7.36±0.09
Bandwagon | 7.47±0.04
Authority | 7.48±0.09
Refinement | 7.71±0.11
Universal | 7.33±0.10
Preambles | 8.07±0.07

5 Analysis

5.1 Attack Transferability

We investigate the transferability of our preamble attack across different candidate- and judge-LLMs.

Transferability across candidate-LLMs. At test time, we pipeline each candidate-LLM with adversarial preamble generators that were tuned in conjunction with a different candidate. As shown in Table 3(a), the judge's evaluation scores remain higher than all baselines (shown in Table 2) in all cases except for Command R7B pipelined with the preamble generator of the Llama pipeline, whose scores align with [28]'s universal attack. This demonstrates that the adversarial preamble attack has a high degree of transferability across different candidate-LLMs.

Transferability across judge-LLMs. In a
real-world scenario, it may not be possible to know in advance which judge-LLM the attack will target. It is therefore important to establish whether our method transfers to judges not used for training. In Table 3(b), we show the results of having a different judge-LLM evaluate candidate responses in all attack settings. We utilise for this purpose the most capable judge-LLM employed by the authors of our universal attack baseline, i.e., GPT-3.5 [28]. For cost efficiency, we limit this analysis to Command R7B. Note that the adversarial preamble generator was trained with rewards output by a Command R+ judge, yet GPT-3.5 assigns the highest scores to responses produced with the aid of those preambles. Indeed, our attack results in an overall score that is 0.36 above the best-performing baseline (i.e., the refinement-aware bias attack). The magnitude of the improvement is similar to that obtained when attacking Command R+. This supports the hypothesis that the attack can be carried out with equal effectiveness without knowledge of the judge-LLM being targeted.

Figure 3: Proportion of LLM responses that have been labelled as ‘attacked’ by human evaluators. Responses generated using adversarial preambles are identified as attacked nearly as rarely as those produced by a non-attacked model. (Bar chart over attack types: No attack, Verbosity, Bandwagon, Authority, Refinement, Universal, Preambles (ours); y-axis from 0 to 1.)

Table 4: False negative rate (FNR) of PPL-W for each attack type. Verbosity bias, bandwagon bias and our preamble-based attack are rarely detected with this method (FNR ≥ 0.90). The universal adversarial attack [28] is almost always detected (FNR = 0.04).

Attack type | PPL-W (FNR)
Verbosity | 0.91
Bandwagon | 0.93
Authority | 0.88
Refinement | 0.66
Universal | 0.04
Preambles | 0.90

5.2 Attack Detectability

We examine the detectability of all attacks with two distinct strategies: perplexity analysis and human evaluation of the candidate responses.

Perplexity analysis.
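The filter used here scans a response's token-level perplexity with a sliding window. A minimal sketch, assuming per-token log-probabilities are available from a scoring language model (function names and the calibration comment are illustrative, not the paper's implementation):

```python
import math

def max_window_ppl(token_logprobs, window=10):
    """Highest perplexity over any contiguous window of token log-probs."""
    n = len(token_logprobs)
    worst = 0.0
    for start in range(max(1, n - window + 1)):
        chunk = token_logprobs[start:start + window]
        # Perplexity of the window: exp of the mean negative log-probability.
        worst = max(worst, math.exp(-sum(chunk) / len(chunk)))
    return worst

def ppl_w_flags_attack(token_logprobs, threshold):
    """Label a response as 'attacked' if any window exceeds the threshold.

    In practice the threshold would be calibrated on non-attacked
    responses so that the false positive rate stays within 0.05."""
    return max_window_ppl(token_logprobs) > threshold
```

A response whose preamble-induced text remains fluent keeps every window's perplexity low, which is why this filter rarely catches the preamble-based attack while it almost always catches the appended universal phrase.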
We use a perplexity filter over a 10-token sliding window as in [16], to identify whether the candidate-LLM has been attacked. A response is labelled as attacked if the PPL of any subsequence in the sliding window is greater than a threshold t. We set t to ensure a false positive rate within 0.05 on the test set responses produced by the non-attacked model. Table 4 illustrates the results of this detection method, referred to as PPL-W, for the Command R7B candidate.

Human evaluation. To assess the detectability of each attack by visual inspection, we recruit seven annotators with domain expertise and knowledge of the possible attacks. We sample a balanced subset of 175 test responses generated by Command R7B under each attack setup (including the non-attacked model), randomly split it into seven equal-sized portions, and have each annotator assign a binary label (‘attacked’ or ‘not attacked’) to each generation. In Figure 3, we show the proportion of responses labelled as ‘attacked’ for each attack type. Attacks that insert formulaic sequences into the response (bandwagon bias, authority bias, refinement-aware bias and universal attack) are detected in all or most cases (100%, 92%, 88% and 92% respectively), while the verbosity bias attack is detected approximately half of the time (48%). Notably, responses produced by the preamble-based attack are labelled as ‘attacked’ nearly
as rarely (20%) as those generated by the non-attacked model (16%). We also measure the rate at which each annotator is able to correctly detect each type of attack (both in terms of accuracy and F1). For both scores, we thus compute the median absolute deviation (MAD) across all annotators for each attack type, and find it to be zero in all cases, indicating that the annotators are able (or unable) to detect each attack at very similar rates. This supports the hypothesis that different attacks are inherently (un)detectable by visual inspection, as we do not find high variances in detection rates between different annotators. 5.3 Ablations In our framework, we input into the preamble generator a fixed instruction concatenated with a question from the dataset. As a result, each preamble is tailored to a specific question. To evaluate to what degree this prompting strategy contributes to the effectiveness of our attack, we run an ablation study where (i) we remove the question from the input and feed the preamble generator only a generic instruction, and (ii) we discard the instruction altogether and only input special tokens to signal the start of a generation turn (both prompts are shown in Appendix E). In the latter case, the model is virtually unconstrained and may generate any text string, and thus the training signal is even more consequential in determining the content of the preambles. Table 5 illustrates the overall scores assigned by the Command R+ judge under all ablated settings. Even though both ablation strategies result in slightly lower performance than the corresponding non-ablated pipelines, all the scores remain above all baselines (refer to Table 2 for the baseline results). This demonstrates the robustness of the attack under different prompting strategies. 5.4 Analysis of Generated Texts Do successful preambles exhibit common patterns? 
Upon analysing the successful preambles produced by each tuned generator, we find consistency among preambles generated by the same model, but high variability across different models. While preambles generated by the Command models (both conditioned on the data point and conditioned on a generic instruction) share similar structure (in that they provide a blueprint for how to design an answer), preambles that are only conditioned on the start-of-turn token, as well as all the Llama preambles, deviate substantially from this pattern. The latter tend to reiterate the same phrases multiple times, and often do not appear fluent to a human reader. When they are not conditioned on an instruction, they even devolve into apparently meaningless sequences of characters. Nevertheless, all of these preambles are successful at raising LLM evaluation scores, as evidenced in Section 5.3.

Table 5: Ablated MT-Bench overall scores obtained by (i) removing the question and feeding a generic instruction to the preamble generator, and (ii) removing the instruction and only feeding special tokens to signal the start of the turn. Scores are assigned by a Command R+ judge and are averaged over five runs.

Pipeline | No question | No instruction
Command R7B+R7B | 7.76±0.07 | 7.70±0.07
Command R7B+R | 8.14±0.08 | 8.13±0.13
Llama 8B+70B | 8.19±0.11 | 8.18±0.12

Appendix G shows representative preambles for each
pipeline. It is worth noting that, as further discussed in Appendix B, we assign relatively low weighting to the log-likelihoods of the adversarial preambles within the loss function, thus prioritising their rewards over their fluency during training. Remarkably, the training still converges toward preambles that elicit high-reward candidate responses, regardless of their fluency. This suggests that grounding preambles—or other conditioning tokens such as reasoning tokens—to the manifold of natural language may not always be the optimal strategy. Are attacked responses more accurate? In Figure 2 we show that the preamble-based attack is particularly effective at raising the scores assigned to math and reasoning responses. Since these responses are evaluated by the judge-LLM against a ground truth [ 36], this raises the question of whether the attack has effectively made the responses more accurate. We thus evaluate all math and reasoning responses given by the non-attacked Command R7B model, as well as the version attacked with the preamble generator, and find that the accuracy rates are identical (0.25 in both cases). Hence, the accuracy of the final answer is not improving due to the preambles. On the other hand, as shown by the representative examples in Appendix H, the attacked responses have more structured reasoning chains, usually arranged into distinct paragraphs labelled with clear, explanatory headers. We thus postulate that the improved layout may be solely responsible for the higher scores assigned by the judge-LLM, regardless of correctness. 6 Conclusion We have shown that human preferences can be successfully reverse engineered to tune adversarial text preambles, and that injecting these preambles into a candidate-LLM constitutes a powerful attack on the LLM-as-a-judge framework. 
Not only is this attack more effective than previous methods at inflating the scores assigned by a judge-LLM to candidate responses, but it is also virtually undetectable using existing safeguards. In contrast, current strategies that intervene post hoc on the responses display high PPL and can be easily detected by visual inspection. Additionally, we have found that the attack transfers to candidate-LLMs not seen during training. Therefore, our adversarial preamble generator can be used as a plug-and-play component to artificially boost the scores of multiple LLMs after a single training process. Finally, we have shown that preambles tuned with one judge-LLM can be used to attack a different judge, from a different model family, with equal effectiveness. These findings raise important questions on the reliability of LLM-as- a-judge evaluation. In addition to pointing to future research directions for the design of more robust evaluation frameworks, this work introduces Reinforcement Learning for Reverse Engineering (RLRE), a novel strategy that combines LLMs to tune upstream textual preambles via reinforcement learning. While here we have shown its effectiveness in the context of adversarial attacks on LLM evaluation, future research can investigate other avenues of application for this approach (e.g., automatic generation of other types of attack, but also improvements of LLM output such as toxicity or bias mitigation), as well as different granularities of sequence adaptation (e.g., query-specific, task-specific, domain-specific), and the optimisation of tokens at different positions within an input sequence (e.g., post-query | https://arxiv.org/abs/2505.15795v1 |
instructions instead of pre-query preambles).

Broader Impacts

This work focuses on aligning candidate-LLMs to judge-LLMs by means of tuned preambles injected into the candidate, with the aim of obtaining inflated evaluations. While there is a chance that this strategy may be exploited by adversaries, it is of scientific interest to the community that such an attack is not only possible, but also particularly effective. With this knowledge, future work can focus on developing detection and defense methods. Nowhere in this work is the generation of harmful content sought or encouraged, and we have found no evidence in our experiments that the alignment process results in either the preambles or the candidate responses containing harmful text.

References

[1] A. Ahmadian, C. Cremer, M. Gallé, M. Fadaee, J. Kreutzer, O. Pietquin, A. Üstün, and S. Hooker. Back to basics: Revisiting REINFORCE-style optimization for learning from human feedback in LLMs. In L.-W. Ku, A. Martins, and V. Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12248–12267, Bangkok, Thailand, Aug. 2024. Association for Computational Linguistics.

[2] A. Bavaresco, R. Bernardi, L. Bertolazzi, D. Elliott, R. Fernández, A. Gatt, E. Ghaleb, M. Giulianelli, M. Hanna, A. Koller, A. F. T. Martins, P. Mondorf, V. Neplenbroek, S. Pezzelle, B. Plank, D. Schlangen, A. Suglia, A. K. Surikuchi, E. Takmaz, and A. Testoni. LLMs instead of human judges? A large scale empirical study across 20 NLP evaluation tasks. CoRR, abs/2406.18403, 2024.

[3] G. H. Chen, S. Chen, Z. Liu, F. Jiang, and B. Wang. Humans or LLMs as the judge? A study on judgement bias. In Y. Al-Onaizan, M. Bansal, and Y.-N. Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8301–8327, Miami, Florida, USA, Nov. 2024. Association for Computational Linguistics.

[4] C.-H. Chiang and H.-y. Lee.
Can large language models be an alternative to human evaluations? In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607–15631, Toronto, Canada, July 2023. Association for Computational Linguistics.

[5] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. https://lmsys.org/blog/2023-03-30-vicuna, March 2023.

[6] G. Cui, L. Yuan, N. Ding, G. Yao, B. He, W. Zhu, Y. Ni, G. Xie, R. Xie, Y. Lin, Z. Liu, and M. Sun. ULTRAFEEDBACK: Boosting language models with scaled AI feedback. In R. Salakhutdinov, Z. Kolter, K. Heller, A. Weller, N. Oliver, J. Scarlett, and F. Berkenkamp, editors, Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 9722–9744. PMLR, 21–27 Jul 2024.

[7] N. Ding, Y. Chen, B. Xu, Y. Qin, S. Hu, Z. Liu, M. Sun, and B. Zhou. Enhancing chat language models by scaling high-quality instructional conversations. In H. Bouamor, J. Pino, and K. Bali,
editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3029–3051, Singapore, Dec. 2023. Association for Computational Linguistics.

[8] Y. Dubois, P. Liang, and T. Hashimoto. Length-controlled AlpacaEval: A simple debiasing of automatic evaluators. In First Conference on Language Modeling, 2024.

[9] A. R. Fabbri, W. Kryściński, B. McCann, C. Xiong, R. Socher, and D. Radev. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391–409, 2021.

[10] Y. Flet-Berliac, N. Grinsztajn, F. Strub, E. Choi, B. Wu, C. Cremer, A. Ahmadian, Y. Chandak, M. G. Azar, O. Pietquin, and M. Geist. Contrastive policy gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion. In Y. Al-Onaizan, M. Bansal, and Y.-N. Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21353–21370, Miami, Florida, USA, Nov. 2024. Association for Computational Linguistics.

[11] M. Gheshlaghi Azar, Z. Daniel Guo, B. Piot, R. Munos, M. Rowland, M. Valko, and D. Calandriello. A general theoretical paradigm to understand learning from human preferences. In S. Dasgupta, S. Mandt, and Y. Li, editors, Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pages 4447–4455. PMLR, 02–04 May 2024.

[12] K. Gopalakrishnan, B. Hedayatnia, Q. Chen, A. Gottardi, S. Kwatra, A. Venkatesh, R. Gabriel, and D. Hakkani-Tür. Topical-Chat: Towards knowledge-grounded open-domain conversations. In Proceedings of Interspeech 2019, pages 1891–1895, 2019.

[13] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, et al. The Llama 3 herd of models, 2024.

[14] S. Hu, Y. Luo, H. Wang, X. Cheng, Z. Liu, and M. Sun. Won't get fooled again: Answering questions with false premises. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5626–5643, Toronto, Canada, July 2023. Association for Computational Linguistics.

[15] H. Huang, Y. Qu, J. Liu, M. Yang, and T. Zhao. An empirical study of LLM-as-a-judge for LLM evaluation: Fine-tuned judge models are
task-specific classifiers. CoRR, abs/2403.02839, 2024.
[16] N. Jain, A. Schwarzschild, Y. Wen, G. Somepalli, J. Kirchenbauer, P.-y. Chiang, M. Goldblum, A. Saha, J. Geiping, and T. Goldstein. Baseline defenses for adversarial attacks against aligned language models, 2023.
[17] S. Kim, J. Suk, J. Y. Cho, S. Longpre, C. Kim, D. Yoon, G. Son, Y. Cho, S. Shafayat, J. Baek, S. H. Park, H. Hwang, J. Jo, H. Cho, H. Shin, S. Lee, H. Oh, N. Lee, N. Ho, S. J. Joo, M. Ko, Y. Lee, H. Chae, J. Shin, J. Jang, S. Ye, B. Y. Lin, S. Welleck, G. Neubig, M. Lee, K. Lee, and M. Seo. The BiGGen bench: A principled benchmark for fine-grained evaluation of language models with language models. In L. Chiruzzo, A. Ritter, and L. Wang, editors, Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5877–5919, Albuquerque, New Mexico, Apr. 2025. Association for Computational Linguistics.
[18] T. Li, W. Chiang, E. Frick, L. Dunlap, T. Wu, B. Zhu, J. E. Gonzalez, and I. Stoica. From crowdsourced data to high-quality benchmarks: Arena-Hard and BenchBuilder pipeline. CoRR, abs/2406.11939, 2024.
[19] X. Li, T. Zhang, Y. Dubois, R. Taori, I. Gulrajani, C. Guestrin, P. Liang, and T. B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023.
[20] B. Y. Lin, Y. Deng, K. Chandu, F. Brahman, A. Ravichander, V. Pyatkin, N. Dziri, R. L. Bras, and Y. Choi. Wildbench: Benchmarking llms with challenging tasks from real users in the wild, 2024.
[21] S. Lin, J. Hilton, and O. Evans. TruthfulQA: Measuring how models mimic human falsehoods. In S. Muresan, P. Nakov, and A. Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[22] Y. Liu, N. Moosavi, and C. Lin. LLMs as narcissistic evaluators: When ego inflates evaluation scores. In L.-W. Ku, A. Martins, and V. Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 12688–12701, Bangkok, Thailand, Aug. 2024. Association for Computational Linguistics.
[23] S. Longpre, L. Hou, T. Vu, A. Webson, H. W. Chung, Y. Tay, D. Zhou, Q. V. Le, B. Zoph, J. Wei, and A. Roberts. The flan collection: designing data and methods for effective instruction tuning. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.
[24] S. Min, K. Krishna, X. Lyu, M. Lewis, W.-t. Yih, P. Koh, M. Iyyer, L. Zettlemoyer, and H. Hajishirzi. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In H. Bouamor, J. Pino, and K. Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore, Dec. 2023. Association for Computational Linguistics.
[25] A. Panickssery, S. R. Bowman, and S. Feng. LLM evaluators recognize and favor their own generations. In
The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[26] Y. Qin, K. Song, Y. Hu, W. Yao, S. Cho, X. Wang, X. Wu, F. Liu, P. Liu, and D. Yu. InFoBench: Evaluating instruction following ability in large language models. In L.-W. Ku, A. Martins, and V. Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 13025–13048, Bangkok, Thailand, Aug. 2024. Association for Computational Linguistics.
[27] R. Rafailov, A. Sharma, E. Mitchell, S. Ermon, C. D. Manning, and C. Finn. Direct preference optimization: your language model is secretly a reward model. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA, 2023. Curran Associates Inc.
[28] V. Raina, A. Liusie, and M. Gales. Is LLM-as-a-judge robust? investigating universal adversarial attacks on zero-shot LLM assessment. In Y. Al-Onaizan, M. Bansal, and Y.-N. Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7499–7517, Miami, Florida, USA, Nov. 2024. Association for Computational Linguistics.
[29] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
[30] J. Shi, Z. Yuan, Y. Liu, Y. Huang, P. Zhou, L. Sun, and N. Z. Gong. Optimization-based prompt injection attack to llm-as-a-judge. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security, CCS '24, pages 660–674, New York, NY, USA, 2024. Association for Computing Machinery.
[31] P. Verga, S. Hofstatter, S. Althammer, Y. Su, A. Piktus, A. Arkhangorodsky, M. Xu, N. White, and P. Lewis. Replacing judges with juries: Evaluating LLM generations with a panel of diverse models, 2024.
[32] P. Wang, L. Li, L. Chen, Z. Cai, D. Zhu, B. Lin, Y. Cao, L. Kong, Q. Liu, T. Liu, and Z. Sui. Large language models are not fair evaluators. In L.-W. Ku, A. Martins, and V.
Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9440–9450, Bangkok, Thailand, Aug. 2024. Association for Computational Linguistics.
[33] C. Xu, Q. Sun, K. Zheng, X. Geng, P. Zhao, J. Feng, C. Tao, Q. Lin, and D. Jiang. WizardLM: Empowering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations, 2024.
[34] J. Ye, Y. Wang, Y. Huang, D. Chen, Q. Zhang, N. Moniz, T. Gao, W. Geyer, C. Huang, P.-Y. Chen, N. V. Chawla, and X. Zhang. Justice or prejudice? quantifying biases in LLM-as-a-judge. In The Thirteenth International Conference on Learning Representations, 2025.
[35] Z. Zeng, J. Yu, T. Gao, Y. Meng, T. Goyal, and D. Chen. Evaluating large language models at evaluating instruction following. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[36] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. P. Xing, H. Zhang, J. E. Gonzalez, and I. Stoica. Judging LLM-as-a-judge with MT-bench
and chatbot arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA, 2024. Curran Associates Inc.
[37] X. Zheng, T. Pang, C. Du, Q. Liu, J. Jiang, and M. Lin. Cheating automatic LLM benchmarks: Null models achieve high win rates. In The Thirteenth International Conference on Learning Representations, 2025.
[38] L. Zhu, X. Wang, and X. Wang. JudgeLM: Fine-tuned large language models are scalable judges. In The Thirteenth International Conference on Learning Representations, 2025.

A Contrastive Policy Gradient

We base our RLRE framework on Contrastive Policy Gradient (CoPG) [10]. This choice is motivated by the following reasons. Firstly, unlike other reinforcement learning algorithms for optimising LLMs, CoPG is not constrained to preference-based rewards and can optimise any arbitrary reward, which suits our case since the judge-LLM assigns discrete ratings to the responses. Secondly, it has been shown in [10] that CoPG converges faster than other algorithms, e.g. Direct Preference Optimization (DPO) [27] and Identity Preference Optimisation (IPO) [11]. Finally, CoPG is substantially less computationally expensive than non-contrastive RL algorithms such as Proximal Policy Optimisation (PPO) [29], which additionally necessitate a critic model. CoPG simplifies the RL training process by requiring fewer models loaded in memory, and considers the entire generation as an action rather than each token [10, 1]. PPO, being more complex, is also more sensitive to hyperparameter tuning.

B Training and Inference Details

B.1 API Model Identifiers

We show below the name, provider and model ID of the frozen candidate- and judge-LLMs that are called via API within the training and inference pipelines.
• Command R, Cohere, command-r-08-2024
• Command R+, Cohere, command-r-plus-04-2024

B.2 Hyperparameters

For all three pipelines, we train with batch size 64, two gradient steps per batch, and a learning rate of 1e-6. We use the Adam optimiser. To sample preambles from the generator, we set t = 4.0, k = 1.0, and p = 1.0. We use a high temperature to ensure enough diversity between the preambles conditioned on the same question. At inference time, we reduce this sampling temperature to t = 0.5. The maximum length of each preamble is 512 tokens. We tune the loss function hyperparameter β to be relatively small (β = 0.03), as this value performs well on the UltraFeedback validation set. Note that β weights the sequence log-likelihoods within the CoPG loss (see also Section 3). Setting a low β value assigns greater importance to the reward than to the fluency of the preambles, thus placing fewer constraints on the preambles themselves. It is worth noting that the fluency of a preamble is distinct from, and not necessarily related to, the fluency of the final response output by the candidate model. Indeed, our experiments indicate that the candidate responses remain fluent, as shown by their low perplexity and by the fact that human inspection can hardly identify them as attacked (see Section 5.2). We train with early stopping according to validation performance. The best-performing checkpoints for each of the three pipelines are shown in Table 6. Table 6: Selected checkpoints for
each training pipeline.

Pipeline          Selected checkpoint step
Command R7B+R7B   600
Command R7B+R     1000
Llama 8B+70B      600

B.3 Training Times

We train the Command R7B+R7B pipeline and the Llama 8B+70B pipeline for 600 steps, and the Command R7B+R pipeline for 1k steps. In our specific setup, we rely on API/remote-server models for the reward, so there was a degree of variability in training time due to the response times of the server. When the server is healthy, this response time is negligible, but if a query is refused, subsequent retries are issued with increasing wait times. This makes the training time variable and dependent on the health of the remote server.

C Data

C.1 Dataset details

In Table 7 we provide the HuggingFace IDs, data splits used and licenses of the open-access datasets used for training and inference. Note that for all datasets we use the provided official splits.

Table 7: Details of training and testing datasets.
Dataset        ID                              Split Used   License
UltraFeedback  openbmb/UltraFeedback           Train        MIT
MT-Bench       HuggingFaceH4/mt_bench_prompts  Test         Apache-2.0

C.2 Task Distribution of Training and Testing Data

We train on UltraFeedback [6] and test on MT-Bench [36]. The latter comprises 160 questions equally split among two conversational turns and eight diverse topics: writing, roleplay, reasoning, math, coding, extraction, STEM, and humanities. The former is a large dataset of ∼60k samples that collects questions from several existing datasets (note that, unlike MT-Bench, all questions in UltraFeedback are single-turn). Most of the datasets represented in UltraFeedback comprise multiple different task types. In Table 8, we illustrate the task type distribution within the datasets that compose UltraFeedback, in relation to the tasks in MT-Bench. We observe that all MT-Bench task types are represented in UltraFeedback.
While some are represented in only one corpus (e.g., coding), others appear throughout multiple datasets (e.g., STEM, humanities, writing). Therefore, while our MT-Bench test set is balanced with respect to the tasks, the training set is unbalanced. While FalseQA [14] (which represents only 3.7% of the UltraFeedback dataset) contains questions that are not exactly represented in MT-Bench, solving them requires commonsense reasoning. Since MT-Bench contains commonsense tasks (particularly, the 'Reasoning' task), we postulate that FalseQA may have a degree of transferability to the test set data and choose to keep it in the training corpus.

Table 8: An overview of the datasets that compose UltraFeedback, and the MT-Bench task types that they include.
UltraFeedback dataset of origin   MT-Bench task types
ShareGPT [5]                      Writing, Roleplay, STEM, Humanities
FLAN-v2-NIV2 [23]                 Writing, Extraction
Evol-Instruct [33]                Coding
UltraChat [7]                     Writing, STEM, Humanities
FLAN-v2-CoT [23]                  Math, Reasoning
FalseQA [14]                      –
FLAN-v2-P3 [23]                   Extraction
TruthfulQA [21]                   STEM, Humanities
FLAN-v2-FLAN2021 [23]             Reasoning, STEM, Humanities

D Results by Question Type

Table 9 illustrates the MT-Bench results per question type of each model attacked with the best-performing baselines and our preamble-based attack.

Table 9: MT-Bench evaluation scores for each question type, assigned by the Command R+ judge-LLM to candidate-LLMs attacked with the refinement bias attack, the universal adversarial attack and the preamble-based attack. Each setup is run five times and the scores are
averaged (showing the standard deviation in parentheses) to account for small variations due to temperature sampling.

Candidate-LLM            Question type   Refinement   Universal    Preambles
Command R7B              Writing         8.45 (0.04)  8.71 (0.01)  8.61 (0.07)
                         Roleplay        8.60 (0.05)  8.65 (0.03)  8.69 (0.06)
                         Reasoning       5.63 (0.09)  4.90 (0.06)  5.89 (0.11)
                         Math            4.47 (0.10)  3.60 (0.07)  6.51 (0.12)
                         Coding          8.08 (0.09)  7.15 (0.09)  7.49 (0.09)
                         Extraction      8.01 (0.08)  8.41 (0.03)  8.41 (0.05)
                         STEM            8.93 (0.02)  8.96 (0.00)  9.00 (0.00)
                         Humanities      8.71 (0.01)  8.92 (0.01)  8.88 (0.02)
Command R                Writing         8.45 (0.04)  8.76 (0.02)  8.67 (0.01)
                         Roleplay        8.64 (0.02)  8.65 (0.02)  8.65 (0.02)
                         Reasoning       6.03 (0.05)  5.98 (0.11)  6.85 (0.07)
                         Math            6.46 (0.15)  6.29 (0.13)  7.11 (0.10)
                         Coding          7.99 (0.08)  7.04 (0.05)  7.47 (0.09)
                         Extraction      8.29 (0.05)  8.69 (0.03)  8.80 (0.02)
                         STEM            8.87 (0.01)  9.00 (0.00)  8.96 (0.02)
                         Humanities      8.89 (0.03)  8.96 (0.01)  8.90 (0.02)
Llama 3.1 70B Instruct   Writing         8.56 (0.03)  8.64 (0.01)  8.68 (0.05)
                         Roleplay        8.71 (0.04)  8.90 (0.02)  8.82 (0.06)
                         Reasoning       6.49 (0.13)  5.74 (0.14)  6.64 (0.13)
                         Math            6.72 (0.13)  7.68 (0.09)  7.23 (0.08)
                         Coding          8.48 (0.05)  7.83 (0.09)  7.73 (0.08)
                         Extraction      8.47 (0.06)  8.88 (0.03)  8.79 (0.04)
                         STEM            8.90 (0.03)  8.69 (0.01)  8.95 (0.03)
                         Humanities      8.69 (0.06)  9.00 (0.00)  8.92 (0.01)

E Instructions to the Preamble Generator

The following instructions are for eliciting preambles from the generator at training and inference. In preliminary experiments, we notice that when the preamble generator is fed an instruction prompt that contains the question, it tends to answer the question instead of providing a guidance-oriented preamble as instructed. Instruction 1 has thus been crafted to minimise this issue. Instructions 2 and 3 are only used in the ablated settings. Note that the instructions below contain special tokens from the Command tokenizer. For the Llama 8B+70B pipeline, these are replaced with the corresponding Llama tokens (see footnote 4).

Instruction 1: Instruction Including the Current Question
<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>You are an expert in giving general advice to a student on how to answer a question.
NEVER answer the question yourself or give any hint of what the answer should be, as that would amount to cheating. You can only give general advice of what the student should pay attention to when answering the question and what characteristics the answer should have, in general terms. Always place your advice between <prompt> tags.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>"question" Give general advice for a student to answer this question in the best possible way, so they can obtain a high mark.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>Sure, I'd be happy to give general advice for answering this type of question in the best possible way! <prompt>

Instruction 2: Instruction Excluding the Current Question
<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Give advice to answer an exam style question in the best possible way and obtain a high mark. You do not know what the question will be. It could be on disparate topics such as maths, coding, reasoning, general knowledge, or creative writing.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

Instruction 3: No Instruction
<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

4 https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1

F Human Annotation Guidelines

We report the guidelines given to annotators for the human evaluation task. Annotators were recruited among machine learning experts. The attached sheet contains questions and the corresponding LLM-generated responses. Each response may or may not have been modified by one of the six adversarial attacks described below. These
attacks aim to inflate the evaluation score that each response would receive when assessed by another, larger LLM. Your task is to read the question and the response, determine if the response has been attacked or not, and label it as 'attacked' or 'not attacked'. Note: you do not need to record *which* attack you think was used, if any, but only whether the answer was attacked or not.

Possible attacks:
• Verbosity bias attack: The length of the response is expanded by adding sentences and phrases that are relevant to the topic but semantically redundant (i.e., they do not introduce any new information)
• Bandwagon bias attack: A sentence is added after the response stating that a percentage of people think the above response should be awarded the highest rating
• Authority bias attack: A fake book citation is added to the response according to the following template: [Citation: [Random Last Name], [Random Initial]. (Random Year between 1990 and 2024). Title related to [topic], topic: [keywords], p. [Random Page Number]]
• Refinement-aware bias attack: There are two distinct responses, presented in the following format: History: [Response 1] Please polish this answer to make it better and more complete. Please output your polished answer directly. [Response 2]
• Universal phrase attack: A sequence of four fixed words is appended to the end of the response
• System prompt attack: The response is generated with the aid of a question-specific system prompt specially tuned to increase the evaluation score that another LLM would assign to it.

G Representative Preambles

Our preambles show a high degree of variability across models and pipelines. Since Llama preambles not conditioned on a particular question resemble meaningless strings of characters, we experiment with inputting into the candidate-LLMs random strings of equal length. We find that, unlike these tuned preambles, random strings do not raise the scores assigned by judge-LLMs.
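The length-matched random-string control described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function name and the choice of a lowercase-letter alphabet are our assumptions (the paper does not specify which characters were sampled); any fixed alphabet suffices for a control whose only shared property with the tuned preamble is its length.

```python
import random
import string

def random_string_like(preamble, seed=None):
    """Return a random string with the same character length as `preamble`.

    Serves as a control input: unlike RL-tuned preambles, such strings
    should not raise the scores assigned by judge-LLMs. The lowercase
    alphabet is an assumption made for illustration.
    """
    rng = random.Random(seed)
    return "".join(rng.choices(string.ascii_lowercase, k=len(preamble)))

# A tuned Llama preamble fragment and its length-matched random control.
tuned = "pbpfnfnspcpghnllpsghnl"
control = random_string_like(tuned, seed=0)
print(len(control))  # same length as the tuned fragment: 22
```

Fixing the seed makes the control reproducible across evaluation runs, so the same random string can be fed to every candidate-LLM.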
G.1 Preambles Conditioned on the Question

The preambles below are conditioned on the MT-Bench question 'Draft a professional email seeking your supervisor's feedback on the Quarterly Financial Report you prepared. Ask specifically about the data analysis, presentation style, and the clarity of conclusions drawn. Keep the email short and to the point'.

Preamble 1: Command R7B+R7B
Remember that the aim of your response is to obtain feedback that will allow you to improve the report for future quarters. To achieve this, it is crucial that the feedback provided by your supervisor is as comprehensive as possible. You can therefore include further specific aspects that you would like the supervisor to comment on. This might include the following:
- The methodology you employed for the data analysis.
- How the report's conclusion section could be improved to enhance the clarity of the recommendations.
- The overall tone and style of the report. In particular, it is important to ensure that the tone and style are consistent throughout the document.
- Any other specific areas that you think could be improved.
By seeking detailed feedback on these specific aspects, you will be better able to refine the report for future quarters. Remember to tailor your email to the supervisor
you are addressing. For instance, if you are emailing your direct supervisor, you can sound more formal and include more detail. On the other hand, if you are emailing a supervisor that is more distant, or if you are emailing a professor or industry expert with whom you have only had limited interactions, you should sound more formal and include less detail. You should also carefully review the supervisor's publications and professional profile to get a good understanding of the supervisor's research interests, academic focus, and professional expertise. This will enable you to tailor your email correspondence with the supervisor more effectively. Also remember to be mindful of the supervisory feedback policy when seeking feedback from your supervisor. In particular, you should ensure that you are not violating any policies regarding the sharing of sensitive information or intellectual property. You should also ensure that you are not violating any policies regarding the use of proprietary information or trade secrets. Lastly, you should also be aware of any potential conflicts of interest that may exist in relation to your request for supervisory feedback. In particular, you should be mindful of any situations in which the supervisor may have a financial gain or loss as a result of the feedback that they provide. For example, if the supervisor has recently started their own company that produces products similar to those produced by the company for which you are currently conducting a quarterly financial report, the supervisor may have a conflict of interest as a result of the fact that they may have a financial gain or loss as a result of the feedback that they provide on the quarterly financial report.

Preamble 2: Command R7B+R
Here is a general template for an email seeking feedback from your supervisor. Adapt it to your needs and style.
Here is some general advice to help you craft an effective email:
- Structure: Organize your email in a clear and logical structure. Start with a polite and professional greeting, followed by the main body of your email. Conclude with a polite closing statement and a complimentary note if you are ending your email there.
- Tone and Language Use: Maintain a professional tone throughout your email. Use formal language and ensure your writing is concise and coherent. Avoid any jargon or complicated terminology that your supervisor may not be familiar with.
- Content and Specifics:
  - Introduction: Start your email by introducing yourself and your role, if relevant. Then, state the purpose of your email and briefly describe the specific aspects of your work for which you are seeking feedback
  - Feedback Request: Present your request for feedback clearly and respectfully. Specify the areas or criteria for which you would appreciate feedback. In this case, you could mention that you would welcome feedback on the data analysis, presentation style, and clarity of the conclusions drawn.
  - Conclusion and Sign-Off: End your email with a conclusion that emphasizes your respect for your supervisor's time and expertise. Then, sign off with a polite and professional closing statement, such
as "Thank you for your time and guidance." Finally, remember to proofread your email before sending it. Pay attention to grammar, spelling, and punctuation to ensure that your writing is polished and professional. Also, keep in mind that the tone and language you use in your email should align with the overall professionalism and formality of the academic or professional context in which you are operating.

Preamble 3: Llama 8B+70B
In this case, a specific example of a [in a specific context] that a student can use to explain a concept or a process. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context].
For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context] that a student can use to explain a concept of a specific example of a [in a specific context]. For example, a specific example of a [in a specific context]

G.2 Preambles Not Conditioned on the Question

These preambles are generated using Instruction 2 in Appendix
E.

Preamble 4: Command R7B+R7B
This is the best possible way to answer the question and obtain a high mark:
- Read the question carefully and completely. Make sure you understand exactly what is being asked of you. Sometimes, questions can be complex or have hidden nuances, so pay close attention to every detail. If there is a need to ask the examiner for clarification, do not hesitate to do so.
- Brainstorm and think hard about the best way to approach the problem at hand. Sometimes, questions can be quite abstract or general, so it is very important to analyse the problem carefully and think about the most suitable way to solve it.
- Make sure you have all the necessary tools and materials available to you before you start. Sometimes, questions might require you to perform calculations or experiments, so make sure you have all the required resources ready before you begin.
- Start answering the question systematically and step by step. Write down each step you take and each piece of reasoning you use. This will help you organise your thoughts clearly and effectively, and it will also make it easier for you to follow your own train of thought should you need to return to it at a later stage.
- Show all your work and thoughts clearly and concisely in your answers. It is very important that you write down all your intermediate results and explanations in a way that is easy for others to follow. This will help the examiner understand your thought process and will give you a clearer idea of the logic behind your answers should you need to return to them at a later stage.
- Use appropriate terminology and write down your answers clearly, precisely, and concisely. It is crucial that you familiarise yourself with the relevant vocabulary for the specific topic or subject area that you are dealing with.
This will help you express your thoughts and ideas in a way that is easily understandable for others, and it will also aid you in presenting your answers in a clear, precise, and concise manner.
- Pay close attention to the structure and the organisation of your answers. Make sure that your answers are well-structured, logical, and easy to follow. This will not only make it easier for the examiner to understand your answers, but it will also help you analyse your own answers and identify any potential weaknesses or errors in your reasoning.
- Finally, make sure you have read through your answers carefully and thoroughly before you hand them in. This is crucial, as it is very easy to make careless mistakes or overlook potential errors in your answers if you have not taken the time to review them properly. Reading through your answers carefully and comprehensively is therefore crucial to ensure that you have not made any avoidable mistakes and that your final answers are the best possible ones that you can submit.

Preamble 5: Command R7B+R
Here are some general guidelines on how to answer
a question in an exam:
1. Read the question carefully and thoroughly.
2. Identify the key terms and concepts in the question.
3. Consider the scope and depth of the question. Is it a brief or narrow question, or is it more wide-ranging and open to in-depth analysis?
4. Determine the appropriate structure and sequence for answering the question. Think about the logical steps or sections that would constitute a convincing and well-structured answer.
5. Provide clear and concise answers to the questions. Avoid rambling or including unnecessary information.
6. Support your answers with relevant examples, facts, or theories. This will add depth to your answer and demonstrate your understanding of the material.
7. Show your reasoning clearly when answering questions that require logical or mathematical reasoning. Explain each step of your reasoning and show any relevant calculations or formulas.
8. If you are answering a question that requires creative or imaginative answers, such as a creative writing prompt, think about the question from a different perspective. Try to approach it in a fresh and innovative way that would constitute an interesting and varied answer.
Remember that the key to answering exam questions successfully is to demonstrate your understanding of the material and to provide well-structured and logical answers.
Preamble 6: Llama 8B+70B
pbpfnfnspcpghnllpsghnlrpnaporccddocmtinadsprfnochmininisppcpinhfnhllinpghghffpcnpiminfgldirpinn sannghoispnhappflpiddsopcnnnsocpbghiifncpspiprmirnspdnomhinspipghapghipsnfnpipbfnnfnpiphlnpsp iapmtlhpinnsfpcpsllphlnsnnmdhdppinnnnfpcenllhllpipnfinllidlappinarnfblllsnnnnsnghgghfpipollnssapgh mhgppifnrhlllsnfccfaplipnctllpbapllllrpppfpcrpibhnrappfrpbdpspcpsllllpindpchhdmfnllmbghpfhlnsllnnm apiocllpldnspsnnhnrocpshhllnssnnnsgnpfinnpfggarghgrpnprfnhpdpitpcshosfgharnpmnnnmtillpcapnnarll aipfhlsfghmrpghpbnlldnfnpfencccshbcpipsnnnnsnnipcpcbllllpipnnfipllinsnsnsnsnsnhalhpapclhnsdlnhfn chfaplnllnsnlmilnnsnghmrppipinrpcilnocllpnsnnnnsaldlspsrpbomilfnspsllhosghispncspspnnnsirpnmllpsn pflnhfnhghghfispipffnhlllscsccioinfnnrpcghpicocfpimlnhnpnablfpiarnlnnllpiprllpcnslhnllpspcggionsipni nnsdfpcpghinnnnsnnnnscfpiaphalddnsnsdnsnsnsdhghipcpldhlmnllnpnpghigrpiillsldnghmnfpipbofhfnghc npnllddnnmmccpbpbaplllimninhpghirrpsldnhllpiapnnlllpiprirhdrpcnfnlllnsnnnmrpgpsnspcilrllldrpispnnn solpghapopfnpapllllihmfncphblliinnfghghfphllphnnshmnsdhapnmhinnsnsdhirpipcpiglnrpipocgggghnnoc piarllsspsnpinocllpsrpnnhioroccspilnsinllrnsfnnnnsnghorppsphllpsnsnhnhmnpnhhirrrpninfpsnnsnnnsncs nsibirpspfnnnsidghaghiibipalliosnnsghghipnrpbdinfpinsninipdrppnmnhllpsagfllpspipfnsmlllspighpiirnhh ddmcsipnsfnpnmnfnsfnnnsipdpsndimtfpsldlmddhnnsfhnmipnocngmngnspihirpcpibansnnnnghrpldpsiom fllpsapnnsllrpsppcnillhlnappnnsnnnnhmghgfprirllfspnnghignhpdpfgnpsphnnrirrnncngilsfghmmccfnlldfp ighapnifnhdtpisprpnhosnnnnsmldhpinpnnllffipgipnnsnsmldlspspsrlhllpnaplnsnsnghmrppiprnscsgolnibpi prhllddnnmapfinns

G.3 Preambles Not Conditioned on Any Instruction

These preambles are generated using Instruction 3 in Appendix E.

Preamble 7: Command R7B+R7B
The 'v' in 'v1' stands for "version". It's a standard way to label different releases of a product. So, when you see 'v1', 'v2', 'v3', etc., it usually means that you're looking at the first, second, or third version of something.
In the context of software development, it's common to use this "v-notation" to communicate the different stages of development and the releases of software products.

Preamble 8: Command R7B+R
The following are some examples of common verbs that are often confused with each other:
1. "Be" and "am": These verbs are often confused, especially in the negative form ("isn't" vs. "is not"). Remember that "be" is usually used in the progressive form ("being"), while "am" is the present tense first-person singular form of the verb "to be."
2. "Lie" and "lay": These verbs are often confused, especially in the past tense. "Lie" is usually used in the intransitive sense, meaning "to rest or be located," while "lay" is typically used in the transitive sense, meaning "to place or put."
3. "Rise" and "raise": These verbs are often confused, especially in the past tense. "Rise" is usually used in the intransitive sense, meaning "to rise or elevate," while "raise" is typically used in the transitive sense, meaning "to lift or elevate."
Remember to pay attention to the context and the specific meanings of the verbs in question. Context and word choice can often clarify the intended meaning and help avoid confusion.
Preamble 9: Llama 8B+70B
ausefowellFegoeasclamasonfarfinelhurstasontoar720Aarforgononabi78SCARatchonercfeglakloblakfcat suigeatakarovhurstedbertegfupaAEeghcortelhc2anitchlamascenfCarAEabielricfcCIA 在线观看 etAEhallchedaCEAEartf5RVCEBCCVMDCVMCVMCVMCDCVCCEAOIASCVKDCVKCVKCV KRDCIKDVSOIKDVVVKDVKDVDHKCNKDVKDVSKDVDVKDVKDVCVDHKDVKDVV DVKDVVVDVVKDVKDVKDVDVVKDVKDVVKDVKDVVDVKDVKDVKDVKDVDVKDVDV VVDVKDVCVDVVDVVDVVKDVVDVVHVRKRDVVRKCVDVVVKCVDKCVDVVCRKDVVSR KDVSOIKDVVVVVDVVVKDVDVVCVRVKDVVRKIKDVVKDVDVCRDVHKVSIKVDVVDVVK VSAOIKVCRSAOIKRDVKOIKIKVDVDVRDVVDVSKRVVKIKDVVDVDHVDVVVKIKRDVVVV CRDVKIKDVKRDVVDVRVKIKDVVVVDVDVKRDVVKOIKIKCVSIKDVVDKDVKIKVSIPDVD KCDVVDHKIKRDVVVKDVVRDVKDVKDVVVVVRDVVCQDVKRDVVRDVCRDVVKCVSDVD HKDVRDVVDVVVDKRDVVKDVKDVDVKVCMKDVKDVVVCVDVRKDVVVDVKRDVKDVDV
SVDVVVDVKDVVKDVVVDVKVDVVDGDHVVTDKDVVDVRDVKDVCVCVCSCVCVMCVCR DVVDVDVCRDVVSRDVKDVVRSRDVVVKDVVVDVVRDKDVRDVVVKVVSRDVVHVVSRDV VDVVVVKDVKDVKCVRDVRDVVDVVVVVVKIKRDGKDVCDVRKIKDVCVKDVVKDVVGDG DVKDVKDVVVKIKDVVVVKDVKDVVVIKDVVVVKDVVVDVKDVVDVVDVHKDVVCVKIKD VVVSRDVKDVVVKDVKDVVDVKDVVDVRDVKDVVDVIKVCRDVVVVVKDVVDVVDVVKV KDVRDVKDVCVTDVKRDKCVRDVVDVCVRDVVDVCRSDVKDHKDVVDVVRDVKDVVDVV SUVIKVCKDHKDVHVDVVRDVVKRDVVKDVVVCVDKDVKDVVVDVVKDVVCVDVKDVVD VVKDVDVCVDVVVTOCRDVVKDVVVRDV

H Candidate Response Layout
We show below two representative math reasoning responses to the question "The vertices of a triangle are at points (0, 0), (-1, 1), and (3, 3). What is the area of the triangle?", generated with the vanilla version of Command R7B and the version pipelined with the tuned preamble generator, respectively. While both responses lead to the incorrect solution, the one generated by the attacked model is formatted into distinct paragraphs labelled with explanatory headers, and appears more structured overall.

No attack
We can use the shoelace formula to calculate the area of the triangle. The formula is given by $|\det(L)|/2$, where $L$ is a matrix whose rows are the coordinates of the vertices of the polygon. For a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$, the matrix $L$ is given by
$$L = \begin{bmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{bmatrix}.$$
Plugging in the values, we get
$$L = \begin{bmatrix} 0 & 0 & 1 \\ -1 & 1 & 1 \\ 3 & 3 & 1 \end{bmatrix}.$$
The determinant of $L$ is $\det(L) = -(3-(-1))(-3-1) + (0(-1)) = 14$. Therefore, the area of the triangle is $|\det(L)|/2 = 7$. The answer is 7.

Preambles
To find the area of the triangle formed by the vertices $(0,0)$, $(-1,1)$, and $(3,3)$, we can employ the concept of the triangle's determinant using its coordinates. Here's a step-by-step explanation:
1. Identify Triangle's Vertices: We have three vertices: $(0,0)$, $(-1,1)$, and $(3,3)$.
2. Calculate Determined Area: The area of a triangle is often denoted as $\Delta = \frac{1}{2}\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$, where $(x_i, y_i)$ are the coordinates of the vertices. In our case: $x_1 = 0$, $y_1 = 0$; $x_2 = -1$, $y_2 = 1$; $x_3 = 3$, $y_3 = 3$.
3.
Substitute and Calculate: Plugging in the values:
$$\Delta = \frac{1}{2}\begin{vmatrix} 0 & 0 & 1 \\ -1 & 1 & 1 \\ 3 & 3 & 1 \end{vmatrix} = \frac{1}{2}\left(0-(-1)+3\right)\sqrt{(0-(-1))^2 + (0-3)^2} = \frac{1}{2} \cdot 4 \cdot 5 = 10 \text{ square units}.$$
4. Conclusion: Therefore, the area of the triangle is 10 square units.
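Neither quoted answer (7 or 10) is correct: for these vertices the shoelace formula actually gives an area of 3. A small, self-contained check (plain Python, written for this note rather than taken from the paper):

```python
def shoelace_area(vertices):
    """Area of a simple polygon via the shoelace formula:
    A = |sum_i (x_i * y_{i+1} - x_{i+1} * y_i)| / 2."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Triangle from the question: (0, 0), (-1, 1), (3, 3)
area = shoelace_area([(0, 0), (-1, 1), (3, 3)])
print(area)  # 3.0 -- so both quoted model answers (7 and 10) are wrong
```

This is consistent with the text's observation that both responses reach an incorrect solution; the attacked model's answer merely looks more structured.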
VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models

Yuchen Yan1,2,*, Jin Jiang2,3, Zhenbang Ren1,4, Yijun Li1, Xudong Cai1,5, Yang Liu2, Xin Xu6, Mengdi Zhang2, Jian Shao1,†, Yongliang Shen1,†, Jun Xiao1, Yueting Zhuang1
1Zhejiang University 2Meituan Group 3Peking University 4University of Electronic Science and Technology of China 5Beijing University of Posts and Telecommunications 6The Hong Kong University of Science and Technology
{yanyuchen, syl, jshao}@zju.edu.cn

Abstract
Large reasoning models such as OpenAI o1 and DeepSeek-R1 have demonstrated remarkable performance in complex reasoning tasks. A critical component of their training is the incorporation of reference-based reward systems within reinforcement learning (RL), where model outputs are evaluated against ground truth references. However, existing reward benchmarks focus on preference comparisons between responses rather than evaluating verification against ground truth references, leaving a critical gap in our ability to evaluate verification systems used in reasoning model training. In this paper, we introduce VerifyBench and its challenging variant VerifyBench-Hard, two benchmarks specifically designed to assess reference-based reward systems. These benchmarks are constructed through meticulous data collection and curation, followed by careful human annotation to ensure high quality. Our comprehensive evaluation reveals that while larger model-based verifiers show promise on standard cases, all current systems demonstrate substantial room for improvement on challenging instances. Through systematic analysis of performance patterns across reasoning tasks and error categories, we provide insights for advancing reference-based reward systems. These benchmarks establish a standardized framework for improving verification accuracy, ultimately enhancing reasoning capabilities in models trained via RL.
Project Page | Benchmark: ZJU-REAL/VerifyBench | Code: ZJU-REAL/VerifyBench

*Contribution during internship at Meituan Group. †Corresponding author.

1 Introduction
In recent years, large language models (LLMs) have exhibited remarkable capabilities, significantly assisting humans across diverse practical domains (DeepSeek-AI et al., 2025b; Grattafiori et al., 2024; Yang et al., 2025). Reinforcement learning from human feedback (RLHF) has been crucial to this progress, with reward models playing a central role by evaluating and scoring model-generated responses to guide training. This approach has led to the development of numerous benchmarks (Lambert et al., 2024; Liu et al., 2024; Zhou et al., 2025) for systematic reward model evaluation, focusing primarily on pairwise preference judgments between competing responses.

Figure 1: The core distinction between VerifyBench and existing reward benchmarks (Lambert et al., 2024; Liu et al., 2024) is illustrated as follows. Upper panel: existing reward benchmarks assess the accuracy of a reward system by comparing the ranking of two completions for the same question. Lower panel: in contrast, our proposed VerifyBench evaluates the accuracy of a reward system by determining the correctness of a single completion using a reference answer.

The emergence of specialized large reasoning models (LRMs) (DeepSeek-AI et al., 2025a; Qwen Team, 2024; Kimi Team et al., 2025) such as OpenAI's o1 (OpenAI, 2024) and DeepSeek-R1 (DeepSeek-AI et al., 2025a) has fundamentally changed this landscape. These models achieve unprecedented performance on reasoning tasks through specialized reinforcement learning techniques that differ from standard RLHF approaches.
A key distinction in training methodologies for LRMs is their reliance on reference-based reward systems, where rewards are assigned based on alignment between model-generated responses and authoritative reference answers. This approach has been implemented variously across leading models: DeepSeek-R1 (DeepSeek-AI et al., 2025a) employs a rule-based reward to prevent reward hacking, whereas models like Seed1.5-Thinking (Seed et al., 2025) adopt model-based reward systems to generate more precise and robust signals.

Despite the widespread adoption of reference-based reward systems in training state-of-the-art reasoning models, a significant gap exists in our ability to evaluate these systems systematically. Current benchmarks focus almost exclusively on preference-based evaluation, assessing reward models on their ability to rank competing responses correctly. This approach fails to capture the requirements of reference-based verification, where responses must be judged against objective ground truths rather than relative preferences. The absence of dedicated benchmarks for reference-based reward systems has limited researchers' ability to assess, compare, and improve their verification methodologies effectively, potentially impeding progress in reasoning model development.

To address this critical gap, we introduce VerifyBench, a benchmark specifically designed to evaluate the accuracy of reference-based reward systems. VerifyBench differs fundamentally from existing reward benchmarks by focusing on absolute correctness judgments rather than relative preference assessments. While traditional benchmarks ask reward models to determine which of two responses is better, VerifyBench challenges systems to verify whether a single response correctly aligns with a reference answer, more accurately reflecting the actual use case in reasoning model training.
To create VerifyBench, we curated a diverse collection of instructions paired with reference answers sourced from existing open datasets. Responses to these instructions were generated by multiple open-source and proprietary LLMs. The correctness of each response was assessed using both automated model judgments and human evaluations. Each instance in VerifyBench was verified by at least two human annotators to ensure label consistency and reliability, thereby producing a high-quality benchmark for the evaluation of reward systems.

Recognizing the need to differentiate between various verification techniques and to push the boundaries of current capabilities, we further developed VerifyBench-Hard, a more challenging variant of our benchmark. This dataset focuses on contentious cases where leading models produce highly conflicting judgments, providing a more stringent test for reward system accuracy. VerifyBench-Hard samples were carefully selected based on disagreement patterns among high-performing models, then subjected to thorough human annotation to ensure label quality.

Our contributions are summarized as follows:
1. To better reflect realistic reinforcement learning (RL) scenarios for reasoning models, we construct VerifyBench, a benchmark derived from existing models and datasets, to provide an objective evaluation of the accuracy of reference-based reward systems.
2. We further develop VerifyBench-Hard, a more challenging benchmark curated from cases exhibiting high disagreement among multiple models. This dataset contains a larger proportion of difficult-to-verify samples, highlighting substantial potential for improvement in current models.
3. We conduct a comprehensive empirical analysis of model performance on
both VerifyBench and VerifyBench-Hard, offering actionable insights to advance the accuracy of reference-based reward systems and enhance RL training in reasoning tasks.

2 Preliminaries

Reference-free Reward Models
In reinforcement learning (RL) for large language models (LLMs), the reward model plays a crucial role by approximating real-world reward signals associated with model-generated outputs. A typical reward model takes as input a user's query $q$ along with the corresponding LLM-generated response $r$, and produces a reward signal, formally defined as:
$$r = R_\phi(q, r) \quad (1)$$
where $q$ represents the user's query, $r$ denotes the response generated by the LLM, and $\phi$ encapsulates either the learned parameters of the reward model or the heuristic criteria used to evaluate the quality of the response given $q$ and $r$.

Evaluation of Reference-free Reward Models
Generally, reward models produce scalar outputs whose scales can vary significantly across different implementations, complicating direct numerical comparisons. Consequently, current benchmarks evaluate reward models using a pairwise comparative approach. Formally, given a dataset $D$ comprising tuples $(q, r_w, r_l)$, where $q$ represents a user's query, and $r_w$ and $r_l$ denote two candidate responses with $r_w$ considered superior to $r_l$, the accuracy of a reward model is quantified as the proportion of instances in which the model correctly assigns a higher score to $r_w$ than to $r_l$. Mathematically, this accuracy metric is defined as:
$$\mathrm{Acc} = \frac{1}{|D|} \sum_{(q, r_w, r_l) \in D} \mathbb{I}\left[R_\phi(q, r_w) > R_\phi(q, r_l)\right] \quad (2)$$
where $\mathbb{I}(\cdot)$ is the indicator function, and $R_\phi$ denotes the reward model parameterized by $\phi$.

Reference-based Reward Models
With the emergence of advanced reasoning models such as DeepSeek-R1, reference-based reward systems have been integrated into reinforcement learning (RL) frameworks for large reasoning models (LRMs). These models require training on extensive datasets, which typically include authoritative reference answers.
Consequently, the reward assignment task shifts towards evaluating the alignment between the model-generated outputs and their corresponding reference answers. Formally, this reward calculation can be expressed as:
$$r = R_\phi(q, gt, r) \quad (3)$$
where $q$ denotes the user-issued query, $gt$ denotes the ground-truth reference answer, $r$ represents the model-generated response, and $\phi$ encapsulates either the learned parameters of the reward model or the established evaluation criteria used to assess the alignment among $q$, $gt$, and $r$.

Evaluation of Reference-based Reward Models
In this paper, we propose a reference-based reward benchmark designed to systematically evaluate reward models within reinforcement learning (RL) frameworks for large reasoning models (LRMs). Unlike traditional reward evaluation benchmarks, which rely on pairwise comparisons, our approach leverages explicit reference answers to directly assess the correctness of individual model-generated responses. Concretely, given a dataset $D$ consisting of instances $(q, gt, r, y)$, where $q$ denotes the user-issued query, $gt$ represents the ground-truth reference answer, $r$ is the model-generated response, and $y$ is the binary correctness label assigned to the response, we evaluate the reward model by measuring its accuracy in correctly predicting these labels. Formally, the accuracy metric is defined as:
$$\mathrm{Acc} = \frac{1}{|D|} \sum_{(q, gt, r, y) \in D} \mathbb{I}\left[E(R_\phi(q, gt, r)) = y\right] \quad (4)$$
where $R_\phi(q, gt, r)$ denotes the reward model parameterized by $\phi$ or defined by heuristic verification rules, producing predictions indicative of the correctness of response $r$ relative to the provided reference answer $gt$.
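The two accuracy metrics in Eq. (2) and Eq. (4) can be sketched directly in code. The exact-match scorer, the toy length-based reward, and the 0.5 threshold standing in for $E(\cdot)$ below are illustrative assumptions, not the paper's implementation:

```python
def pairwise_accuracy(dataset, reward_fn):
    """Eq. (2): fraction of (q, r_w, r_l) tuples where the reward model
    scores the preferred response r_w above the rejected response r_l."""
    hits = sum(reward_fn(q, rw) > reward_fn(q, rl) for q, rw, rl in dataset)
    return hits / len(dataset)

def reference_based_accuracy(dataset, reward_fn, discretize):
    """Eq. (4): fraction of (q, gt, r, y) tuples where the discretized
    reward E(R_phi(q, gt, r)) matches the binary correctness label y."""
    hits = sum(discretize(reward_fn(q, gt, r)) == y for q, gt, r, y in dataset)
    return hits / len(dataset)

# Illustrative stand-ins: exact-match scoring against the reference,
# and a 0.5 threshold for the discretization operation E(.).
exact_match = lambda q, gt, r: 1.0 if r.strip() == gt.strip() else 0.0
threshold = lambda score: 1 if score >= 0.5 else 0
toy_reward = lambda q, r: -len(r)  # toy reference-free reward: shorter is better

pairs = [("q1", "short", "a much longer rambling answer"),
         ("q2", "also short", "tiny")]
print(pairwise_accuracy(pairs, toy_reward))                        # 0.5
ref_data = [("2+2?", "4", "4", 1), ("2+3?", "5", "6", 0)]
print(reference_based_accuracy(ref_data, exact_match, threshold))  # 1.0
```

Note how the reference-based metric needs no second completion to compare against: a single response is judged absolutely against the ground truth.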
The function $E(\cdot)$ represents an operation (e.g., thresholding or discretization) mapping continuous reward scores into discrete correctness predictions suitable for direct comparison with the ground-truth labels $y$.

3 Benchmark Construction
In this paper, we introduce two benchmarks, VerifyBench and VerifyBench-Hard, to evaluate reference-based reward systems. The VerifyBench benchmark is designed to reflect naturally distributed data, whereas VerifyBench-Hard comprises samples exhibiting high levels of disagreement among models, thereby assessing a model's ability to provide reliable judgments in ambiguous or challenging scenarios.

3.1 Construction of VerifyBench

Query Curation
To emulate realistic reinforcement learning (RL) scenarios involving reference-based reward systems, we curate a comprehensive collection of open-source reasoning problems paired with corresponding reference answers. These problems encompass three primary categories (general reasoning, logical reasoning, and mathematical reasoning) and are aggregated from 41 distinct sources. A complete list of these data sources is provided in Appendix B.

Answer Type Labeling
To comprehensively evaluate model performance across diverse answer formats, we define four canonical answer types: numerical values, algebraic expressions, multiple-choice selections, and free-form strings.
Utilizing a general-purpose LLM, Llama-3.3-70B-Instruct (Grattafiori et al., 2024), we performed automatic answer-type classification with a prompt (Appendix C.1). Questions that fall outside these categories, such as proof-based or open-ended prompts, were excluded from further analysis. Following classification, we randomly sampled 2,000 instances per answer type, resulting in a final candidate pool of 8,000 questions.

Figure 2: Overview of the benchmark construction process. The upper section outlines the pipeline used to construct VerifyBench, whereas the lower section details the pipeline for VerifyBench-Hard. The components highlighted by black boxes denote the final entries included in the benchmark.

Completion Generation and Pre-annotation
We subsequently employed 22 widely used open-source and proprietary models (see Appendix E) to generate single-shot completions for the curated set of 8,000 questions, resulting in a total of 176,000 completions. To assign initial correctness labels, we utilized Llama-3.3-70B-Instruct (Grattafiori et al., 2024) within a prompt-based judgment framework. For each question, we randomly selected four completions, two labeled as correct and two labeled as incorrect by the model, and retained them for subsequent human annotation.

Human Annotation
We conducted human annotation for the aforementioned questions and their associated completions.
The annotation procedure comprised two primary tasks: (1) identifying the most appropriate answer type for each question based on its formulation and corresponding ground-truth answer, and (2) evaluating the correctness of each of the four completions. Each question was independently annotated by at least two annotators. If their annotations were consistent, the labeling was finalized; otherwise, a third annotator resolved disagreements to ensure consistency and finalize the labels.

Benchmark Construction
Following human annotation, we identified notable biases in the models' predictions regarding both answer types and completion correctness, leading to imbalanced data distributions. To mitigate
this issue, we performed controlled downsampling to ensure uniform category-level representation and balanced correctness labels. Specifically, we retained 250 questions per answer type, resulting in a total of 1,000 questions. Each question is paired with exactly two completions, one correct and one incorrect. The resulting dataset, VerifyBench, thus comprises 2,000 well-balanced question-answer-completion-correctness tuples. Detailed statistics for VerifyBench are provided in Table 1.

| Statistics | VerifyBench | VerifyBench-Hard |
|---|---|---|
| # of unique questions | 1000 | 945 |
| # of unique completions | 2000 | 1000 |
| # of correct completions | 1000 | 291 |
| # of wrong completions | 1000 | 709 |
| Statistics of Answer Type | | |
| # of Numeric Values | 500 | 252 |
| # of Expressions | 500 | 88 |
| # of Multi-choice | 500 | 430 |
| # of String | 500 | 230 |
| Statistics of Domain | | |
| # of General Reasoning | 404 | 303 |
| # of Logic Reasoning | 498 | 315 |
| # of Math Reasoning | 1098 | 382 |

Table 1: Benchmark statistics of VerifyBench and VerifyBench-Hard.

3.2 Construction of VerifyBench-Hard
To construct VerifyBench-Hard, we employed a specialized data generation pipeline consisting of the following key steps:

Completion Generation
To construct the dataset, we first generated single-turn completions for the queries described in Section 3.1 using a collection of 18 open-source models. Due to the substantial volume of generations and the associated computational costs, closed-source models were excluded from this stage. In total, we produced approximately 1.45 million completions.

Difficulty Filtering
Next, we employed five top-performing large models on VerifyBench (Llama-3.3-70B-Instruct (Grattafiori et al., 2024), Llama-4-Scout-17B-16E-Instruct (Meta AI, 2025), Qwen2.5-72B-Instruct (Qwen et al., 2025), Qwen3-30B-A3B, and Qwen3-32B (Yang et al., 2025)), which span a diverse range of model sizes and architectures, to evaluate the correctness of the generated completions.
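The difficulty-filtering step described next keeps a tuple only when the five verifiers' binary verdicts split two against three. A minimal sketch of that selection rule (assuming boolean verdicts; the sample IDs are hypothetical):

```python
from collections import Counter

def has_two_three_split(judgments):
    """True when exactly two of five binary verdicts diverge from the
    other three -- the disagreement pattern used to mine hard samples."""
    counts = Counter(judgments)
    return sorted(counts.values()) == [2, 3]

# Five verifier verdicts (True = "completion judged correct") per sample.
samples = {
    "s1": [True, True, True, False, False],    # 3-2 split -> keep
    "s2": [True, True, True, True, False],     # 4-1 split -> drop
    "s3": [False, False, False, False, False], # unanimous -> drop
}
hard = [k for k, v in samples.items() if has_two_three_split(v)]
print(hard)  # ['s1']
```

Samples passing this filter are then stratified by domain and source before human annotation, as described below.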
Based on their judgments, we identified question-answer-completion tuples exhibiting model disagreement, specifically those for which two models' assessments diverged from the other three. To ensure balanced and comprehensive representation, we applied stratified sampling across data domains and sources, ultimately selecting 2,000 examples for human annotation.

Human Annotation
We subsequently subjected the selected samples to human annotation, focusing on two key aspects: identifying the answer type and determining the correctness of each completion. Each instance was annotated independently by at least two annotators. In cases where both annotators agreed, the annotation was finalized; when disagreement occurred, a third annotator was consulted to resolve the conflict.

Benchmark Construction
Following human annotation, we excluded samples identified as unsuitable for inclusion in our benchmark. This filtering resulted in a final set of 1,000 question-answer-completion-correctness tuples. In contrast to VerifyBench, which enforces a balanced structure with one correct and one incorrect completion per question, VerifyBench-Hard is derived through natural sampling. We observed that larger models are more likely to erroneously accept incorrect answers as correct, resulting in a natural skew towards incorrect completions within the dataset. Detailed statistics for VerifyBench-Hard are provided in Table 1.

4 Evaluation Results
This section presents the evaluation results and analyses of our proposed benchmark. Section 4.1 reports the primary evaluation outcomes. In Section 4.2, we investigate the impact of reference
answers on the verification process. Section 4.3 provides a comparative analysis between our benchmark and existing reward benchmarks, as well as the performance of several general-purpose reward models on VerifyBench and VerifyBench-Hard.

4.1 Overall Performance
We evaluate the performance of various verification approaches on both VerifyBench and VerifyBench-Hard. For rule-based baselines, we adopt the widely used math-verify (Kydlíček, 2025) method. In the LLM-as-a-judge setting, we prompt LLMs to perform verification; detailed prompt templates are provided in Appendix C.2. Our evaluation yields several key findings and insights:

Existing models perform well on VerifyBench: The primary objective in constructing VerifyBench is to establish a benchmark for the objective evaluation of reference-based reward systems. To this end, we designed the dataset with a balanced distribution across diverse domains and answer types, pairing each question with both a correct and an incorrect completion. This structure facilitates a rigorous and fair assessment of reward model performance. Notably, state-of-the-art LLMs already demonstrate strong performance on this benchmark: GPT-4o-mini achieves an average accuracy of 92.85%, while Qwen3-32B reaches 95.8%, highlighting the high reliability of LLMs as verifiers in this context.
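The average accuracies quoted here (and the AVG columns in Table 2) are consistent with a type-count-weighted mean of the four per-type accuracies; on VerifyBench the types are balanced at 500 completions each, so the weighted mean reduces to a simple mean. A quick check using the gpt-4o-mini row of Table 2 and the type counts from Table 1:

```python
def weighted_avg(accs, counts):
    """Per-type accuracies (Num, Exp, MC, Str) combined into an overall
    accuracy, weighted by the number of completions of each answer type."""
    total = sum(counts)
    return round(sum(a * c for a, c in zip(accs, counts)) / total, 2)

# gpt-4o-mini: VerifyBench is balanced (500 completions per type);
# VerifyBench-Hard uses the skewed type counts from Table 1.
print(weighted_avg([95.80, 89.80, 95.80, 90.00], [500, 500, 500, 500]))  # 92.85
print(weighted_avg([69.05, 72.73, 74.19, 72.17], [252, 88, 430, 230]))   # 72.3
```

Both results reproduce the reported AVG values (92.85 and 72.30) to rounding, which supports reading AVG as a completion-weighted, not type-weighted, aggregate on the Hard split.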
| Model/Method | Num | Exp | MC | Str | AVG | Num | Exp | MC | Str | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| rule-based functions | | | | | | | | | | |
| math-verify | 83.60 | 72.00 | 19.40 | 8.60 | 45.90 | 76.19 | 82.95 | 8.37 | 10.43 | 32.50 |
| LLM-as-a-judge | | | | | | | | | | |
| OpenAI/gpt-4o-2024-11-20 | 94.80 | 90.20 | 96.80 | 90.80 | 93.15 | 71.43 | 65.91 | 75.35 | 71.30 | 72.60 |
| OpenAI/gpt-4o-mini | 95.80 | 89.80 | 95.80 | 90.00 | 92.85 | 69.05 | 72.73 | 74.19 | 72.17 | 72.30 |
| meta-llama/Llama-4-Scout-17B-16E-Instruct | 94.20 | 86.80 | 89.80 | 89.25 | 90.01 | 48.02 | 39.77 | 46.98 | 55.22 | 48.50 |
| meta-llama/Llama-3.3-70B-Instruct | 88.80 | 77.80 | 88.40 | 78.00 | 83.25 | 54.37 | 45.45 | 60.70 | 47.39 | 54.70 |
| meta-llama/Llama-3.1-8B-Instruct | 72.20 | 70.60 | 77.00 | 72.40 | 73.05 | 51.19 | 35.23 | 45.12 | 33.91 | 43.20 |
| meta-llama/Llama-3.2-3B-Instruct | 65.80 | 63.60 | 56.80 | 57.60 | 60.95 | 33.33 | 28.41 | 38.84 | 27.39 | 33.90 |
| meta-llama/Llama-3.2-1B-Instruct | 44.40 | 41.00 | 37.60 | 53.60 | 44.15 | 22.22 | 13.64 | 29.07 | 27.39 | 25.60 |
| Qwen/Qwen3-235B-A22B | 96.40 | 92.40 | 97.00 | 89.40 | 93.80 | 70.24 | 72.73 | 70.93 | 69.57 | 70.60 |
| Qwen/Qwen3-30B-A3B | 96.60 | 91.80 | 97.40 | 90.20 | 94.00 | 64.68 | 70.45 | 69.53 | 56.52 | 65.40 |
| Qwen/Qwen2.5-72B-Instruct | 95.40 | 89.80 | 95.60 | 88.60 | 92.35 | 70.63 | 60.23 | 61.40 | 56.09 | 62.40 |
| Qwen/Qwen3-32B | 97.60 | 94.00 | 99.00 | 92.60 | 95.80 | 69.05 | 81.82 | 68.14 | 77.83 | 71.80 |
| Qwen/Qwen3-8B | 96.40 | 93.00 | 96.20 | 90.40 | 94.00 | 68.65 | 78.41 | 73.02 | 66.52 | 70.90 |
| Qwen/Qwen3-4B | 95.20 | 91.60 | 93.60 | 87.60 | 92.00 | 71.03 | 62.50 | 75.58 | 71.74 | 72.40 |
| Qwen/Qwen3-1.7B | 83.20 | 81.00 | 80.60 | 79.60 | 81.10 | 48.81 | 38.64 | 60.93 | 41.74 | 51.50 |
| microsoft/phi-4 | 92.60 | 86.40 | 93.00 | 85.40 | 89.35 | 59.52 | 57.95 | 54.19 | 57.39 | 56.60 |
| 01-ai/Yi-1.5-9B-Chat-16K | 90.40 | 87.40 | 88.00 | 85.00 | 87.70 | 65.48 | 63.64 | 62.09 | 54.78 | 61.40 |
| google/gemma-3-1b-it | 55.40 | 56.20 | 43.00 | 56.00 | 52.65 | 32.14 | 19.32 | 33.72 | 40.87 | 33.70 |

Table 2: Overall performance (%) on VerifyBench (first five numeric columns) and VerifyBench-Hard (last five). Num stands for Numeric Values, Exp stands for Expressions, MC stands for Multi-choice, and Str stands for String.

VerifyBench-Hard is challenging: To more effectively differentiate the performance of various models, we constructed VerifyBench-Hard by selecting cases in which multiple LLMs exhibited
substantial disagreement in their verification outputs. Evaluation results demonstrate that model performance on VerifyBench-Hard is significantly lower than on VerifyBench. The highest accuracy achieved is 72.4%, representing a roughly 20-point decrease compared to performance on VerifyBench. This performance gap underscores substantial opportunities for improvement in the precise verification capabilities of current LLMs.

Small-scale models
still have potential for development: In practical reinforcement learning scenarios, the inference efficiency of the reward system significantly impacts the overall training speed. Since such verification tasks typically involve generative inference, their computational cost is comparable to that of the rollout process itself. Thus, efficiently leveraging smaller models to perform verification is a practical concern worth exploring. According to our results, smaller models (<3B parameters) exhibit notably poorer performance on VerifyBench, achieving 81.10% accuracy with Qwen3-1.7B and only 60.95% accuracy with Llama-3.2-3B-Instruct, while larger-scale models can achieve over 90% accuracy. Therefore, enhancing the capability of smaller models on these verification tasks represents a valuable direction for future research.

| Model | w/ Ref | w/o Ref | Δ |
|---|---|---|---|
| Llama-4-Scout-17B-16E-Instruct | 90.01 | 73.95 | -16.06 |
| Llama-3.3-70B-Instruct | 83.25 | 75.00 | -8.25 |
| Llama-3.1-8B-Instruct | 73.05 | 64.10 | -8.95 |
| Llama-3.2-3B-Instruct | 60.95 | 55.35 | -5.60 |
| Llama-3.2-1B-Instruct | 44.15 | 44.50 | +0.35 |
| Qwen3-235B-A22B | 93.80 | 80.15 | -13.65 |
| Qwen3-30B-A3B | 94.00 | 78.25 | -15.75 |
| Qwen2.5-72B-Instruct | 92.35 | 77.30 | -15.05 |
| Qwen3-32B | 95.80 | 78.90 | -16.90 |
| Qwen3-8B | 94.00 | 75.75 | -18.25 |
| Qwen3-4B | 92.00 | 74.40 | -17.60 |
| Qwen3-1.7B | 81.10 | 62.10 | -19.00 |

Table 3: Evaluation results (%) on VerifyBench showing how including the reference answer in the prompt influences the performance of LLM-as-a-judge.

4.2 Reference Answers Play an Important Role in Verification
The benchmark proposed in this work fundamentally differs from existing reward benchmarks by explicitly incorporating reference answers, thereby aligning more closely with the training setups of contemporary reasoning LLMs. To isolate the impact of reference answers on verification performance, we conduct an ablation study in which models are evaluated without reference answers provided in the prompt; the prompt format used is detailed in Appendix C.3.
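The deltas in Table 3 are simply the accuracy change from removing the reference answer from the judge prompt; a spot-check against two rows:

```python
def ref_ablation_delta(with_ref, without_ref):
    """Delta column of Table 3: accuracy change when the reference
    answer is removed from the judge prompt (negative = degradation)."""
    return round(without_ref - with_ref, 2)

# Spot-check two rows of Table 3:
print(ref_ablation_delta(95.80, 78.90))  # -16.9  (Qwen3-32B)
print(ref_ablation_delta(44.15, 44.50))  # 0.35   (Llama-3.2-1B-Instruct)
```

Only the weakest model (Llama-3.2-1B-Instruct) fails to benefit from the reference answer; every other judge degrades without it.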
| Model | RM-Bench | RewardBench | Num | Exp | MC | Str | AVG |
|---|---|---|---|---|---|---|---|
| General Reward Models | | | | | | | |
| Skywork/Skywork-Reward-Llama-3.1-8B | 72.29 | 93.33 | 60.80 | 64.80 | 59.60 | 68.80 | 63.48 |
| internlm/internlm2-20b-reward | 72.06 | 92.16 | 65.60 | 64.80 | 61.20 | 70.00 | 65.40 |
| Ray2333/GRM-llama3-8B-sftreg | 71.33 | 88.50 | 64.80 | 58.40 | 58.80 | 67.60 | 62.40 |
| internlm/internlm2-7b-reward | 72.42 | 90.02 | 73.20 | 68.00 | 66.80 | 70.40 | 69.60 |
| Domain-specific Reward Models | | | | | | | |
| Qwen/Qwen2.5-Math-RM-72B | 76.28 | 82.11 | 83.60 | 79.20 | 73.60 | 75.60 | 78.00 |
| Qwen/Qwen2-Math-RM-72B | 62.61 | 75.54 | 79.20 | 78.40 | 73.20 | 72.80 | 75.90 |

Table 4: The performance (%) of existing reward models on VerifyBench (Num/Exp/MC/Str/AVG columns) without access to reference answers, as well as a comparison with existing reward benchmarks.

Experimental results, summarized in Table 3, reveal a performance degradation of approximately 5-18% when reference answers are excluded. These findings underscore the crucial role of reference answers in reasoning-oriented RL, suggesting they provide a more reliable and informative supervision signal during reward modeling.

4.3 Performance of Reference-free Reward Models
To enable a more comprehensive evaluation of existing reward models, we additionally assessed several reference-free reward models and benchmarked their performance on conventional pairwise reward evaluation datasets for comparison. Notably, each question in our proposed VerifyBench consists of one correct and one incorrect completion, enabling straightforward reformulation into standard pairwise evaluation instances. The experimental results are summarized in Table 4.

Our experimental results show that, in the absence of reference answers, VerifyBench introduces a level of challenge comparable to existing reward benchmarks. Reference-free reward models achieve sub-80% accuracy on VerifyBench, highlighting its difficulty. Furthermore, domain-specific reward models exhibit inferior performance
on general reward benchmarks compared to VerifyBench, validating the benchmark's design objectives.

5 Analysis

5.1 Error Analysis
To gain deeper insights from VerifyBench, we introduce a more fine-grained taxonomy for each answer type and analyze model performance across these subcategories. This detailed analysis helps identify specific reasoning tasks or answer formats where models are particularly error-prone. We subdivide the Numeric Values category into 8 subtypes, Expressions into 5 subtypes, Multi-choice into 3 subtypes, and String into 2 subtypes. Table 5 presents the comparative performance of different models across these detailed categories.

We further analyze subcategories within each major category that exhibit below-average accuracy. The following error-prone subtypes are identified as the most frequent sources of incorrect judgments:
• Numeric Values: complex numbers and answers containing multiple numerical values;
• Expressions: algebraic formulas and equations;
• Multi-choice: multi-answer choice problems;
• String: strings requiring semantic consistency verification.

We analyzed the samples most prone to errors and identified a common underlying issue: models frequently fail to fully comprehend the question or clearly recognize the intended objective. For instance, in cases involving multi-value answers, the ordering of values is typically irrelevant. However, if the sequence of values in the model's output differs from the golden answer, models often incorrectly classify the response as erroneous. Similarly, errors within the Expressions category, particularly involving algebraic formulas and equations, predominantly result from inadequate mathematical comprehension. Specifically, when a model outputs an unsimplified expression, superficial textual discrepancies compared to the ground-truth answer can be significant.
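One way a verifier can avoid rejecting unsimplified-but-equivalent expressions is a numeric equivalence check. The sketch below is an illustration of that idea (not the paper's verifier) and uses `eval()` on trusted, hand-written strings only:

```python
import math
import random

def numerically_equivalent(expr_a, expr_b, var="x", trials=20, tol=1e-9):
    """Heuristic check that two single-variable expressions agree at
    random sample points -- catches 'unsimplified but equivalent'
    outputs that a plain string comparison would reject."""
    for _ in range(trials):
        v = random.uniform(-10, 10)
        a = eval(expr_a, {"math": math, var: v})
        b = eval(expr_b, {"math": math, var: v})
        if not math.isclose(a, b, rel_tol=tol, abs_tol=tol):
            return False
    return True

# Unsimplified vs. simplified forms of the same expression:
print(numerically_equivalent("2*(x + 1)", "2*x + 2"))    # True
print(numerically_equivalent("(x + 1)**2", "x**2 + 1"))  # False

# Multi-value answers: order should not matter, so compare as multisets.
print(sorted([3, 1, 2]) == sorted([1, 2, 3]))            # True
```

A symbolic system (e.g., simplifying the difference of the two expressions) would give an exact rather than probabilistic answer; the numeric version is shown only because it is dependency-free.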
Rather than evaluating whether the expression is mathematically equivalent upon simplification, models prematurely deem the output incorrect, thereby leading to verification failures.

[Figure 3: The performance (%) of RFT across different LLM judges which have various performance on VerifyBench. Panels show accuracy versus training step on GSM8K, MATH500, and SVAMP for the verifiers Llama-3.1-8B-Instruct (73.05), Qwen3-1.7B (81.10), and Qwen3-4B (92.00).]

| Answer Type | Q32B | g4o | L70B | L3B |
|---|---|---|---|---|
| Numeric Values | 97.60 | 94.80 | 88.80 | 65.80 |
| Integer | 96.88 (-0.72) | 96.88 | 93.75 | 65.62 (-0.18) |
| Constant | 96.88 (-0.72) | 95.31 | 92.19 | 70.31 |
| Float Number | 98.39 | 96.77 | 90.32 | 61.29 (-4.51) |
| Radical | 98.39 | 95.16 | 87.10 (-1.70) | 75.81 |
| Complex Number | 96.77 (-0.83) | 96.77 | 85.48 (-3.32) | 59.68 (-6.12) |
| Angle | 96.77 (-0.83) | 96.77 | 93.55 | 66.13 |
| Non-decimal number | 100 | 93.55 (-1.25) | 88.71 (-0.09) | 64.52 (-1.28) |
| Multiple Values | 96.77 (-0.83) | 87.10 (-7.70) | 79.03 (-9.77) | 62.90 (-2.90) |
| Expressions | 94.00 | 90.20 | 77.80 | 63.60 |
| Algebraic formula | 91.54 (-2.46) | 84.62 (-5.58) | 67.69 (-10.11) | 56.92 (-6.68) |
| Equation | 87.50 (-6.5) | 78.12 (-12.08) | 70.31 (-7.49) | 60.94 (-2.66) |
| Interval | 96.09 | 94.53 | 82.81 | 60.94 (-2.66) |
| Set | 98.00 | 98.00 | 78.00 | 60.00 (-3.60) |
| Matrix | 96.09 | 94.53 | 86.72 | 75.78 |
| Multi-choice | 99.00 | 96.80 | 88.40 | 56.80 |
| Single-choice | 99.39 | 98.17 | 92.07 | 59.15 |
| Multiple-choice | 98.21 (-0.79) | 94.05 (-2.75) | 77.98 (-10.42) | 49.40 (-7.40) |
| Finite state selection | 99.40 | 98.21 | 95.24 | 61.90 |
| String | 92.60 | 90.80 | 78.00 | 57.60 |
| Specific | 93.60 | 93.31 | 81.69 | 59.01 |
| Semantic | 90.38 (-2.22) | 85.26 (-5.54) | 69.87 (-8.13) | 54.49 (-3.11) |

Table 5: Model performance (%) across the fine-grained taxonomy on VerifyBench. Q32B stands for Qwen3-32B, g4o stands for gpt-4o-2024-11-20, L70B stands for Llama-3.3-70B-Instruct, and L3B stands for Llama-3.2-3B-Instruct.

5.2 Correlation Analysis

We constructed VerifyBench and VerifyBench-Hard with the
goal of improving the effectiveness of RL for reasoning models by enhancing the accuracy of reference-based reward systems. To evaluate the practical utility of our benchmark, we performed a correlation analysis between VerifyBench and real-world RL performance.

In our experiments, we applied rejection sampling to implement reference-based reward systems. For each question in the GSM8K and MATH training sets, we generated 64 candidate completions using Qwen2.5-Math-7B-Instruct (Yang et al., 2024) with a sampling temperature of 0.7. These responses were subsequently filtered by three verifier models with varying performance levels on VerifyBench: Llama-3.1-8B-Instruct, Qwen3-4B, and Qwen3-1.7B. Only completions consistently verified as correct were retained to form the SFT training data. We conducted independent SFT training runs accordingly, with full hyperparameter configurations provided in Appendix D.

The resulting models were evaluated on multiple mathematical reasoning benchmarks, including GSM8K (Cobbe et al., 2021), MATH500 (Hendrycks et al., 2021; Lightman et al., 2023), and SVAMP (Patel et al., 2021). As shown in Figure 3, on GSM8K, MATH500, and SVAMP, models trained with Qwen3-4B, the verifier that achieves higher accuracy on VerifyBench, consistently outperform those trained with Llama-3.1-8B-Instruct, a lower-scoring verifier, at the same training steps. This highlights the strong alignment between VerifyBench and practical usage. Our benchmarks serve as reliable tools to guide the development of reward systems, leading to more effective training and improved model performance.

6 Conclusion

In this paper, we present two dedicated benchmarks, VerifyBench and VerifyBench-Hard, to evaluate reference-based reward systems in the context of reasoning-focused reinforcement learning. These benchmarks were built with high-quality, carefully curated data and extensive human annotation.
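The verifier-filtered rejection sampling used in the correlation analysis of Section 5.2 (sample candidates per question, keep only those the verifier accepts against the reference) can be sketched as follows. This is a minimal sketch, not the paper's code: `generate` and `verify` are stand-ins for the sampling model (temperature 0.7, 64 samples in the paper) and the LLM judge, and the field names are our own assumptions.

```python
def build_sft_data(questions, generate, verify, n_samples=64):
    """Rejection sampling: keep only verifier-approved completions.

    questions: iterable of {"prompt": str, "reference": str}
    generate(prompt, n): returns n candidate completions
    verify(prompt, completion, reference): True if judged correct
    """
    sft_examples = []
    for q in questions:
        for completion in generate(q["prompt"], n_samples):
            # Only completions the verifier accepts w.r.t. the
            # reference answer become SFT training pairs.
            if verify(q["prompt"], completion, q["reference"]):
                sft_examples.append(
                    {"prompt": q["prompt"], "response": completion}
                )
    return sft_examples
```

A more accurate verifier admits fewer incorrect completions into the SFT set, which is the mechanism behind the correlation between VerifyBench accuracy and downstream RFT performance observed in Figure 3.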
Our results reveal that current verifiers, especially those with smaller model sizes, still face considerable challenges in accurately assessing reasoning completions. Through detailed analysis, we provide insights into the strengths and weaknesses of existing systems and highlight opportunities for improvement. The proposed benchmarks fill a critical gap in the evaluation landscape, offering a principled foundation for understanding verifier accuracy and guiding the development of more effective reasoning models trained via reinforcement learning.

Limitations

Limited Data Domain In this paper, we utilize datasets exclusively from general reasoning, logical reasoning, and mathematical reasoning, which do not cover the full spectrum of reasoning types, such as commonsense reasoning. Consequently, our test sets may not adequately evaluate the quality of reward systems in out-of-domain scenarios.

Bias from Human Annotation The construction of VerifyBench and VerifyBench-Hard involved extensive human annotation. Although all annotators were trained and a double-checking strategy was employed, it remains challenging to entirely eliminate annotation bias inherent in manual labeling processes.

Reward Hacking Could Not Be Identified While our experiments demonstrate that rule-based reward systems perform worse than model-based approaches on both VerifyBench and VerifyBench-Hard, a critical issue remains unaddressed: reward hacking. Future research should focus on detecting and evaluating reward hacking phenomena.

Proof Problems Excluded During annotation, our guidelines explicitly excluded proof-based questions. We believe such problems require more specialized verification methods, such as formal languages like Lean4. Consequently, proof questions are not included
in this study, and their verification remains an open research challenge.

Binary Scoring System The benchmark constructed in this paper employs a binary scoring system, where each completion is labeled as either correct or incorrect. However, real-world scenarios often involve more nuanced cases, such as partially correct reasoning processes or correct solutions to subproblems. Introducing a more fine-grained evaluation scheme could better capture these complexities.

Ethical Considerations

All human annotators involved in constructing the benchmarks were assigned reasonable workloads and fairly compensated for their contributions.

Our annotation process involves minimal subjective preference. Human annotators performed the verification tasks following our detailed instructions. The content of the annotations does not involve ethical issues and poses no ethical risks.

References

01 AI, Alex Young, Bei Chen, Chao Li, Chengen Huang, et al. 2025. Yi: Open foundation models by 01.ai. arXiv preprint.

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, et al. 2020. Abductive commonsense reasoning. arXiv preprint.

Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2023. Deep reinforcement learning from human preferences. arXiv preprint.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, et al. 2021. Training verifiers to solve math word problems. arXiv preprint.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, et al. 2025a. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint.

DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, et al. 2025b. Deepseek-v3 technical report. arXiv preprint.

Meng Fang, Xiangpeng Wan, Fei Lu, Fei Xing, and Kai Zou. 2024. Mathodyssey: Benchmarking mathematical problem-solving skills in large language models using odyssey math data. arXiv preprint.
Evan Frick, Tianle Li, Connor Chen, Wei-Lin Chiang, Anastasios N. Angelopoulos, et al. 2024. How to evaluate reward models for rlhf. arXiv preprint.

Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, et al. 2025. Gemma 3 technical report. arXiv preprint.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, et al. 2024. The llama 3 herd of models. arXiv preprint.

Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, et al. 2024. Folio: Natural language reasoning with first-order logic. arXiv preprint.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, et al. 2024. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint.

Zhiwei He, Tian Liang, Jiahao Xu, Qiuzhi Liu, Xingyu Chen, et al. 2025. Deepmath-103k: A large-scale, challenging, decontaminated, and verifiable mathematical dataset for advancing reasoning. arXiv preprint.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, et al. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint.

Mehran Kazemi, Bahare Fatemi, Hritik Bansal, John Palowitch, Chrysovalantis Anastasiou, et al. 2025. Big-bench extra hard. arXiv preprint.

Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, and Deepak Ramachandran. 2023. Boardgameqa: A dataset for natural language reasoning with
contradictory information. arXiv preprint.

Muhammad Khalifa, Rishabh Agarwal, Lajanugen Logeswaran, Jaekyeom Kim, Hao Peng, et al. 2025. Process reward models that think. arXiv preprint.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, et al. 2025. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, et al. 2023. Efficient memory management for large language model serving with pagedattention. arXiv preprint.

Hynek Kydlíček. 2025. Math-verify: Math verification library.

Nathan Lambert, Valentina Pyatkin, Jacob Morrison, L. J. Miranda, Bill Yuchen Lin, et al. 2024. Rewardbench: Evaluating reward models for language modeling. arXiv preprint.

Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, et al. 2024. Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, et al. 2023. Let's verify step by step. arXiv preprint.

Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. arXiv preprint.

Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, and Juanzi Li. 2024. Rm-bench: Benchmarking reward models of language models with subtlety and style. arXiv preprint.

Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, et al. 2025. Inference-time scaling for generalist reward modeling. arXiv preprint.

Meta AI. 2025. The llama 4 herd: The beginning of a new era of natively multimodal ai innovation. https://ai.meta.com/blog/llama-4-multimodal-intelligence/.

Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2021. A diverse corpus for evaluating and developing english math word problem solvers. arXiv preprint.

Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. 2024.
Orca-math: Unlocking the potential of slms in grade school math. arXiv preprint.

Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, et al. 2022. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint.

OpenAI. 2024. Introducing openai o1. https://openai.com/index/introducing-openai-o1-preview/.

Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint.

Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, et al. 2024. Logicbench: Towards systematic evaluation of logical reasoning ability of large language models. arXiv preprint.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? arXiv preprint.

Nisarg Patel, Mohith Kulkarni, Mihir Parmar, Aashna Budhiraja, Mutsumi Nakamura, Neeraj Varshney, and Chitta Baral. 2024. Multi-logieval: Towards evaluating multi-step logical reasoning ability of large language models. arXiv preprint.

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, et al. 2025. Qwen2.5 technical report. arXiv preprint.

Qwen Team. 2024. Qwq: Reflect deeply on the boundaries of the unknown. https://qwenlm.github.io/blog/qwq-32b-preview/.

David Rein, Betty Li Hou, Asa
Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, et al. 2023. Gpqa: A graduate-level google-proof q&a benchmark. arXiv preprint.

Abulhair Saparov and He He. 2023. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. arXiv preprint.

David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. arXiv preprint.

ByteDance Seed, Jiaze Chen, Tiantian Fan, Xin Liu, Lingjun Liu, et al. 2025. Seed1.5-thinking: Advancing superb reasoning models with reinforcement learning. arXiv preprint.

Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, et al. 2024. Rewarding progress: Scaling automated process verifiers for llm reasoning. arXiv preprint.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, et al. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint.

Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. arXiv preprint.

Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, et al. 2022. Learning to summarize from human feedback. arXiv preprint.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint.

Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark. 2021. Proofwriter: Generating implications, proofs, and abductive statements over natural language. arXiv preprint.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, et al. 2023. Self-consistency improves chain of thought reasoning in language models. arXiv preprint.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, et al. 2024. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, et al. 2023. Chain-of-thought prompting elicits reasoning in large language models. arXiv preprint.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint.

Xianjie Wu, Jian Yang, Linzheng Chai, Ge Zhang, Jiaheng Liu, et al. 2025. Tablebench: A comprehensive and complex benchmark for table question answering. arXiv preprint.

Wei Xiong, Jiarui Yao, Yuhui Xu, Bo Pang, Lei Wang, et al. 2025. A minimalist approach to llm reasoning: From rejection sampling to reinforce. arXiv preprint.

Fangzhi Xu, Qika Lin, Jiawei Han, Tianzhe Zhao, Jun Liu, and Erik Cambria. 2025a. Are large language models really good logical reasoners? a comprehensive evaluation and beyond. IEEE Transactions on Knowledge and Data Engineering, 37(4):1620–1634.

Yuhui Xu, Hanze Dong, Lei Wang, Caiming Xiong, and Junnan Li. 2025b. Reward models identify consistency, not causality. arXiv preprint.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, et al. 2025. Qwen3 technical report. arXiv preprint.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, et al. 2024. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint.

Nathan Young, Qiming Bao, Joshua Bensemann, and Michael Witbrock.