benchmarks, using Pass@1 (P@1) and Maj@16 (M@16) as evaluation metrics. Additionally, we compare against two open-source models: DeepSeek-R1-Distill-Qwen-7B [12], which is trained on an unreleased set of 800k instructions, and OpenR1-Qwen-7B [9], which uses 94k instructions from the OpenR1-Math dataset. We evaluate four baseline selection strategies: Random, which samples instructions uniformly from the pool; Diverse, which performs clustering and balanced sampling based on category metadata from OpenR1-Math; Longest, which selects instructions with the longest reasoning traces; and Difficult, which chooses top-ranked samples according to a difficulty quantifier. SELECT2REASON outperforms all baselines on most datasets, consistently achieving higher P@1 scores on competition-level benchmarks and matching the strongest baselines on the Maj@16 metric. Furthermore, the model trained on the subset filtered by SELECT2REASON surpasses both Full-pool models and open-source models on MATH-500. Notably, our method also maintains a strong lead over Full-pool training on nearly all comprehensive math benchmarks, with only a slight performance drop against the Full-pool on OlympiadBench. This may reflect a limitation in the generalization ability of Full-pool training, whereas fine-tuning with a smaller, high-quality subset yields superior performance on Chinese benchmarks.

Figure 7: Average Pass@1 when adjusting the weighting factor of the joint ranker in SELECT2REASON (left: competition-level benchmarks; right: comprehensive benchmarks).

Performance of SELECT2REASON under different hyperparameter settings.
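A minimal sketch of the two evaluation metrics, assuming each problem is answered by several sampled completions whose extracted final answers are compared against the reference: Pass@1 is the fraction of correct samples, and Maj@16 scores a problem 1 or 0 by majority vote over 16 samples. The function names and toy data below are illustrative, not from the paper.

```python
from collections import Counter

def pass_at_1(answers, reference):
    """Fraction of sampled final answers that match the reference (Pass@1)."""
    return sum(a == reference for a in answers) / len(answers)

def maj_at_k(answers, reference, k=16):
    """Majority vote over the first k samples; 1.0 if the vote is correct."""
    vote, _ = Counter(answers[:k]).most_common(1)[0]
    return float(vote == reference)

# Toy example: 16 sampled final answers for one problem.
samples = ["336"] * 10 + ["312"] * 6
print(pass_at_1(samples, "336"))  # 0.625
print(maj_at_k(samples, "336"))   # 1.0
```

Maj@16 can exceed Pass@1 when errors are scattered across many distinct wrong answers, which is why the two metrics are reported side by side.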
https://arxiv.org/abs/2505.17266v2

Figure 6 presents a statistical analysis of performance variation for both the baselines and SELECT2REASON across different subset sizes (2%, 5%, and 10%) on AIME 24, AIME 25, and AMC 23, using Pass@1 as the evaluation metric. Across all sizes, SELECT2REASON consistently maintains a leading advantage. Moreover, as the subset size increases, the performance of SELECT2REASON generally improves in a stable manner, with the exception of a slight drop on AMC 23 when fine-tuned on the 10% subset compared to the 5% subset. Another critical hyperparameter affecting SELECT2REASON's performance is the weighting factor w ∈ [0,1] used in the joint ranker. Figure 7 shows the results of this weight sensitivity analysis. When w = 0, the joint ranker degenerates to the length-based ranker; when w = 1, it becomes equivalent to the difficulty-based ranker. Intermediate values of w are set to 0.25, 0.5, and 0.75. The best performance is achieved at w = 0.25, where the model fine-tuned on the top 10% subset selected by SELECT2REASON reaches its highest accuracy, as reported in Table 1. This indicates that the joint ranker strikes an effective balance between the two ranking metrics.

SELECT2REASON improves long-CoT reasoning efficiency by sampling high-quality data. The relationship between performance and output tokens on AIME 25 is illustrated in Figure 8. In contrast to the increasing response lengths observed when conducting pure RL on pre-trained models [12], SFT exhibits a different distribution: models with stronger performance tend to generate shorter outputs. This suggests that when long-CoT reasoning is effectively activated via SFT, models can produce more efficient exploratory solutions. For example, DeepSeek-R1-Distill-Qwen-7B achieves the best performance with fewer than 10k tokens. Conversely, using a small fraction of randomly sampled data leads the model to acquire only shallow reasoning patterns, resulting in repetitive outputs and failure to backtrack.

Figure 8: Relationship between performance on AIME 25 and output length.

Figure 9: Statistics of rethinking tokens on AIME 25 across different data proportions.

Further statistical analysis is presented in Figure 9, which shows the frequency of rethinking tokens used by fine-tuned models on AIME 25. The model trained on subsets selected by SELECT2REASON consistently uses fewer rethinking tokens across all data sizes, supporting our hypothesis that it enables more efficient reasoning. In contrast, vanilla selection methods lead to noisier reasoning behaviors due to lower data quality, resulting in a higher proportion of ineffective rethinking. A case study is provided in Figure 10, where an LRM fine-tuned on limited and low-quality instructions attempts to use a large number of rethinking tokens during inference but exhibits limited effective reflection. In comparison, a model trained on higher-quality instructions corrects its reasoning path and reaches the correct solution with fewer rethinking steps.

Key Finding: Despite being trained on selected long responses with numerous rethinking tokens, the model produces shorter responses with fewer rethinking tokens. The trained model can answer correctly without thinking overhead.
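The weighted joint ranker analyzed above (w = 0 reduces to the length-based ranker, w = 1 to the difficulty-based ranker) can be sketched as follows. The field names `difficulty` and `length`, the rank normalization, and the top-fraction cutoff are illustrative assumptions; the paper's exact scoring formula may differ.

```python
def joint_rank(pool, w=0.25, top_frac=0.10):
    """Select a top fraction of instructions by a weighted blend of two signals.

    pool: list of dicts with illustrative fields 'difficulty' (score from a
    difficulty quantifier) and 'length' (reasoning-trace token count).
    w = 0 degenerates to the length-based ranker; w = 1 to the
    difficulty-based ranker.
    """
    n = len(pool)
    # Convert each raw signal into a normalized rank in [0, 1] (1 = best),
    # so the two signals are on a comparable scale before blending.
    by_len = sorted(range(n), key=lambda i: pool[i]["length"])
    by_dif = sorted(range(n), key=lambda i: pool[i]["difficulty"])
    len_rank = {i: r / (n - 1) for r, i in enumerate(by_len)}
    dif_rank = {i: r / (n - 1) for r, i in enumerate(by_dif)}
    # Weighted joint score, then keep the top fraction of the pool.
    score = {i: w * dif_rank[i] + (1 - w) * len_rank[i] for i in range(n)}
    top = sorted(range(n), key=score.get, reverse=True)[: max(1, int(n * top_frac))]
    return [pool[i] for i in top]
```

Rank normalization (rather than blending raw scores) is one simple way to make a token count and a bounded difficulty score commensurable; a z-score or min-max normalization would work similarly.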
Table 2: Comparison between SELECT2REASON and baselines on the Chinese-DeepSeek-R1-Distill-data pool, evaluated across nine benchmarks using Pass@1 (P@1) and Maj@16 (M@16) as metrics. Target model: Qwen2.5-Math-7B.

Method          Data   AIME 24        AIME 25        AMC 23         MATH   Olympiad  Kaoyan  GK 23   GK-Math  GK 24
                Size   P@1    M@16    P@1    M@16    P@1    M@16    P@1    P@1       P@1     P@1     P@1      P@1
Base Model      -      0.090  0.133   0.106  0.200   0.558  0.650   0.842  0.394     0.472   0.649   0.781    0.637
Full-Pool       110k   0.181  0.267   0.158  0.300   0.633  0.800   0.798  0.367     0.412   0.668   0.718    0.626
Random          10%    0.181  0.200   0.140  0.167   0.620  0.725   0.840  0.431     0.372   0.701   0.775    0.626
Diverse         10%    0.176  0.233   0.144  0.233   0.618  0.750   0.822  0.416     0.377   0.692   0.770    0.641
Longest         10%    0.221  0.367   0.173  0.233   0.656  0.850   0.846  0.459     0.457   0.688   0.821    0.703
Difficult       10%    0.258  0.400   0.194  0.267   0.627  0.800   0.848  0.412     0.462   0.691   0.795    0.681
Select2Reason   10%    0.242  0.400   0.206  0.367   0.689  0.825   0.860  0.450     0.462   0.699   0.840    0.703

SELECT2REASON demonstrates strong generalization capabilities by enabling low-cost transfer to other long-CoT reasoning instruction pools. We evaluate it on Chinese-DeepSeek-R1-Distill-data, an open-source dataset containing 110k Chinese instructions spanning mathematics, STEM, and general domains, with long-CoT responses generated by DeepSeek-R1. To assess the generalizability of SELECT2REASON, we directly apply the joint ranker trained on OpenR1-Math-220k for data selection. Results in Table 2 show that fine-tuning Qwen2.5-Math-7B-Instruct on only the top 10% subset selected by SELECT2REASON outperforms the Random, Diverse, Longest, and Difficult baselines. Notably, since this data pool contains a large proportion of generic, non-reasoning instructions, we conclude that this dilutes the model's ability to acquire strong reasoning capabilities. Case studies of joint ranking are
presented in Appendix A.3, further underscoring the necessity of our selection method. The difficulty quantifier trained on OpenR1-Math-220k assigns a high score to a competition-level Chinese math problem, and the corresponding reasoning trace is also relatively long, leading the joint ranker to give it a high ranking. In contrast, simpler math problems receive medium rankings, while general-knowledge questions are ranked lowest, which aligns with expectations. Despite not being trained on this specific instruction pool, the question-response joint ranker still successfully identifies high-quality reasoning instructions, demonstrating the notable generalizability of SELECT2REASON.

7 Conclusion

In summary, while recent LRMs exhibit remarkable long-CoT reasoning abilities, effective instruction selection remains an underexplored challenge. Our study identifies reasoning trace length and problem difficulty as strong, quantifiable heuristics for high-quality data selection. Building on these insights, we introduce SELECT2REASON, a novel and efficient instruction-tuning data selection framework for long-CoT reasoning. Extensive empirical validation demonstrates that models trained on our selected subsets achieve superior reasoning performance using significantly less data, paving the way for cost-effective and high-quality instruction tuning in long-CoT tasks.

Limitations

Despite the promising results of SELECT2REASON in enhancing long-CoT reasoning capabilities, several limitations remain. First, due to constraints in computational resources and training costs, our experiments are primarily conducted on medium-scale models, and the scalability of our method to larger models remains to be explored. Second, the current study relies on existing instruction datasets, while automated instruction evolution strategies to improve data quality are yet to be developed.
Finally, although our analysis reveals correlations between reasoning trace length, problem difficulty, and rethinking behaviors, the interpretability of how long-CoT capabilities are activated and how reflective reasoning emerges during SFT remains an open question for future work.

Broader Impact

Our work aims to improve the efficiency of instruction tuning for long-CoT reasoning by selecting high-quality data subsets. This approach significantly reduces the need for large-scale supervised fine-tuning on massive instruction datasets, thereby lowering computational cost, energy consumption, and dependency on high-end hardware. By enabling stronger performance using only a fraction of the training data, SELECT2REASON contributes to the development of more sustainable and environmentally friendly AI systems.

References

[1] Bukharin, A. and Zhao, T. Data diversity matters for robust instruction tuning. arXiv preprint arXiv:2311.14736, 2023.
[2] Cao, Y., Kang, Y., Wang, C., and Sun, L. Instruction mining: Instruction data selection for tuning large language models. arXiv preprint arXiv:2307.06290, 2023.
[3] Chen, L., Li, S., Yan, J., Wang, H., Gunaratna, K., Yadav, V., Tang, Z., Srinivasan, V., Zhou, T., Huang, H., et al. Alpagasus: Training a better alpaca with fewer data. arXiv preprint arXiv:2307.08701, 2023.
[4] Chen, Q., Qin, L., Liu, J., Peng, D., Guan, J., Wang, P., Hu, M., Zhou, Y., Gao, T., and Che, W. Towards reasoning era: A survey of long chain-of-thought for reasoning large language models. arXiv preprint arXiv:2503.09567, 2025.
[5] Chen, Y., Li, Y., Hu, K., Ma, Z., Ye, H., and Chen, K. Mig: Automatic data selection for instruction tuning by maximizing information gain in semantic space. arXiv
preprint arXiv:2504.13835, 2025.
[6] Dao, T. Flashattention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691, 2023.
[7] DeepMind, G. Gemini thinking - use thinking models, 2025. URL https://ai.google.dev/gemini-api/docs/thinking#python. Accessed: 2025-05-08.
[8] Du, Q., Zong, C., and Zhang, J. Mods: Model-oriented data selection for instruction tuning. arXiv preprint arXiv:2311.15653, 2023.
[9] Face, H. Open r1: A fully open reproduction of deepseek-r1, January 2025. URL https://github.com/huggingface/open-r1.
[10] Ge, Y., Liu, Y., Hu, C., Meng, W., Tao, S., Zhao, X., Ma, H., Zhang, L., Chen, B., Yang, H., et al. Clustering and ranking: Diversity-preserved instruction selection through expert-aligned quality estimation. arXiv preprint arXiv:2402.18191, 2024.
[11] Gu, J., Jiang, X., Shi, Z., Tan, H., Zhai, X., Xu, C., Li, W., Shen, Y., Ma, S., Liu, H., et al. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594, 2024.
[12] Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[13] He, C., Luo, R., Bai, Y., Hu, S., Thai, Z. L., Shen, J., Hu, J., Han, X., Huang, Y., Zhang, Y., et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
[14] Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.
[15] Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.
[16] Huang, J. and Chang, K. C.-C. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022.
[17] Jain, N., Han, K., Gu, A., Li, W.-D., Yan, F., Zhang, T., Wang, S., Solar-Lezama, A., Sen, K., and Stoica, I. Livecodebench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.
[18] Kim, S., Joo, S. J., Kim, D., Jang, J., Ye, S., Shin, J., and Seo, M. The cot collection: Improving zero-shot and few-shot learning of language models via chain-of-thought fine-tuning. arXiv preprint arXiv:2305.14045, 2023.
[19] Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J. E., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
[20] Li, J., Beeching, E., Tunstall, L., Lipkin, B., Soletskyi, R., Huang, S., Rasul, K., Yu, L., Jiang, A. Q., Shen, Z., et al. Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions. Hugging Face repository, 13:9, 2024.
[21] Li, M., Zhang, Y., Li, Z., Chen, J., Chen, L., Cheng, N., Wang, J., Zhou, T., and Xiao, J. From quantity to quality: Boosting llm performance with self-guided
data selection for instruction tuning. arXiv preprint arXiv:2308.12032, 2023.
[22] Li, M., Zhang, Y., He, S., Li, Z., Zhao, H., Wang, J., Cheng, N., and Zhou, T. Superfiltering: Weak-to-strong data filtering for fast instruction-tuning. arXiv preprint arXiv:2402.00530, 2024.
[23] Li, X., Zou, H., and Liu, P. Limr: Less is more for rl scaling. arXiv preprint arXiv:2502.11886, 2025.
[24] Li, Y., Hui, B., Xia, X., Yang, J., Yang, M., Zhang, L., Si, S., Chen, L.-H., Liu, J., Liu, T., et al. One-shot learning as instruction data prospector for large language models. arXiv preprint arXiv:2312.10302, 2023.
[25] Li, Y., Yang, L., Shen, W., Zhou, P., Wan, Y., Lin, W., and Chen, D. Crowdselect: Synthetic instruction data selection with multi-llm wisdom. arXiv preprint arXiv:2503.01836, 2025.
[26] Li, Z.-Z., Zhang, D., Zhang, M.-L., Zhang, J., Liu, Z., Yao, Y., Xu, H., Zheng, J., Wang, P.-J., Chen, X., et al. From system 1 to system 2: A survey of reasoning large language models. arXiv preprint arXiv:2502.17419, 2025.
[27] Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.
[28] Liu, C., Wang, Z., Shen, S., Peng, J., Zhang, X., Du, Z., and Wang, Y. The chinese dataset distilled from deepseek-r1-671b, 2025. URL https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k.
[29] Liu, H., Teng, Z., Cui, L., Zhang, C., Zhou, Q., and Zhang, Y. Logicot: Logical chain-of-thought instruction-tuning. arXiv preprint arXiv:2305.12147, 2023.
[30] Liu, L., Liu, X., Wong, D. F., Li, D., Wang, Z., Hu, B., and Zhang, M. Selectit: Selective instruction tuning for llms via uncertainty-aware self-reflection. Advances in Neural Information Processing Systems, 37:97800-97825, 2024.
[31] Liu, W., Zeng, W., He, K., Jiang, Y., and He, J. What makes good data for alignment? a comprehensive study of automatic data selection in instruction tuning. arXiv preprint arXiv:2312.15685, 2023.
[32] Lu, K., Yuan, H., Yuan, Z., Lin, R., Lin, J., Tan, C., Zhou, C., and Zhou, J. #InsTag: Instruction tagging for analyzing supervised fine-tuning of large language models. arXiv preprint arXiv:2308.07074, 2023.
[33] Mekala, D., Nguyen, A., and Shang, J. Smaller language models are capable of selecting instruction-tuning training data for larger language models. arXiv preprint arXiv:2402.10430, 2024.
[34] Muennighoff, N., Yang, Z., Shi, W., Li, X. L., Fei-Fei, L., Hajishirzi, H., Zettlemoyer, L., Liang, P., Candès, E., and Hashimoto, T. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
[35] OpenAI. Learning to reason with llms, 2024. URL https://openai.com/index/learning-to-reason-with-llms/. Accessed: 2025-04-24.
[36] Pan, X., Huang, L., Kang, L., Liu, Z., Lu, Y., and Cheng, S. G-dig: Towards gradient-based diverse and high-quality instruction data selection for machine translation. arXiv preprint arXiv:2405.12915, 2024.
[37] Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025. URL https://qwenlm.github.io/blog/qwq-32b/. Accessed: 2025-04-24.
[38] Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. Zero: Memory optimizations toward training trillion parameter models. In
SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16. IEEE, 2020.
[39] Rein, D., Hou, B. L., Stickland, A. C., Petty, J., Pang, R. Y., Dirani, J., Michael, J., and Bowman, S. R. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.
[40] Roziere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X. E., Adi, Y., Liu, J., Sauvestre, R., Remez, T., et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
[41] Shao, Z., Wang, P., Zhu, Q., Xu, R., Song, J., Bi, X., Zhang, H., Zhang, M., Li, Y., Wu, Y., et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[42] Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.
[43] Song, J., Liu, S., Zhu, B., and Rao, Y. Iterselecttune: An iterative training framework for efficient instruction-tuning data selection. arXiv preprint arXiv:2410.13464, 2024.
[44] Su, J., Ahmed, M., Lu, Y., Pan, S., Bo, W., and Liu, Y. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.
[45] Team, K., Du, A., Gao, B., Xing, B., Jiang, C., Chen, C., Li, C., Xiao, C., Du, C., Liao, C., et al. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025.
[46] Team, O. T. Open Thoughts, January 2025.
[47] Wang, P., Li, L., Shao, Z., Xu, R., Dai, D., Li, Y., Chen, D., Wu, Y., and Sui, Z. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023.
[48] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[49] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022.
[50] Xia, M., Malladi, S., Gururangan, S., Arora, S., and Chen, D. Less: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333, 2024.
[51] Xie, T., Gao, Z., Ren, Q., Luo, H., Hong, Y., Dai, B., Zhou, J., Qiu, K., Wu, Z., and Luo, C. Logic-rl: Unleashing llm reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768, 2025.
[52] Yang, A., Zhang, B., Hui, B., Gao, B., Yu, B., Li, C., Liu, D., Tu, J., Zhou, J., Lin, J., et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.
[53] Yang, X., Nie, S., Liu, L., Gururangan, S., Karn, U., Hou, R., Khabsa, M., and Mao, Y. Diversity-driven data selection for language model tuning through sparse autoencoder. arXiv preprint arXiv:2502.14050, 2025.
[54] Yang, Y., Mishra, S., Chiang, J., and Mirzasoleiman, B. Smalltolarge (s2l): Scalable data
selection for fine-tuning large language models by summarizing training trajectories of small models. Advances in Neural Information Processing Systems, 37:83465-83496, 2024.
[55] Ye, Y., Huang, Z., Xiao, Y., Chern, E., Xia, S., and Liu, P. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025.
[56] Yeo, E., Tong, Y., Niu, M., Neubig, G., and Yue, X. Demystifying long chain-of-thought reasoning in llms. arXiv preprint arXiv:2502.03373, 2025.
[57] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
[58] Yuan, Z., Yuan, H., Li, C., Dong, G., Lu, K., Tan, C., Zhou, C., and Zhou, J. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.
[59] Zeng, W., Huang, Y., Liu, Q., Liu, W., He, K., Ma, Z., and He, J. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025.
[60] Zhang, J., Qin, Y., Pi, R., Zhang, W., Pan, R., and Zhang, T. Tagcos: Task-agnostic gradient clustered coreset selection for instruction tuning data. arXiv preprint arXiv:2407.15235, 2024.
[61] Zhang, Q., Zhang, Y., Wang, H., and Zhao, J. Recost: External knowledge guided data-efficient instruction tuning. arXiv preprint arXiv:2402.17355, 2024.
[62] Zhang, Y., Luo, Y., Yuan, Y., and Yao, A. C. Autonomous data selection with language models for mathematical texts. In ICLR 2024 Workshop on Navigating and Addressing Data Problems for Foundation Models, 2024.
[63] Zhao, H., Andriushchenko, M., Croce, F., and Flammarion, N. Long is more for alignment: A simple but tough-to-beat baseline for instruction fine-tuning. arXiv preprint arXiv:2402.04833, 2024.
[64] Zheng, Y., Zhang, R., Zhang, J., Ye, Y., Luo, Z., Feng, Z., and Ma, Y.
Llamafactory: Unified efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372, 2024.
[65] Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L., et al. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36:55006-55021, 2023.

A Appendix

A.1 Case Study of Efficient Exploration in Reasoning

Question: Suppose $\triangle ABC$ has angles $\angle BAC = 84^\circ$, $\angle ABC = 60^\circ$, and $\angle ACB = 36^\circ$. Let $D$, $E$, and $F$ be the midpoints of sides $\overline{BC}$, $\overline{AC}$, and $\overline{AB}$, respectively. The circumcircle of $\triangle DEF$ intersects $\overline{BD}$, $\overline{AE}$, and $\overline{AF}$ at points $G$, $H$, and $J$, respectively. The points $G$, $D$, $E$, $H$, $J$, and $F$ divide the circumcircle of $\triangle DEF$ into six minor arcs, as shown. Find $\widehat{DE} + 2 \cdot \widehat{HJ} +
3 \cdot \widehat{FG}$, where the arcs are measured in degrees.

Long-CoT Reasoning with Low Efficiency:
<think> Okay, so I need to find the sum of some arcs on the circumcircle of triangle DEF. Let me start by understanding the problem step by step. ... Alternatively, consider that 84° + 60° + 36° = 180°, but the combination DE + 2*HJ + 3*FG equals 312°, which is 84°*4 - 12°, not helpful. Alternatively, maybe the answer is 3*(84° + 60° + 36°) - 120°, but that's 432 - 120 = 312, again not helpful. Alternatively, perhaps the answer is 2*(180°) - 180° + 36°, which is 36°, but that doesn't align. ... Alternatively, maybe I made a miscalculation. Let's check: DE = 84°, HJ = 60°, FG = 36°. DE + 2*HJ + 3*FG = 84 + 2*60 + 3*36 = 84 + 120 + 108 = 312°. Yes, that's correct. However, 312° is equal to 360° - 48°, which might relate to the orthic triangle or something else, but I don't see the connection. Alternatively, maybe the answer is 180°, considering that in some configurations the sum of arcs can be 180°, but I can't see how. ... Given that, after careful consideration and multiple approaches, I think the answer is 312°, but since this is a competition problem, it's likely that the answer is a nice number related to the original angles. However, given the combination 84 + 2*60 + 3*36, which yields 312, I think that's the correct answer. </think>

Long-CoT Reasoning with High Efficiency:
<think> Okay, so I'm trying to solve this geometry problem about triangle ABC with given angles, midpoints, and intersections on its circumcircle. ... Alternatively, perhaps recognizing that in triangle ABC, the centroid G divides the median in ratio 2:1, so the power of point G with respect to the nine-point circle might be useful. Alternatively, perhaps using spiral similarity. But perhaps stepping back, given the approximate measures, the total is around 334.5°, which is close to 360°. But the problem expects an exact value, likely an integer degree measure.
Given that the initial angles are multiples of 6°, the answer is likely a multiple of 6°, perhaps 360° minus something. But wait, 334.5° is approximately 360° - 25.5°, which is not particularly meaningful. Alternatively, maybe the exact values are 72°, 24°, 72°, leading to 72 + 48 + 216 = 336°, which is close to our approximate 334.5°, but not exact. Alternatively, perhaps there's a miscalculation in the approximate measures. Wait, let me re-examine the central angles. For arc DE, between D and E: vectors v_D = (0.2498, 0.3445), v_E = (-0.2502, 0.3445). ... If that's the case, then: measure(arc DE) = 2*angle at C = 72°, measure(arc HJ) = angle at B - angle at C = 24°, measure(arc FG) = 2*angle at C = 72°. Thus, the expression measure(arc DE) + 2*measure(arc HJ) + 3*measure(arc FG) = 72° + 2*24° + 3*72° = 72 + 48 + 216 = 336°. So, 336°, which is 360° - 24°, which is the angle at B. So, maybe that's the
exact value. Thus, boxed, it's \boxed{336}. </think>

Figure 10: A case study of efficient exploration in reasoning on AIME 25. The emergence of more rethinking tokens in long-CoT reasoning does not lead to improved efficiency. In contrast, stronger LRMs reach the correct reasoning trace more efficiently.

A.2 Diversity of Instruction

Figure 11: Performance across three expert-level benchmarks (AIME 24, AIME 25, AMC 23; Pass@1 and Maj@16). Subset size refers to the proportion selected from the data pool by reasoning trace length, either diverse or random.

A.3 Case Study of Data Selection by the Joint Ranker in Chinese-DeepSeek-R1-Distill-data

Instruction (Difficulty Score: 0.7549):
已知数列 $\{a_n\}, \{b_n\}$ 均为各项都不相等的数列, $S_n$ 为 $\{a_n\}$ 的前 $n$ 项和, $a_{n+1} b_n = S_n + 1 (n \in N^{*})$. (1) 若 $a_1 = 1, b_n = \frac{n}{2}$, 求 $a_4$ 的值; (2) ... (3) ...
Given two sequences $\{a_n\}$ and $\{b_n\}$ where all terms are distinct, and $S_n$ is the sum of the first $n$ terms of $\{a_n\}$, with the relationship $a_{n+1} b_n = S_n + 1$ for all $n \in \mathbb{N}^*$. (1) If $a_1 = 1$ and $b_n = \frac{n}{2}$, find the value of $a_4$; (2) ... (3) ...

Response (Length of Reasoning Trace: 6427):
<think> 好，我现在要解决这三个题目，先从第一题开始吧。 Alright, I'll start solving these three problems, beginning with the first one. ... n=3 时, S_3 = 1+4+6 = 11, 右边 11+1=12, 左边 a_4*(3/2)=12 → a_4=8, 正确。没问题。所以 (1) 的结果是 a_4=8。接下来是题目 (2)。 ... When n=3, $S_3 = 1 + 4 + 6 = 11$, the right-hand side is $11 + 1 = 12$, the left-hand side is $a_4 \cdot \frac{3}{2} = 12 \Rightarrow a_4 = 8$, correct.
So the answer to (1) is $a_4 = 8$. Next, problem (2)... ... 不过需要注意首项是否与 n 有关，显然这里的表达式对于每个 n 来说，除了 q^n 外，都是常数，所以确实构成等比数列。因此，题目 (2) 得证。 ... Note that the first term's dependency on $n$ needs attention. Clearly, for each $n$, aside from $q^n$, the rest is constant, so the sequence is geometric. Thus, problem (2) is proven. ... 然而，这样的方法可能需要较多的计算，或许有更简洁的方法。另一种思路是利用数列的递推关系。 ... However, this method involves a lot of computation; perhaps a simpler way exists. Another approach is to use the recurrence relation. ... 综上，三个问题的解答如上: (1) a_4=8; (2) 证明 $\{b_n + \frac{1}{1-q}\}$ 是公比为 1/q 的等比数列; (3) 充要条件为 d=1/2。 In summary, the solutions to the three problems are as follows: (1) $a_4 = 8$; (2) prove that $\{b_n + \frac{1}{1-q}\}$ is a geometric sequence with common ratio $1/q$; (3) the necessary and sufficient condition is $d = \frac{1}{2}$. </think> (1) 由已知条件 $a_{n+1} b_n = S_n + 1$, ... 故充要条件为 $\boxed{d = \frac{1}{2}}$。 (1) From the given condition $a_{n+1} b_n = S_n + 1$, ... thus the necessary and sufficient condition is $\boxed{d = \frac{1}{2}}$.

Instruction (Difficulty Score: 0.1824):
Nina 想用她的零用钱买一个新的视频游戏。游戏的价格是 50 美元。Nina 还了解到有 10% 的销售税。她每周收到 10 美元的零用钱，并认为她可以存一半。Nina 需要多少周才能用她的储蓄买到新的视频游戏?
Nina wants to buy a new video game with her allowance. The game costs \$50. She also found out there's a 10% sales tax. She receives \$10 per week and plans to save half of it. How many weeks will it take
for Nina to save enough to buy the game?

Response (Length of Reasoning Trace: 340):
<think> 嗯，好的，我现在得帮 Nina 算算她需要存多少周才能买到那个视频游戏。 Alright, let's calculate how many weeks Nina needs to save to buy that video game. ... 接下来，她每周存多少钱。零用钱是 10 美元，存一半的话，就是 10 除以 2，等于 5 美元每周。她每周存 5 美元，总需要的钱是 55 美元，所以需要多少周呢? ... Then, how much does she save weekly? She gets \$10 and saves half, so \$10 ÷ 2 = \$5 per week. The total needed is \$55; how many weeks is that? ... 不过我得再检查一下步骤有没有错。首先计算总成本是否正确。 ... But I should double-check the steps. First, let's verify that the total cost is calculated correctly. </think> Nina 需要计算购买含税后的游戏总成本，并确定需要多少周的储蓄才能达到这个金额。 ... 最终，Nina 需要 $\boxed{11}$ 周的储蓄才能购买游戏。 Nina needs to calculate the total cost including tax and determine how many weeks of savings are required to reach that amount. ... In the end, Nina needs $\boxed{11}$ weeks of saving to buy the game.

Instruction (Difficulty Score: 0.0420):
绿豆稀饭的颜色是怎样的? What is the color of mung bean porridge?

Response (Length of Reasoning Trace: 501):
<think> 嗯，用户问的是绿豆稀饭的颜色是怎样的。首先，我需要确认自己对绿豆稀饭的了解。 The user is asking what color mung bean porridge is. First, I need to confirm my understanding of it. ... 还有可能加入其他食材，比如冰糖或百合，但用户的问题没有提到这些，所以暂时不考虑。 ... It may also include other ingredients like rock sugar or lily bulbs, but since the user didn't mention them, we'll set them aside for now. ... 总结一下，绿豆稀饭的颜色通常是浅黄绿色或浅褐色，具体取决于绿豆的烹饪程度、水质和烹饪方法。 ... To summarize, the color of mung bean porridge is usually light yellow-green or light brown, depending on how it's cooked, the water quality, and the method used. </think> 绿豆稀饭的颜色通常呈现为**浅黄绿色或浅褐色**，具体取决于以下几个因素: ... 若追求颜色鲜绿，可缩短熬煮时间或调节水的酸碱度。 The color of mung bean porridge is typically **light yellow-green or light brown**, depending on several factors: ... 4. **Additives**: Sugar or dried fruits can affect the final color. To retain a green hue, shorten cooking time or slightly acidify the water.

Figure 12: Case study of data selection by the joint ranker in Chinese-DeepSeek-R1-Distill-data.
arXiv:2505.17267v1 [cs.CL] 22 May 2025

GREEK BARBENCH: A Challenging Benchmark for Free-Text Legal Reasoning and Citations

Odysseas S. Chlapanis 1,2, Dimitrios Galanis 3, Nikolaos Aletras 4, Ion Androutsopoulos 1,2
1 Department of Informatics, Athens University of Economics and Business, Greece
2 Archimedes, Athena Research Center, Greece
3 Athena Research Center, Greece
4 University of Sheffield, United Kingdom

Abstract

We introduce GREEK BARBENCH, a benchmark that evaluates LLMs on legal questions across five different legal areas from the Greek Bar exams, requiring citations to statutory articles and case facts. To tackle the challenges of free-text evaluation, we propose a three-dimensional scoring system combined with an LLM-as-a-judge approach. We also develop a meta-evaluation benchmark to assess the correlation between LLM-judges and human expert evaluations, revealing that simple, span-based rubrics improve their alignment. Our systematic evaluation of 13 proprietary and open-weight LLMs shows that even though the best models outperform average expert scores, they fall short of the 95th percentile of experts.

1 Introduction

As legal AI assistants become increasingly prevalent, the need for realistic legal LLM benchmarks has never been more imperative.^1 Most widely used legal Natural Language Processing (NLP) benchmarks (Chalkidis et al., 2022; Niklaus et al., 2023) focus on classification tasks, e.g., legal judgement prediction (Aletras et al., 2016), which have been criticized (Medvedeva and Mcbride, 2023) for being more constrained and less representative than real-world tasks. Even more recent LLM-focused legal benchmarks (Guha et al., 2023; Fei et al., 2024; Joshi et al., 2024) do not go beyond closed-form questions (e.g., multiple-choice questions), failing to capture the true complexity of legal reasoning in practice, which involves identifying, analyzing and synthesizing relevant information to reach a conclusion.
https://arxiv.org/abs/2505.17267v1

Unfortunately, most existing benchmarks with challenging legal questions and free-text responses are proprietary and thus inaccessible to the research community.^2

Facts
[1] Antonis visited his dermatologist, Ioannis, to remove facial skin tags.
[2] Ioannis's assistant, Penelope, accidentally applied pure acetic acid due to a mislabeled bottle, causing burns on Antonis's face.
[3] He needed plastic surgery costing €2,500 and is now seeking these costs plus €75,000 for moral damages.

Question
Which individuals are liable for the injury?

Relevant Legal Context
Civil Code 914: Anyone who unlawfully causes damage must compensate the victim.
Civil Code 922: An employer is liable for unlawful damages caused by their employee during work.

Ground Truth Answer
Ioannis is responsible vicariously for Penelope's actions [2] (Civil Code 922) and Penelope is directly liable for her negligence [2] (Civil Code 914). Hence, both are liable and must compensate Antonis.

Table 1: Cropped example (English translation) from GREEK BARBENCH. The answer requires multi-hop reasoning and citing legal articles and case facts. The spans corresponding to the scoring dimensions are highlighted in color: Facts (green), Cited Articles (blue) and Analysis (orange). Important spans are marked in bold and cited facts are denoted by square brackets. The complete example is presented in Appendix C.

Another challenge is that realistic benchmarks often require costly manual evaluation by legal experts, which limits scalability (Magesh et al., 2025; Martin et al., 2024). Automatic evaluation, using the LLM-as-a-judge framework (Zheng et al., 2023), is a promising alternative; however, its reliability has not been extensively assessed in legal reasoning (Bhambhoria et al., 2024; Li et al., 2025).

To address these issues, we present the GREEK BARBENCH, a benchmark that evaluates the reasoning capabilities of LLMs on challenging legal questions across five legal areas. The questions are taken from the Greek Bar exams and require open-ended answers with citations to statutory articles and case facts. In addition, we introduce an accompanying benchmark for LLM-judges, designed to measure how well their scores correlate with those of human experts. GREEK BARBENCH is the only Greek dataset for legal reasoning.

Our main contributions are the following:
• GREEK BARBENCH: a challenging legal reasoning benchmark that requires free-text answers citing case facts and statutory articles.
• GBB-JME: an accompanying dataset with human-evaluated answers from five different LLMs, to assess the quality of candidate LLM-judges in GREEK BARBENCH.
• A three-dimensional scoring system and an LLM-judge framework based on span-rubrics per dimension (Facts, Cited Articles, Analysis), which aligns well with human expert evaluation.
• A systematic evaluation of 13 frontier and open-weight LLMs on GREEK BARBENCH, using the best LLM-judge at GBB-JME. Top models surpass average expert performance, but not the 95th percentile of experts.

All resources including the two benchmarks (except for a small semi-private test set) and the prompts are publicly available.^3

2 GREEK BARBENCH (GBB)

2.1 Greek Bar Exams

Law graduates in Greece must pass the Greek Bar exam to become licensed attorneys. The exam evaluates candidates through practical legal questions across five key areas of law: Civil Law, Criminal Law, Commercial Law, Public Law, Lawyers' Code.

^1 https://www.abajournal.com/web/article/aba-tech-report-finds-that-ai-adoption-is-growing-but-some-are-hesitant
^2 https://www.vals.ai/benchmarks
Greece's legal system is statutory, meaning that laws are derived from legal code documents (statutes), rather than from judicial precedents (case law). The exams are open-book; candidates have access to legal code documents and are expected to cite statutory articles from them in their answers. The available documents include: the Civil Code and Civil Procedure Code, the Criminal Code and Criminal Procedure Code, eight Commercial Law codes, eleven Public Law codes, as well as the Lawyers' Code and the Code of Ethics for Legal Practice (see Table 3).

^3 The URL of the GitHub repository will be provided in the camera-ready version.

Benchmark      Lang  Citations  Multi-Hop  Free-Text  Judge Eval
LegalBench     en    ✗          ✓          ✗          ✗
LexEval (5.4)  zh    ✗          ✗          ✓          ✗
CaseGen        zh    ✗          ✓          ✓          ✗
OAB-Bench      por   ✗          ✓          ✓          ✓
LLeQA          fr    ✓          ✓          ✓          ✗
GBB (Ours)     el    ✓          ✓          ✓          ✓

Table 2: Comparison of legal benchmarks. GREEK BARBENCH uniquely encompasses all challenging features essential for evaluation in realistic and practical scenarios. 'Lang': language of dataset. 'Citations': legal articles. 'Multi-hop': reasoning using multiple sources. 'Free-text': open-ended responses. 'Judge Eval': manual evaluations to compare LLM-judges.

Candidate lawyers typically approach the exam by first studying the case facts to identify the relevant legal issues. They then navigate the legal
code documents to find the relevant chapter and pinpoint the exact statutory article within it to cite in support of their arguments.

2.2 Task

Each instance in GREEK BARBENCH is taken from Greek Bar exam papers. The input consists of (1) the case facts, (2) the legal question, and (3) a collection of potentially relevant chapters of statutory articles. The desired output is the correct free-text answer to a legal question, providing an analysis with citations to the case facts and the applicable statutory articles. The primary challenges are to discern significant facts, to identify the applicable articles and, finally, to analyze the outcomes of the application of the articles to answer the question (see the example in Table 1). These attributes make GREEK BARBENCH unique compared to other legal benchmarks (Table 2).

2.3 Dataset Statistics

We collect a total of 65 exam papers; 13 exam papers from each of the five aforementioned areas. The papers and suggested solutions are publicly available in a booklet in PDF format, spanning from 2015 to 2024.^4 The booklet is converted to text format and further processed (§ 2.5) to prepare the dataset. Each exam paper includes 4.7 questions on average, resulting in a total of 310 samples.

^4 The booklet is available at https://www.lawspot.gr/nomika-nea/panellinios-diagonismos-ypopsifion-dikigoron-themata-exetaseon-kai-endeiktikes-3. The authors permit its distribution for academic research only.

Law Area    Samples  Legal Codes  Total Articles  Cited Articles  Context (tokens)
Civil       71       2            3,264           286             87k
Criminal    53       2            1,253           186             58k
Commercial  58       8            4,177           159             29k
Public      71       11           2,912           118             67k
Lawyers     57       2            4,476           182             66k
Total       310      25           16,082          931             62k

Table 3: Summary of dataset statistics. 'Legal Codes' indicates the number of distinct legal code documents in each area. 'Cited Articles' is the total number of citations to legal code articles.
'Context' denotes the average token count of the relevant legal context (chapters of legal code) provided in the input of candidate LLMs.

We keep the questions from 2024 (22 in total) as a semi-private test set, to avoid data contamination.^5 The remaining 288 samples comprise the public test set. The semi-private set will be updated each year with two more exam papers from each legal area (there are two examinations per year), and made publicly available.

Answering exam questions requires citing articles from 25 legal code documents, which we collect from the same source website as the exams.^6 Detailed statistics for these documents are presented in Table 3 per legal area. Articles are cited 931 times in total, across all exam questions. The articles within each legal code document are grouped thematically into chapters. The total number of citable articles is more than 16 thousand.

2.4 Relevant Legal Context

As mentioned in Section 2.1, the Greek Bar exams are open-book, allowing candidate lawyers to navigate legal code documents to identify relevant statutory articles for the presented case. Simulating this setup presents several challenges. One approach would be implementing a standard Retrieval-Augmented Generation (RAG) pipeline, using sparse (e.g., BM25) or dense retrievers (Karpukhin
et al., 2020) to select the k most 'relevant' articles for inclusion in the LLM's input. However, this approach suffers from three significant limitations: a) candidate lawyers taking the exams do not have access to such retrieval tools, making direct comparisons with human performance problematic; b) retrievers are prone to errors, creating a substantial risk that even with large values of k, the ground truth articles might not appear among the top retrieved articles; and c) as demonstrated by Krishna et al. (2025), benchmarking RAG systems requires testing multiple configurations with varying values of k and, ideally, different retriever models, complicating fast integration of new LLMs.

Instead, we adopt a simplified yet challenging approach. For each legal case, we automatically collect all the ground truth articles cited across all questions pertaining to that case (using regular expressions), and identify the chapters containing the articles. We hypothesize that all the articles of the identified chapters have substantial relevance for all questions of the particular case. Therefore, we provide the entire text (all articles) of the identified chapters as the relevant legal context in the LLM input for every question of the particular case. Hence, for each question, the input context contains all the ground truth articles of the question, along with all the other articles from their chapters, as particularly close distractors. It also includes additional distractor articles from other chapters related to other questions of the same case, which increase the difficulty of citing the correct articles.

The total length of the legal context fed to the LLM per question is 62k tokens on average (Table 3).

^5 With 'semi-private' we mean that the test set is not public, but the raw data sources are available.
^6 www.lawspot.gr
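The context-construction procedure just described (regex extraction of cited articles, chapter lookup, chapter concatenation) can be sketched in Python. The citation pattern, identifier scheme, and data shapes below are illustrative assumptions; the paper does not release its actual regular expressions or data format:

```python
import re

# Hypothetical citation pattern; the benchmark spans 25 legal codes, so the
# real patterns are necessarily richer than this single Civil Code example.
CITATION_RE = re.compile(r"Civil Code (\d+)")

def build_legal_context(case_answers, chapter_of, chapter_text):
    """Collect every article cited in any ground-truth answer of a case,
    find the chapters containing those articles, and concatenate the full
    text of those chapters as the shared legal context for all of the
    case's questions (ground-truth articles plus close distractors)."""
    chapters = set()
    for answer in case_answers:
        for num in CITATION_RE.findall(answer):
            chapters.add(chapter_of[f"CC-{num}"])  # article id -> chapter
    return "\n\n".join(chapter_text[ch] for ch in sorted(chapters))
```

Because the chapters are pooled over all questions of a case, every question's input also contains distractor articles from chapters cited only by the case's other questions, exactly as described above.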
This makes the task manageable for recent LLMs with context windows exceeding 100k tokens, while still presenting a significant challenge, as LLMs often struggle to extract crucial information from extended contexts (Liu et al., 2024).

2.5 Fact Segmentation

To enhance the evaluation process (both manual and automatic), we require citations to facts in candidate answers, though this is not mandatory in official exams. Fact-citations help legal professionals in practical applications by enabling more efficient verification of answer accuracy. To facilitate fact-citation, we present the case facts as a numbered list of sentences (as shown in Table 1). For segmentation, we employ the Segment-Any-Text neural model (Frohmann et al., 2024).

2.6 Three-Dimensional Scoring System

The official evaluation committee of the Greek Bar Exams grades candidate answers on a scale of 1 to 10. Without explicit guidelines, they grade by comparing answers to the ground truth based on their discretion. Drawing inspiration from established legal research and evaluation practices (Clark and DeSanctis, 2013), and guided by our legal expert annotators (§ 5.2), we develop a novel three-dimensional scoring system to improve the evaluation process for the benchmark. The proposed approach assesses legal reasoning across three dimensions: the Facts, the Cited Articles, and the Analysis. Each dimension is
rated on a scale of 1 to 10, and the final score is the average of these three. This system allows explainability through detection of specific shortcomings in the reasoning abilities of LLMs. The Facts score measures understanding of case facts; the Cited Articles evaluates the accuracy and interpretation of cited legal articles; and the Analysis evaluates the ability to apply legal articles to the facts and reach conclusions. For instance, a low Facts score indicates hallucinations, a low Cited Articles score shows difficulty in identifying applicable articles, and a low Analysis score reveals weakness in legal reasoning.

3 Automatic Evaluation

To address the evaluation of free-text answers without the prohibitive cost of manual annotations, we use the LLM-as-a-judge framework (Zheng et al., 2023). LLM-judges can be categorized into two primary types: (a) pairwise LLM-judges, which evaluate two candidate answers and determine which is preferred (or declare a tie), and (b) grading LLM-judges, which assign an integer score to each individual candidate answer (Zheng et al., 2023). In our work, we focus on grading LLM-judges to allow cost-effective integration of new participant LLMs without the overhead of quadratically increasing pairwise comparisons.

To improve the alignment of LLM-judges with human expert annotators, we propose novel span-based rubrics; i.e., evaluator instructions in the form of annotated spans per question. These spans will guide the LLM-judge in what to assess in the candidate answers. However, even with these question-specific rubrics, replicating the nuanced evaluation of human experts, especially in complex tasks like legal writing, cannot be guaranteed. For this reason, we also include a framework to meta-evaluate whether LLM-judges are suitable proxies for human evaluation on GREEK BARBENCH.

3.1 Simple LLM-Judge

As an initial approach, we designed a straightforward prompt for a simple LLM-judge.
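Since the final score of § 2.6 is simply the average of the three 1-10 dimension grades, aggregating a grading judge's output is straightforward. A minimal sketch, assuming a plain "Score:" output format purely for illustration (the actual judge prompts and output formats are specified in the paper's Appendix B):

```python
import re
from statistics import mean

# Assumed output shape: one line per dimension, an explanation, then "Score: N".
SCORE_RE = re.compile(r"(Facts|Cited Articles|Analysis)[^\d]*Score:\s*(\d+)")

def final_score(judge_output: str) -> float:
    """Extract the three 1-10 dimension scores and return their average."""
    scores = {dim: int(val) for dim, val in SCORE_RE.findall(judge_output)}
    assert set(scores) == {"Facts", "Cited Articles", "Analysis"}, "missing dimension"
    return mean(scores.values())
```

Requiring an explanation before each integer score, as the prompt does, also gives the judge room to justify its grade before committing to a number.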
The prompt outlines the evaluation task and explicitly defines the criteria for the Facts, Cited Articles, and Analysis scores. All necessary contextual information is provided; the facts of the case, the specific legal question, the ground truth answer with the cited articles and the candidate answer to be evaluated. This context mirrors the information provided to the human annotators for the manual evaluations (§ 5.2). The required output format is clearly specified: the model must provide an explanation for each score, followed by the integer score. The complete prompt is presented in Appendix B (Fig. 5).

3.2 Span LLM-Judge

According to Clark and DeSanctis (2013), rubrics, i.e., instructions that break down an assignment in identifiable components, can significantly improve the consistency of legal writing evaluation. To construct rubrics for our benchmark, our legal expert annotators marked reference spans (colored text in Table 1) in the ground-truth answer for each score (Facts, Cited Articles, Analysis). These spans contain the information pertinent to their respective dimension. Each span is then annotated with important span subsets (usually a few words within the span) that are crucial for an answer to be considered correct (bold text in Table 1). Missing this crucial information should result in a
lower score. We opted not to assign specific point values or 'costs' to each important subset, unlike previous work with rubrics (Starace et al., 2025; Pires et al., 2025), in order to minimize the annotation burden. The process involves simply highlighting the three scoring dimensions using different colors and then marking the important subsets within those highlighted sections. The LLM-judge is instructed to determine whether the candidate answer covers the information in the spans and then use this assessment to evaluate each scoring dimension. The complete prompt is presented in Appendix B (Fig. 6).

4 Meta Evaluation

Meta-evaluation of grading LLM-judges aims to quantify the alignment between LLM-generated scores and human expert annotations. Previous research has predominantly relied on Pearson's or Spearman's correlation coefficients as primary meta-metrics (Bavaresco et al., 2024; Niklaus et al., 2025), often without substantial justification. However, advancements in meta-evaluation have emerged from the machine translation domain, particularly through the WMT Metrics Shared Task (Freitag et al., 2024, 2023), where automatic evaluation frameworks have been systematically compared and refined. The task aims to identify optimal metrics for translation quality assessment by comparing system outputs against references. Recent findings demonstrate that state-of-the-art metrics are increasingly LLM-based. The task has revealed that Pearson's correlation coefficient exhibits vulnerability to outliers, while Spearman's ρ disregards the magnitude of ranking errors, applying uniform penalties. To address these limitations, WMT has adopted Soft Pairwise Accuracy (SPA) (Thompson et al., 2024), a metric that assigns partial credit for nearly correct rankings, thereby providing an evaluation framework that better reflects the alignment of metrics with human experts.
4.1 Soft Pairwise Accuracy (SPA)

SPA measures the degree of alignment in evaluation confidence between human experts and LLM-judges (or any other automatic metric). For example, if a human expert is confident that one system (e.g., a candidate LLM from GREEK BARBENCH) outperforms another, but the LLM-judge is uncertain, SPA penalizes the judge—even if the ranking happened to be correct. To do this, SPA approximates the confidence of each judge (human or LLM) on each pairwise comparison between systems using p-values of appropriate permutation tests (Fisher, 1935), as detailed below. We use the original implementation.^7 Formally, SPA between a metric m and human experts h is defined as:

SPA(m, h) = \binom{N}{2}^{-1} \sum_{i=0}^{N-1} \sum_{j=i+1}^{N-1} \left( 1 - \left| p^{h}_{ij} - p^{m}_{ij} \right| \right)

where N is the number of systems being evaluated, p^{h}_{ij} is the p-value for the hypothesis that system i is better than system j according to human scores, and p^{m}_{ij} is the corresponding p-value according to the metric under evaluation. The term \binom{N}{2}^{-1} normalizes the summation by the total number of system pairs under comparison.

SPA permutation tests: To estimate the confidence of an evaluator (either human or automatic) in a pairwise system comparison, SPA uses permutation tests to calculate the expected mean difference under the null hypothesis that the systems are of equal quality. Specifically, a number of mock systems (1,000 in our experiments, following the original paper) are constructed as follows: for each question in the benchmark, the mock system is assigned either the score of system i or system j at random. The p-value is then computed as the proportion of mock systems for which the differences are greater than or equal to the mean difference between systems i and j, as scored by the evaluator.

^7 https://github.com/google-research/mt-metrics-eval

Figure 1: Manual evaluation by legal expert annotators on the semi-private test set of the 2024 exams. Scores (scale 0-10): Top-5% Experts 8.87; Human Experts 7.78; Claude-3.7-Sonnet 7.55; OpenAI-o1 7.52; Gemini-2.0-Flash 7.25; GPT-4o 7.14; pass mark 6.0; Llama-3.1-70B 4.55.

5 Experiments

5.1 Models

Our experiments evaluate a diverse range of LLMs, comprising proprietary models (from OpenAI, Google, Anthropic) and open-weight models; Deepseek-R1 (DeepSeek-AI et al., 2025), Gemma-3 (Team et al., 2025), and Llama-Krikri-8B^8, a model specifically pretrained for the Greek language. We accessed proprietary models and the large open-weight Deepseek-R1 through Application Programming Interfaces (APIs) provided by OpenAI, Google, and AWS. The remaining open-weight models were deployed on a cluster of eight A100 GPUs using the vLLM framework (Kwon et al., 2023). Due to limited resources, we only evaluate a single run for each model. We used the default parameter configurations as specified by each model's provider.

Generation prompt: To generate responses from LLM candidates, we designed a system and user prompt for the questions of the benchmark. The system prompt instructs the LLM to answer with citations to Greek statutory articles. The user prompt is structured to first describe the overall task, including clear instructions on the expected output format.
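The SPA definition and its permutation test (§ 4.1) can be sketched as follows. This is a simplified reimplementation for illustration rather than the official mt-metrics-eval code; in particular, the one-sided handling of the mock-system differences is our reading of the description:

```python
import random

def pairwise_p(scores_i, scores_j, n_mock=1000, seed=0):
    """p-value that system i beats system j, via the mock-system permutation
    test: per question, a mock system takes the score of i or j at random;
    p is the share of mock-pair mean differences >= the observed difference."""
    rng = random.Random(seed)
    obs = sum(a - b for a, b in zip(scores_i, scores_j)) / len(scores_i)
    hits = 0
    for _ in range(n_mock):
        diffs = [(a - b) if rng.random() < 0.5 else (b - a)
                 for a, b in zip(scores_i, scores_j)]
        if sum(diffs) / len(diffs) >= obs:
            hits += 1
    return hits / n_mock

def spa(p_human, p_metric):
    """SPA(m, h): average over system pairs (i, j) of 1 - |p^h_ij - p^m_ij|.
    Both arguments map pairs (i, j) with i < j to p-values."""
    pairs = sorted(p_human)
    return sum(1 - abs(p_human[p] - p_metric[p]) for p in pairs) / len(pairs)
```

With identical per-pair p-values SPA is 1; a judge whose confidence diverges from the human's is penalized even when both rank the two systems the same way.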
Then it provides the numbered facts of the case, the question and the relevant legal context. The original prompts are available in Appendix B.

^8 https://huggingface.co/ilsp/Llama-Krikri-8B-Instruct

Model           Simple-J (SPA)  Span-J (SPA)  Cost
GPT-4.1-mini    0.723           0.856         $$
GPT-4.1         0.807           0.855         $$$
Gemini-2.0-F    0.751           0.794         $
L-Krikri-8B     0.747           0.751         -
Gemma-3-27B     0.819           0.749         -
Gemini-2.0-L-F  0.695           0.708         $
GPT-4.1-nano    0.542           0.372         $

Table 4: Comparison of LLM-judges on GBB-JME, using Simple-Judge and Span-Judge. Cost for input tokens per 1M is indicated as follows: $ (less than $0.3), $$ (less than $1), and $$$ (less than $3).

5.2 Manual Evaluation by Legal Experts

In this section we present the manual evaluations that we collected for GBB-JME (§ 5.3), our Judge Meta-Evaluation benchmark for assessing LLM-judges on GREEK BARBENCH. We obtain ground truth evaluations (Facts, Cited Articles, Analysis scores on a scale of 1 to 10) from two expert legal annotators—licensed Greek lawyers with law degrees and practical experience. The annotators were compensated for their time and expertise. They evaluated five LLMs on 87 questions drawn from three exam sessions (2024-A, 2023-A, and 2023-B), resulting in a total of 1,305 annotated samples. The models evaluated on all three exams were Claude-3.7-Sonnet, OpenAI-o1, GPT-4o, and Gemini-2.0-Flash. For the 2024 exam, we included the open-source Llama-3.1-70B; however, due to its poor performance and generation failures on several questions, we replaced it with Deepseek-R1 for the 2023-A and 2023-B exams. Annotations were managed with the open-source platform doccano.^9 The average Krippendorff's α (Krippendorff,
2011) between the two annotators on the three-dimensional scores was 0.74, and the SPA was 0.85, both indicating a substantial level of inter-annotator agreement (Artstein and Poesio, 2008). For the SPA calculation, we treated one annotator's scores as ground truth and compared the other annotator's scores against them. This differs from Section 5.3, where SPA measures the correlation between LLM-generated scores and the aggregated scores of human annotators.

The results for the 2024 exam are shown in Figure 1. All LLMs except Llama-3.1-70B pass the exam. However, all LLMs lag behind the average human expert performance and the 95th percentile ('top-5%'). The best-performing models are Claude-3.7-Sonnet (7.55) and OpenAI-o1 (7.52).

^9 https://doccano.prio.org

Figure 2: Comparison of closed and open-weight LLMs on GREEK BARBENCH with GPT-4.1-mini Span-Judge. Scores (scale 0-10): Top-5% Experts 8.87; Gemini-2.5-Flash 8.4; GPT-4.1 8.32; Human Experts 7.78; OpenAI-o1 7.77; Claude-3.7-Sonnet 7.71; GPT-4.1-mini 7.63; Gemini-2.0-Flash 7.49; GPT-4o 7.44; Deepseek-R1 6.89; Gemma-3-27B 6.33; L-Krikri-8B 6.24; Gemma-3-12B 6.01; pass mark 6.0; GPT-4.1-nano 5.51; Gemma-3-4B 4.41.

5.3 Judge Meta-Evaluation (GBB-JME)

We evaluate seven LLMs as judges on our benchmark for meta-evaluation, GBB-JME, using both the Simple-Judge prompt (§ 3.1) and the Span-Judge prompt with span-based rubrics (§ 3.2). The results are presented in Table 4. Generally, leading models (GPT-4.1, Gemini-2.0-Flash and GPT-4.1-mini) showed significant improvement when utilizing the Span-Judge prompt, while weaker models (Llama-Krikri-8B, Gemma-3-27B, Gemini-2.0-Lite-Flash, GPT-4.1-nano) struggled with its complexity. GPT-4.1-mini and GPT-4.1 achieved the best performance at 0.856 SPA and 0.855 SPA, respectively. The competitive performance of the smaller model aligns with observations reported elsewhere (Niklaus et al., 2025).
Due to its strong performance and lower cost, we adopted GPT-4.1-mini as the judge for all subsequent evaluations. The total cost for all evaluations using GPT-4.1-mini was under $60. The open-weight Gemma-3-27B model paired with the Simple-Judge prompt serves as a cost-effective alternative, but we also encourage researchers to evaluate new LLMs as judges on the publicly available GBB-JME benchmark.

Model (f/c/a/avg)  Civil             Criminal          Commercial        Public            Lawyers           Overall
Top-5%             -/-/-/9.00        -/-/-/10.00       -/-/-/10.00       -/-/-/9.20        -/-/-/9.18        -/-/-/8.87
Experts            -/-/-/6.80        -/-/-/8.29        -/-/-/8.70        -/-/-/7.70        -/-/-/7.39        -/-/-/7.78
Gemini-2.5-F       8.8/8.4/8.4/8.53  8.4/7.9/8.5/8.28  8.6/8.0/8.3/8.27  8.7/8.0/8.1/8.28  8.9/8.5/8.5/8.62  8.7/8.2/8.4/8.40
GPT-4.1            8.8/8.1/8.4/8.44  8.5/8.2/8.2/8.28  8.4/8.1/8.3/8.27  8.6/7.7/8.1/8.14  8.8/8.2/8.4/8.48  8.6/8.0/8.3/8.32
Claude-3-7         8.5/7.2/7.4/7.72  8.2/6.9/7.0/7.37  7.6/6.9/7.4/7.31  8.6/7.2/7.6/7.79  8.5/8.2/8.2/8.29  8.3/7.3/7.5/7.71
GPT-4.1-mini       8.3/7.1/7.3/7.57  7.7/6.4/6.9/7.01  8.4/7.4/7.4/7.76  8.4/7.3/7.6/7.75  8.5/7.6/7.9/7.98  8.3/7.2/7.4/7.63
Gemma-3-27B        7.8/5.5/5.9/6.39  6.7/4.9/4.9/5.51  6.8/5.2/5.6/5.88  8.1/5.9/6.1/6.68  7.8/6.6/6.6/7.01  7.5/5.7/5.8/6.33
L-Krikri-8B        7.0/5.3/5.6/5.95  7.1/5.3/5.1/5.84  6.7/5.3/5.4/5.79  7.9/6.3/6.2/6.78  7.4/6.4/6.4/6.74  7.2/5.7/5.8/6.24

Table 5: Fine-grained comparison of proprietary and small open-weight LLMs on different legal areas ('Civil', 'Criminal', 'Commercial', 'Public', 'Lawyers') and for different scoring dimensions: 'Facts' (f), 'Cited Articles' (c), 'Analysis' (a). 'Experts'/'Top-5%' is the average/95th percentile score of candidate lawyers. Best LLM scores are in bold, failed scores are highlighted in red, and scores outperforming experts are highlighted in green.

5.4 Results on GREEK BARBENCH

We conduct an extensive automatic evaluation of 13 LLMs on GREEK BARBENCH (Figure 2). We use GPT-4.1-mini as the LLM-Judge, employing the 'Span LLM-Judge' prompt (§ 3.2). The evaluation includes proprietary models such as GPT-4o, the GPT-4.1 family (GPT-4.1-mini, GPT-4.1-nano), Gemini-2.0-flash, and Claude-3.7-Sonnet (with reasoning disabled), along with the reasoning models OpenAI-o1 and Gemini-2.5-Flash. The open-weight models include the Gemma-3 family (Gemma-3-27B, Gemma-3-12B, and Gemma-3-4B), the specialized Greek model Llama-Krikri-8B-Instruct (Krikri-8B), and the reasoning model DeepSeek-R1.

The experimental results (Figure 2) reveal that Gemini-2.5-Flash (8.4) and GPT-4.1 (8.32) demonstrate the strongest performance on GREEK BARBENCH. They surpass the average legal expert score (7.78), though they still fall short of the 95th percentile (top-5%) of experts (8.87). OpenAI-o1 (7.77) and Claude-3.7-Sonnet (7.71) perform comparably to the average human expert (7.78). The fact that Gemini-2.5-Flash and OpenAI-o1, two reasoning models, are among the top performers, shows that leveraging inference-time reasoning is a key factor for this benchmark.
The smallest models, GPT-4.1-nano (5.51) and Gemma-3-4B (4.41), are the only models that fail the exams (passing score: 6.00). The 8B Krikri model surpasses Gemma-12B and achieves performance comparable to the significantly larger Gemma-27B, highlighting the benefit of language-specific pretraining.

Table 5 presents a fine-grained comparison for four of the best proprietary LLMs (Gemini-2.5-Flash, GPT-4.1, Claude-3.7-Sonnet, GPT-4.1-mini) and two small open-weight LLMs (Gemma-3-27B, L-Krikri-8B), reporting their scores across five different legal areas (Civil, Criminal, Commercial, Public, Lawyers) and three scoring dimensions (Facts, Cited Articles, Analysis).

The fine-grained comparison shows that LLMs exhibit consistent performance across all legal areas and scoring dimensions. Notably, in the areas of 'Civil Law', 'Public Law', and 'Lawyers' Code', four of the LLMs achieve scores surpassing the average human expert performance, highlighted in green in Table 5. In contrast, the smaller open models, Gemma-3-27B and L-Krikri-8B (indicated by red), struggle in certain areas, failing to meet the passing grade threshold of 6.00. The second-best model, GPT-4.1, matches the top performer, Gemini-2.5-Flash, in 'Criminal' and 'Commercial Law', but Gemini-2.5-Flash achieves slightly higher scores in the remaining three areas. Interestingly, even though 'Civil Law' seems to present the greatest challenge for human experts (average score of 6.80), this is not the case for LLMs.

Analyzing performance by dimension provides valuable insights into
model capabilities. A key finding is that all models struggle most with the 'Cited Articles' dimension. The higher overall scores of Gemini-2.5-Flash and GPT-4.1 are largely attributable to their stronger performance in the Cited Articles and Analysis dimensions, when compared to Claude-3.7-Sonnet and GPT-4.1-mini. Furthermore, Gemma-3-27B's stronger performance compared to L-Krikri-8B is primarily attributable to its higher scores in the Facts dimension, while they get identical scores in the other dimensions.

6 Related Work

Legal domain: In the legal domain, LexGLUE (Chalkidis et al., 2022) and LEXTREME (Niklaus et al., 2023) are established benchmarks for legal classification tasks. LegalBench (Guha et al., 2023) is the standard for evaluating LLMs on legal reasoning via multiple-choice questions. More closely related to our work, task 5.4 of LexEval (Li et al., 2024) uses a similar legal examination for Chinese instead of Greek, but, unlike our approach, LexEval does not provide citations or use LLM-as-a-judge, instead evaluating with the less reliable, overlap-based ROUGE metric (Cohan and Goharian, 2016). LLeQA (Louis et al., 2024) collects pairs of everyday legal questions and answers, including citations to French statutory law articles, but they evaluate using the METEOR metric without measuring its correlation with human experts. CaseGen (Li et al., 2025), on the other hand, assesses document drafting and legal judgment generation in Chinese using the LLM-as-a-judge approach. While they do measure agreement between human and LLM evaluations, they do not compare different prompts or models. Concurrent work, OAB-Bench (Pires et al., 2025), uses data from the Brazilian Bar Examination and also provides the official guidelines as rubrics for the LLM-judges. They do evaluate different LLM-judges, but they only do it for three samples and they do not provide citations to statutory articles.
Notably, the complexity of their rubrics necessitates the use of the expensive OpenAI-o1 model for evaluation, thus significantly increasing the overall cost, amounting to approximately $50 for each LLM evaluated.

LLM-as-a-judge: LLM-as-a-judge was introduced by Zheng et al. (2023), who meta-evaluated its performance against human preferences for multi-turn chat assistant dialogues. A comprehensive overview of LLM-as-a-judge and meta-evaluation resources can be found in the survey by Gu et al. (2024). Taking this concept further, JudgeBench (Bavaresco et al., 2024) introduced a general-purpose benchmark specifically for the meta-evaluation of LLM-judges. In line with our approach, other studies similarly develop separate benchmarks to meta-evaluate judges on specific tasks (Starace et al., 2025; Niklaus et al., 2025).

Evaluation Rubrics: Legal research has long focused on creating rubrics for consistent (human) evaluation of legal writing (Clark and DeSanctis, 2013). The Brazilian Bar exams have made their rubrics for human evaluation available, so the aforementioned OAB-Bench (Pires et al., 2025) provides them to their LLM-judges. Their rubrics consist of a manually annotated ground truth answer with comments and a table with score distributions for each element of the answer. A proprietary benchmark, BigLawBench10, describes a scoring system that uses two dimensions: the 'source' and 'answer' scores, which are analogous to our Cited Articles and Analysis. They rely on
detailed instructions per question that specify explicitly the attributes that would contribute positively and negatively to the final score of candidate answers. Constructing from scratch either of these approaches is prohibitively expensive, in contrast to our simple, span-based rubrics that only require minimal annotation effort.

Greek NLP: Important Natural Language Processing resources for the Greek language include classification models (Koutsikakis et al., 2020; Saketos et al., 2024), alongside more recent LLMs pretrained on Greek like Meltemi11 (Voukoutis et al., 2024) and Llama-Krikri12, which we tested in our experiments (§ 5.4). Existing Greek legal datasets cover only classification and summarization tasks (Angelidis et al., 2018; Papaloukas et al., 2021; Koniaris et al., 2023). Although Greek LLM benchmarks exist for other domains, such as finance (Peng et al., 2025) and medicine (Voukoutis et al., 2024), the legal domain currently lacks one.

7 Conclusions

In this work, we introduced GREEKBARBENCH, a benchmark evaluating LLMs on legal questions requiring citations to statutory articles and case facts. We use a comprehensive scoring system and an LLM-judge for automatic evaluation. To ensure judge alignment with human experts, we developed an accompanying meta-evaluation benchmark (GBB-JME) using Soft-Pairwise Accuracy as the meta-metric. The results show that our span-based rubrics specifically designed for this benchmark improve the LLM-judges. The extensive evaluation of 13 LLMs on GREEKBARBENCH revealed that Gemini-2.5-Flash and GPT-4.1 achieved the best performance, surpassing the typical human expert, but also highlighted areas for future improvement.

Limitations

Our benchmark, GREEKBARBENCH, assumes the availability of the relevant legal code chapters for

10https://www.harvey.ai/blog/introducing-biglaw-bench
11Meltemi was excluded from our experiments because of its relatively small context length of 8k tokens.
12https://huggingface.co/ilsp/Llama-Krikri-8B-Instruct

the Relevant Legal Context component (§ 2.4). We did not evaluate the performance of retrieval models on this task, which is a critical step in real-world legal applications and could pose a significant challenge not addressed by our current setup.

A notable limitation is the cost associated with evaluating models using our framework, due to the primary LLM-judge being a proprietary model (GPT-4.1-mini). To mitigate this cost, we suggest utilizing Simple-Judge with the open-weight model Gemma-3-27B. While no currently available open-weight model achieves meta-evaluation performance (SPA scores on GBB-JME) on par with GPT-4.1-mini, our public release of the benchmark and meta-evaluation dataset will allow future research to test and use more accurate and cost-effective LLM-judges.

Finally, the reported legal expert performance figures (average and 95th percentile) in our comparisons (§ 1, § 2) should be interpreted as illustrative baselines rather than rigorous head-to-head comparisons under identical evaluation conditions. This is due to inherent limitations in the available human data: detailed participant statistics are not available across all exam papers, and critically, the human scores were determined by the official Greek Bar Examination grading committee, not by our developed LLM-judge framework. Nevertheless, we believe these figures provide valuable intuition regarding the current performance gap between state-of-the-art LLMs and candidate lawyers.

Ethical Considerations

The development and application of legal
NLP benchmarks carry significant ethical implications and potential societal impact, particularly concerning fairness, access to justice, and responsible automation (Tsarapatsanis and Aletras, 2021). Therefore, careful consideration of their design and potential uses is essential.

Our research contributes to the development of tools that could potentially assist various types of users, including legal professionals (such as judges and lawyers), students, and individuals seeking to understand legal concepts. It is crucial to emphasize that performance on this benchmark, or any similar research benchmark, should never be considered sufficient justification for deploying automated systems that substitute human experts. We strongly caution against the uncritical reliance on models evaluated solely on benchmark performance for automating legal tasks, making legal decisions, or providing legal advice.

Despite our efforts to make GreekBarBench realistic, as a research benchmark, it overlooks two critical aspects for the safe and reliable deployment of legal AI applications in practice:

• Data Realism: Real-world legal problems are far more complex and nuanced than the structured, often simplified scenarios found in exam questions (Medvedeva and Mcbride, 2023). They often demand significant legal interpretation, ethical judgment and persuasion, particularly when the law does not provide an explicit answer for a given situation.

• Safety: Real-world applications must ensure that the AI system handles adversarial attacks effectively. Issues like guiding the decisions of the LLMs with malicious prompting (e.g., jailbreaking), and providing confident, incorrect information when asked legally unanswerable queries, are unacceptable (see discussions on AI safety principles13).

Furthermore, the primary ethical purpose of this work is not to provide a system ready for deployment, but to advance the state of legal NLP evaluation itself.
By developing a benchmark that requires free-text generation, incorporates a multi-dimensional scoring system, and uses LLM-judges with explicit evaluation criteria, we aim to encourage the development of more transparent and explainable legal AI models. These features provide greater insight into how models arrive at their answers, moving beyond simple classification or multiple-choice and offering components of explainability which are crucial for gaining trust in AI applications (Medvedeva and Mcbride, 2023).

As already mentioned (§ 2.1), the authors of the solutions of the exam papers have given approval for the public reproduction of this work, with respect to the original and strictly for academic research use. Our ground truth answers are based on the year that each exam was published. This means that if the relevant laws changed in the meantime, the solutions are no longer valid. All cases in the Greek Bar exams are fictional, created solely for educational purposes, and bear no relation to real individuals or actual legal cases.

13https://www.anthropic.com/news/core-views-on-ai-safety

Acknowledgments

We are grateful to our legal expert annotators, Nasia Makridou and Irene Vlachou, for their diligent work, expertise, and insightful discussions, which were invaluable to this project. This work was partially supported by project MIS 5154714 of the National Recovery and Resilience Plan Greece 2.0 funded by the European Union under the NextGenerationEU Program. AWS resources were
provided by the National Infrastructures for Research and Technology (GRNET), with support from the EU Recovery and Resilience Facility.

References

Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Predicting judicial decisions of the european court of human rights: a natural language processing perspective. PeerJ Comput. Sci., 2:e93.

Iosif Angelidis, Ilias Chalkidis, and Manolis Koubarakis. 2018. Named entity recognition, linking and generation for greek legislation. In Legal Knowledge and Information Systems - JURIX 2018: The Thirty-first Annual Conference, Groningen, The Netherlands, 12-14 December 2018, volume 313 of Frontiers in Artificial Intelligence and Applications, pages 1–10. IOS Press.

Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596.

Anna Bavaresco, Raffaella Bernardi, Leonardo Bertolazzi, Desmond Elliott, Raquel Fernández, Albert Gatt, Esam Ghaleb, Mario Giulianelli, Michael Hanna, Alexander Koller, André F. T. Martins, Philipp Mondorf, Vera Neplenbroek, Sandro Pezzelle, Barbara Plank, David Schlangen, Alessandro Suglia, Aditya K. Surikuchi, Ece Takmaz, and Alberto Testoni. 2024. Llms instead of human judges? a large scale empirical study across 20 nlp evaluation tasks. CoRR, abs/2406.18403.

Rohan Bhambhoria, Samuel Dahan, Jonathan Li, and Xiaodan Zhu. 2024. Evaluating ai for law: Bridging the gap with open-source solutions. Preprint, arXiv:2404.12349.

Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Katz, and Nikolaos Aletras. 2022. LexGLUE: A benchmark dataset for legal language understanding in English. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4310–4330, Dublin, Ireland. Association for Computational Linguistics.

Jessica Clark and Christy DeSanctis. 2013.
Toward a unified grading vocabulary: Using rubrics in legal writing courses. J. Legal Educ., 63.

Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 806–813, Portorož, Slovenia. European Language Resources Association (ELRA).

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, and Jin Chen et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, arXiv:2501.12948.

Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Alan Huang, Songyang Zhang, Kai Chen, Zhixin Yin, Zongwen Shen, Jidong Ge, and Vincent Ng. 2024. LawBench: Benchmarking legal knowledge of large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7933–7962, Miami, Florida, USA.
Association for Computational Linguistics.

Ronald Aylmer Fisher. 1935. The Design of Experiments. Oliver & Boyd.

Markus Freitag, Nitika Mathur, Daniel Deutsch, Chi-Kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Frederic Blain, Tom Kocmi, Jiayi Wang, David Ifeoluwa Adelani, Marianna Buchicchio, Chrysoula Zerva, and Alon Lavie. 2024. Are LLMs breaking MT metrics? results of the WMT24 metrics shared task. In Proceedings of the Ninth Conference on Machine Translation, pages 47–81, Miami, Florida, USA. Association for Computational Linguistics.

Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frederic Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George Foster. 2023. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, pages 578–628, Singapore. Association for Computational Linguistics.

Markus Frohmann, Igor Sterner, Ivan Vulić, Benjamin Minixhofer, and Markus Schedl. 2024. Segment any text: A universal approach for robust, efficient and adaptable sentence segmentation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11908–11941, Miami, Florida, USA. Association for Computational Linguistics.

Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, Yuanzhuo Wang, and Jian Guo. 2024. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594.

Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, Aditya Narayana, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John Nay, Jonathan H.
Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li. 2023. Legalbench: a collaboratively built benchmark for measuring legal reasoning in large language models. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.

Abhinav Joshi, Shounak Paul, Akshat Sharma, Pawan Goyal, Saptarshi Ghosh, and Ashutosh Modi. 2024. IL-TUR: Benchmark for Indian legal text understanding and reasoning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11460–11499, Bangkok, Thailand. Association for Computational Linguistics.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics.

Marios Koniaris, Dimitris Galanis, Eugenia Giannini, and Panayiotis Tsanakas. 2023. Evaluation of automatic legal text summarization techniques for greek case law. Information, 14(4).

John Koutsikakis, Ilias Chalkidis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2020. Greek-bert: The greeks visiting sesame street. In 11th Hellenic Conference on Artificial Intelligence, SETN
2020, page 110–117, New York, NY, USA. Association for Computing Machinery.

Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability. Departmental Papers (ASC) 43, Annenberg School for Communication, University of Pennsylvania.

Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, and Manaal Faruqui. 2025. Fact, fetch, and reason: A unified evaluation of retrieval-augmented generation. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4745–4759, Albuquerque, New Mexico. Association for Computational Linguistics.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Haitao Li, You Chen, Qingyao Ai, Yueyue Wu, Ruizhe Zhang, and Yiqun Liu. 2024. Lexeval: A comprehensive chinese legal benchmark for evaluating large language models. In Advances in Neural Information Processing Systems, volume 37, pages 25061–25094. Curran Associates, Inc.

Haitao Li, Jiaying Ye, Yiran Hu, Jia Chen, Qingyao Ai, Yueyue Wu, Junjie Chen, Yifan Chen, Cheng Luo, Quan Zhou, and Yiqun Liu. 2025. Casegen: A benchmark for multi-stage legal case documents generation. Preprint, arXiv:2502.17943.

Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157–173.

Antoine Louis, Gijs van Dijck, and Gerasimos Spanakis. 2024. Interpretable long-form legal question answering with retrieval-augmented large language models.
Proceedings of the AAAI Conference on Artificial Intelligence, 38(20):22266–22275.

Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, and Daniel E. Ho. 2025. Hallucination-free? assessing the reliability of leading ai legal research tools. Journal of Empirical Legal Studies, 22(2):216–242.

Lauren Martin, Nick Whitehouse, Stephanie Yiu, Lizzie Catterson, and Rivindu Perera. 2024. Better call gpt, comparing large language models against lawyers. Preprint, arXiv:2401.16212.

Masha Medvedeva and Pauline Mcbride. 2023. Legal judgment prediction: If you are going to do it, do it right. In Proceedings of the Natural Legal Language Processing Workshop 2023, pages 73–84, Singapore. Association for Computational Linguistics.

Joel Niklaus, Veton Matoshi, Pooja Rani, Andrea Galassi, Matthias Stürmer, and Ilias Chalkidis. 2023. LEXTREME: A multi-lingual and multi-task benchmark for the legal domain. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3016–3054, Singapore. Association for Computational Linguistics.

Joel Niklaus, Jakob Merane, Luka Nenadic, Sina Ahmadi, Yingqiang Gao, Cyrill A. H. Chevalley, Claude Humbel, Christophe Gösken, Lorenzo Tanzi, Thomas Lüthi, Stefan Palombo, Spencer Poff, Boling Yang, Nan Wu, Matthew Guillod, Robin Mamié, Daniel Brunner, Julio Pereyra, and Niko Grupen. 2025. Swiltra-bench: The swiss legal translation benchmark. Preprint, arXiv:2503.01372.

Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina Pantazi, and Manolis Koubarakis. 2021. Multi-granular legal topic classification on Greek legislation. In Proceedings of the Natural Legal Language Processing Workshop 2021, pages 63–75, Punta Cana,
Dominican Republic. Association for Computational Linguistics.

Xueqing Peng, Triantafillos Papadopoulos, Efstathia Soufleri, Polydoros Giannouris, Ruoyu Xiang, Yan Wang, Lingfei Qian, Jimin Huang, Qianqian Xie, and Sophia Ananiadou. 2025. Plutus: Benchmarking large language models in low-resource greek finance. Preprint, arXiv:2502.18772.

Ramon Pires, Roseval Malaquias Junior, and Rodrigo Nogueira. 2025. Automatic legal writing evaluation of llms. In Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL).

Vasileios Saketos, Despina-Athanasia Pantazi, and Manolis Koubarakis. 2024. The large language model greeklegalroberta. In Proceedings of the 13th Hellenic Conference on Artificial Intelligence, SETN '24, New York, NY, USA. Association for Computing Machinery.

Giulio Starace, Oliver Jaffe, Dane Sherburn, James Aung, Jun Shern Chan, Leon Maksin, Rachel Dias, Evan Mays, Benjamin Kinsella, Wyatt Thompson, Johannes Heidecke, Amelia Glaese, and Tejal Patwardhan. 2025. Paperbench: Evaluating ai's ability to replicate ai research. Preprint, arXiv:2504.01848.

Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, Gaël Liu, Francesco Visin, Kathleen Kenealy, Lucas Beyer, Xiaohai Zhai, Anton Tsitsulin, Robert Busa-Fekete, Alex Feng, Noveen Sachdeva, Benjamin Coleman, Yi Gao, Basil Mustafa, Iain Barr, Emilio Parisotto, David Tian, Matan Eyal, Colin Cherry, Jan-Thorsten Peter, Danila Sinopalnikov, Surya Bhupatiraju, Rishabh Agarwal, Mehran Kazemi, Dan Malkin, Ravin Kumar, David Vilar, Idan Brusilovsky, Jiaming Luo, Andreas Steiner, Abe Friesen, Abhanshu Sharma, Abheesht Sharma, Adi Mayrav Gilady, Adrian Goedeckemeyer, and Alaa Saade et al. 2025. Gemma 3 technical report.
Preprint, arXiv:2503.19786.

Brian Thompson, Nitika Mathur, Daniel Deutsch, and Huda Khayrallah. 2024. Improving statistical significance in human evaluation of automatic metrics via soft pairwise accuracy. In Proceedings of the Ninth Conference on Machine Translation, pages 1222–1234, Miami, Florida, USA. Association for Computational Linguistics.

Dimitrios Tsarapatsanis and Nikolaos Aletras. 2021. On the ethical limits of natural language processing on legal text. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3590–3599, Online. Association for Computational Linguistics.

Leon Voukoutis, Dimitris Roussis, Georgios Paraskevopoulos, Sokratis Sofianopoulos, Prokopis Prokopidis, Vassilis Papavasileiou, Athanasios Katsamanis, Stelios Piperidis, and Vassilis Katsouros. 2024. Meltemi: The first open large language model for greek. Preprint, arXiv:2407.20743.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.

A Annotator Instructions

In this section we present the instructions given to the two legal expert annotators. The annotators possessed prior experience with the evaluation task, having previously taken the exams themselves. This existing expertise allowed for concise instructions. For the general evaluation of LLM-generated answers for the manual evaluation (§ 5.2), the instruction (translated to English) was simply to: "Evaluate the candidate answers on each scoring dimension
(Facts, Cited Articles, and Analysis)." For the creation of text spans for Span-Judge (§ 3.2), annotators were instructed to: "Highlight the text-spans that correspond to each scoring dimension (Facts, Cited Articles, and Analysis). Highlight the most important subsets of these spans with the label important."

B Complete Prompts

In this section we present the complete system (Fig. 3) and user (Fig. 4) prompts given to candidate LLMs for generation of answers, as well as the complete system prompts given to LLM-judges for the Simple- (Fig. 5) and Span-Judge (Fig. 6).

You are a legal assistant who answers questions in Greek, focusing on the legal system and the laws of Greece. You analyze your reasoning and respond with well-supported answers and correct references. You only respond in txt format and with only one short paragraph without headings.

Figure 3: System prompt for generation given to candidate LLMs.

You are given the numbered facts of a legal case, the current relevant legislation of Greece, and a question regarding this case. After carefully reading the entire text, you are to provide a comprehensive answer to the question, analyzing your reasoning. You should answer with references to the relevant legislation using the appropriate abbreviations for the laws (for example, you can say: "according to article X CC" to refer to article "X" of the Civil Code), where necessary. Additionally, you must provide references to the facts of the case (for example, you can say: "according to statement Y of the case data"), where necessary.

Figure 4: User prompt for generation given to candidate LLMs.

You are a legal exam evaluator. You will be given the following: 1. The facts of a case, 2. The relevant legislation, 3. A question, 4. An ideal reference answer, 5. An answer to be evaluated. You must evaluate the answer with three scores and an explanation for each. Each score consists of an integer from 1 to 10, with 10 being excellent.
The reference ideal answer is considered excellent (10 in all categories). The Facts Score concerns the facts of the case. If the ideal reference answer mentions certain specific facts from the case, but the answer to be evaluated does not mention them, points should be deducted. Similarly, if the answer to be evaluated mentions facts that are not useful for the answer, points should also be deducted. The Legislation Score concerns references in the answer to the relevant articles of the laws. It is essential to refer to specific articles of laws. If such references are missing or if incorrect articles are cited, points should be deducted from the Legislation Score. Also, points should be deducted if the interpretation of the law is incorrect. The Analysis Score concerns a more general evaluation as to whether the answer to be evaluated has covered the original question, with correct and valid legal reasoning. Points already given or deducted for the above categories should not be counted here. At this point, the final conclusion of the answer is also evaluated. If the answer to be evaluated reaches the wrong
conclusion or omits some critical argument, points should be deducted. Use plain text, without markdown. Your answer should follow the template shown below, where X, Y, Z are integers (1-10): Explanation of the Facts Score: <your explanation for the score ...>, Facts Score: X Explanation of the Legislation Score: <...>, Legislation Score: Y Explanation of the Analysis Score: <...>, Analysis Score: Z

Figure 5: System prompt given to Simple-Judge LLM-judges.

You are a legal exam evaluator. You will be given the following: 1. The facts of a case, 2. The relevant legislation, 3. A question, 4. An ideal reference answer, 5. An answer to be evaluated, 6. The evaluation spans (json file)

The evaluation spans are verbatim spans from the text of the ideal reference answer with tags referring to each of the three scores (facts, rules, analysis). That is, for evaluating each score, emphasis should be placed on whether the information from the corresponding excerpt of the ideal reference answer is present in the answer to be evaluated, thus allowing for an appropriate score. For example, for the Facts score, the spans should be present in the answer to be evaluated. If no spans are present, it means that no very important facts (or laws or analysis) are absolutely necessary to be mentioned. However, points can still be deducted if the answer to be evaluated adds facts (or laws or analysis) that are incorrect. There are also important spans, which indicate which parts of the answer are crucial for the evaluation. You must evaluate the answer with three scores and an explanation for each. Each score consists of an integer from 1 to 10, with 10 being excellent. The reference answer is considered excellent (10 in all). The Facts score concerns the facts of the case. If the ideal reference answer mentions specific facts from the case, but the answer to be evaluated does not mention them, points should be deducted.
Similarly, if the answer to be evaluated mentions facts that are not useful for the answer, points should also be deducted. The Legislation score concerns the references in the answer to the relevant articles of laws. It is essential to refer to specific articles of laws. If such references are missing or if incorrect articles are cited, points should be deducted from the Legislation score. Also, points should be deducted if the interpretation of the law is incorrect. The Analysis score concerns a more general evaluation as to whether the answer to be evaluated has covered the original question, with correct and valid legal reasoning. Points already given or deducted for the above categories are not scored here. At this point, the final conclusion of the answer is also evaluated. If the answer to be evaluated reaches the wrong conclusion or omits some critical argument, points should be deducted. Use plain text, without markdown. Your answer should follow the template shown below, where X, Y, Z are integers (1-10): Explanation of the Facts Score: <your explanation for the score ...>, Facts Score: X Explanation of the Legislation Score: <...>, Legislation
Score: Y Explanation of the Analysis Score: <...>, Analysis Score: Z

Figure 6: System prompt given to Span-Judge LLM-judges.

C Complete Dataset Example

In this section we present the complete version of the example that we presented in Table 1. We show the complete Facts and Question (Fig. 7), the Relevant Legal Context (Figures 8 and 9), the complete Ground Truth Answer (Fig. 10), the candidate answer by Gemini-2.5-Flash (Fig. 11) and evaluations of the candidate answer by the legal experts and the LLM-judge (Fig. 12).

Facts:
[1] Μετά από προεξέταση και προσυνεννόηση με τον δερματολόγο του, κ. Ιωάννη (Ι), ο ασθενής Αντώνης (Α), 20χρονος φοιτητής, μετέβη στις 30-10-2011 στο ιατρείο του Ι προς αφαίρεση δερματικών θηλωμάτων στην περιοχή του προσώπου έναντι συμφωνημένης αμοιβής. Following a preliminary examination and prior consultation with his dermatologist, Mr. Ioannis (I), the patient Antonis (A), a 20-year-old student, went on 30-10-2011 to I's clinic to remove skin papillomas on the face for an agreed fee.
[2] Πριν από την αφαίρεση ο Ι συνέστησε, όπως συνηθίζεται σε παρόμοιες περιπτώσεις, την πλύση του σημείου με διάλυμα οξικού οξέως προς εντοπισμό αόρατων θηλωμάτων. Before removal, I recommended, as is customary in similar cases, washing the area with acetic acid solution to detect invisible papillomas.
[3] Η κυρία Πηνελόπη (Π), επί σειρά ετών βοηθός του Ι, πήρε από το ράφι ένα μπουκάλι με το υγρό και άρχισε να το επαλείφει σε επαρκή ποσότητα στο δέρμα του Α. Mrs. Pinelopi (P), I's longtime assistant, took a bottle with the liquid from the shelf and began applying it in sufficient quantity on A's skin.
[4] Αμέσως μετά την πρώτη επάλειψη ο Α διαμαρτυρήθηκε για πόνο και η Π σταμάτησε αμέσως τη θεραπεία. Immediately after the first application, A complained of pain and P immediately stopped the treatment.
[5] Ο Α είχε υποστεί τοπικά εγκαύματα τρίτου βαθμού. A suffered third-degree local burns.
[6] Όπως αποδείχθηκε εκ των υστέρων, το μπουκάλι περιείχε αυτούσιο οξικό οξύ και όχι διάλυμα, όπως προδιαγράφεται από την θεραπευτική διαδικασία. As later proven, the bottle contained pure acetic acid and not a solution, as prescribed by the treatment process.
[7] Μετά από θεραπευτική αγωγή αρκετών εβδομάδων από ειδικό εγκαυματολόγο ιατρό θεραπεύτηκαν τα εγκαύματα του Α και στη συνέχεια, χρειάσθηκε να γίνει και πλαστική εγχείρηση στο πρόσωπο, η οποία ήταν επιτυχής και οδήγησε στην πλήρη αποκατάστασή του. After several weeks of therapeutic treatment by a specialist burn physician, A's burns healed and subsequently a plastic surgery on the face was necessary, which was successful and led to his full recovery.
[8] Ο Α είχε συνολικές ιατρικές δαπάνες 2.500 ευρώ για την αποκατάστασή του και θεωρεί ότι πρέπει να πάρει και 75.000 ευρώ ως χρηματική ικανοποίηση λόγω ηθικής βλάβης. A had total medical expenses of 2,500 euros for his recovery and considers he should also receive 75,000 euros as compensation for moral damage.

Question: Ποια πρόσωπα και με βάση ποιες διατάξεις ευθύνονται για τον τραυματισμό του Α; Which persons and based on which legal provisions are responsible for the injury of A?

Figure 7: Complete Facts and Question (original and below translated in
https://arxiv.org/abs/2505.17267v1
English), as given in to the candidate LLMs, for the example in Table 1. Relevant Legislation: Civil Code (AK) CHAPTER FOUR - LIABILITY FROM CONTRACTS IN GENERAL Articles 361 – 373 CHAPTER EIGHTEENTH - EMPLOYMENT CONTRACT Articles 648 – 680 CHAPTER THIRTY-NINTH – TORTS Articles 914 – 938 Code of Civil Procedure (ΚΠολΔ) CHAPTER C (III) - Jurisdiction by subject matter Articles 12 - 21 CHAPTER IA (XI) - Participation of third parties in the trial Articles 79 - 93 Greek Constitution BASIC PROVISIONS Articles 1 - 2 INDIVIDUAL AND SOCIAL RIGHTS Articles 4 – 25 Figure 8: The Chapters of the Relevant Legislation context given to candidate LLMs, for the example in Table 1. The content of the articles is not shown for brevity. CHAPTER THIRTY-NINTH – TORTS Article 914 Whoever unlawfully and culpably damages another person is obligated to compensate him. Article 915 A person is not liable for damages caused without awareness of their actions or while in a mental or intellectual disorder that decisively limited the functioning of their judgment and will. Whoever, at the time of causing the damage, brought themselves into such a state by consuming alcoholic beverages or other similar means, is liable for the damage, unless they entered that state without fault. Article 916 A person under ten years of age is not liable for the damage caused. Article 922 The master or the one who places another in a service (employment) is liable for damage caused unlawfully to a third party by the servant or the placed person during their service. Article 926 If damage results from a joint act of several persons or if several are jointly liable for the same damage, all are liable severally (jointly and severally). The same applies if several acted simultaneously or successively and it cannot be determined whose act caused the damage. 
Article 929
In case of harm to a person's body or health, compensation includes, besides medical expenses and damage already incurred, everything the injured party will lose in the future or spend additionally due to increased expenses. There is also an obligation to compensate a third party who legally had the right to demand services from the injured party and is deprived of them.

Article 932
In the case of a tort, regardless of compensation for property damage, the court may award monetary satisfaction at its discretion for moral harm. This especially applies to one who suffered an injury to their health, honor, or chastity, or was deprived of their freedom. In the event of a person's death, this monetary satisfaction may be awarded to the victim's family due to emotional distress.

Figure 9: Chapter Thirty-Ninth (‘TORTS’) from the Civil Code, which is part of the Relevant Legislation context given to candidate LLMs, for the example in Table 1. The gold cited articles are marked in bold and the articles cited by Gemini-2.5-Flash (Figure 11) are underlined.

Ground Truth Answer: Η αμελής παράλειψη του Ι να μεριμνήσει προκειμένου να μην υπάρχει το μπουκάλι με το επικίνδυνο υγρό στο ιατρείο του ή αυτό να φέρει ακριβή
και σαφώς διακριτή ένδειξη για το περιεχόμενο του ή έστω να επιστήσει την προσοχή της Π στο επικίνδυνο υγρό συνιστά αφενός, πλημμελή εκτέλεση υποχρεώσεων από τη σύμβαση και αφετέρου, αδικοπραξία κατά την 914 ΑΚ της οποίας το παράνομο στηρίζεται στην παράβαση της γενικής υποχρέωσης πρόνοιας, ασφάλειας και προστασίας που καθιερώνει η έννομη τάξη. Η αδικοπραξία του προστηθέντος κατά την 914 ΑΚ προϋποθέτει ανθρώπινη πράξη, υπαιτιότητα, επέλευση ζημίας, αιτιώδη σύνδεσμο μεταξύ πράξης και ζημίας και τον παράνομο χαρακτήρα της πράξης. Η παρανομία δεν περιορίζεται στην παράβαση ορισμένου κανόνα δικαίου, αλλά εκτείνεται και σε κάθε παράβαση της γενικής υποχρέωσης πρόνοιας, ασφάλειας και προστασίας που απορρέει ως ύψιστη αρχή από την έννομη τάξη μας. Όπως προκύπτει από το πραγματικό, η Π δεν έλεγξε το περιεχόμενο της φιάλης πριν το επαλείψει στο δέρμα του Α (παράνομη και υπαίτια πράξη). Συνεπώς, η Π ευθύνεται κατά τη διάταξη του άρθρου 914 ΑΚ. Σύμφωνα με τη διάταξη του άρθρου 922 ΑΚ, ο κύριος ή ο προστήσας κάποιον άλλον σε μία υπηρεσία ευθύνεται για τη ζημία που ο υπηρέτης ή ο προστηθείς προξένησε σε τρίτον παράνομα κατά την υπηρεσία του. Θεσπίζεται δηλαδή, αντικειμενική ευθύνη ενός προσώπου για άδικη πράξη άλλου υπό την προϋπόθεση ύπαρξης σχέσης πρόστησης με την ανάθεση από κάποιον σε τρίτο ορισμένης υπηρεσίας που αποβλέπει στην εξυπηρέτηση συμφερόντων του πρώτου και στοιχείου εξάρτησης στην σχέση πρόστησης υπό την έννοια της εξουσίας του προστήσαντος να παρέχει σχετικές οδηγίες και διαταγές στον προστηθέντα. Ως προς την αδικοπραξία του προστηθέντος, η παρανομία δεν περιορίζεται στην παράβαση ορισμένου κανόνα δικαίου, αλλά εκτείνεται και σε κάθε παράβαση της γενικής υποχρέωσης πρόνοιας, ασφάλειας και προστασίας που απορρέει ως ύψιστη αρχή από την έννομη τάξη μας. Εφόσον συντρέχουν οι παραπάνω προϋποθέσεις των 922 και 914 ΑΚ, ο προστήσας ευθύνεται σε αποζημίωση του ζημιωθέντος και αποκατάσταση της ηθικής βλάβης του. 
Εν προκειμένω η ζημία του Α προκλήθηκε εντός του ιατρείου του Ι από παράνομη και υπαίτια πράξη της βοηθού του Π, η οποία είναι προστηθείσα. Νομικό έρεισμα της ευθύνης του Ι είναι οι 922 και 914 ΑΚ των οποίων οι προϋποθέσεις, όπως προκύπτει από το ιστορικό, συντρέχουν στην υπό εξέταση περίπτωση. The negligent omission of I to ensure that the bottle with the hazardous liquid was not kept at his clinic, or that it bore a precise and clearly distinct indication of its contents, or at least to draw P's attention to the hazardous liquid, constitutes, on the one hand, poor performance of contractual obligations and, on the other, a tort under Article 914 of the Civil Code, whose illegality is based on violation of the general duty of care, safety, and protection established by the legal order. The tort of the agent under Article 914 requires a human act, fault, the occurrence of damage, a causal link between act and damage, and the unlawful character of the act. Illegality is not limited to breach of a specific rule of law but extends to any breach of the general duty of care, safety, and protection which arises as a supreme principle from our legal order. As it emerges from the facts, P did not check the contents of the bottle before applying it
to A’s skin (an unlawful and culpable act). Therefore, P is liable according to the provision of Article 914 AK. According to the provision of Article 922 AK, the principal or the person who supervises another in the performance of a service is liable for the damage that the servant or agent unlawfully caused to a third party during the course of their service. This establishes an objective liability of a person for the wrongful act of another, provided that there is a supervisory relationship with the delegation by someone to a third party of a specific service aimed at serving the interests of the first, and an element of dependence in the supervisory relationship, in the sense of the authority of the principal to give relevant instructions and orders to the agent. As regards the tort of the agent, the illegality is not limited to the violation of a specific legal rule but also extends to any breach of the general duty of care, safety, and protection deriving as a supreme principle from our legal order. Given that the above conditions of Articles 922 and 914 AK are met, the principal is liable to compensate the injured party and restore moral damages. In the present case, A’s damage occurred within I’s clinic due to an unlawful and culpable act of his assistant P, who is the agent. The legal basis of I’s liability is Articles 922 and 914 AK, the conditions of which, as it appears from the facts, are fulfilled in the case at hand.

Figure 10: Ground truth answer by the legal expert. Text spans are highlighted in colors (green for Facts, blue for Cited Articles, and orange for Analysis).

Gemini-2.5-Flash Answer: Ο δερματολόγος κ. Ιωάννης (Ι) και η βοηθός του κυρία Πηνελόπη (Π) ευθύνονται για τον τραυματισμό του Αντώνη (Α). Ο Ι ευθύνεται πρωτίστως λόγω της συμβατικής του σχέσης με τον Α (δεδομένο 1) για την πλημμελή παροχή των ιατρικών υπηρεσιών, η οποία εκδηλώθηκε μέσω της ενέργειας της προστηθείσας βοηθού του (δεδομένα 3, 6).
Παράλληλα, τόσο η Π όσο και ο Ι ευθύνονται εις ολόκληρον κατά τις διατάξεις περί αδικοπραξιών, καθώς η Π προκάλεσε παράνομα και υπαίτια σωματική βλάβη στον Α (δεδομένο 5) εφαρμόζοντας αυτούσιο οξικό οξύ (δεδομένο 6), σύμφωνα με το άρθρο 914 ΑΚ. Ο Ι ευθύνεται και ως προστήσας για την παράνομη πράξη της προστηθείσας του Π, η οποία τελέστηκε κατά την υπηρεσία της (δεδομένο 3), δυνάμει του άρθρου 922 ΑΚ. Καθώς η ζημία προήλθε από κοινή πράξη ή για την ίδια ζημία ευθύνονται παράλληλα περισσότεροι, ενέχονται όλοι εις ολόκληρον σύμφωνα με το άρθρο 926 ΑΚ. Η αποζημίωση περιλαμβάνει τα νοσήλια (δεδομένο 8) κατά το άρθρο 929 ΑΚ και χρηματική ικανοποίηση για την ηθική βλάβη (δεδομένο 8) κατά το άρθρο 932 ΑΚ, λόγω της προσβολής της υγείας του Α. Dermatologist Mr. Ioannis (I) and his assistant Mrs. Pinelopi (P) are liable for the injury of Antonis (A). I is primarily liable due to his contractual relationship with A (fact 1) for the defective provision of medical services, which manifested through the action of his
agent assistant (facts 3, 6). At the same time, both P and I are jointly liable under tort law, since P caused unlawful and culpable bodily harm to A (fact 5) by applying pure acetic acid (fact 6), according to Article 914 of the Civil Code (AK). I is also liable as principal for the unlawful act of his agent P, which was committed in the course of her service (fact 3), pursuant to Article 922 AK. As the damage resulted from a joint act or, for the same damage, several persons are liable simultaneously, all are liable jointly and severally according to Article 926 AK. Compensation includes medical expenses (fact 8) pursuant to Article 929 AK and monetary satisfaction for moral damages (fact 8) according to Article 932 AK, due to the harm to A’s health.

Figure 11: The answer of Gemini-2.5-Flash for the example in Table 1. Citations are highlighted in color (green for Facts and blue for Cited Articles).

Legal Expert Evaluation:
Total score: 10/10
Facts score: 10/10, Cited Articles score: 10/10, Analysis score: 10/10

Span-Judge (GPT-4.1-mini) Evaluation:
Total score: 8/10

Facts score: 7/10
Explanation: The response under evaluation mentions the key facts regarding the role of I and P in causing the damage, as well as the structure of the contract and the service. However, it does not clearly present the crucial point that P did not check the contents of the bottle before application, which is a central element of the case and of her responsibility. Additionally, there is no mention of the fact that the damage occurred within I's clinic or that there was negligence in supervising the hazardous liquid. There are no superfluous or incorrect elements. Overall, the absence of important specific details leads to a moderate grade.
Cited Articles score: 8/10
Explanation: The response under evaluation correctly identifies the relevant provisions of the Civil Code, namely Articles 914 and 922 regarding the liability of the agent and the principal, as well as Articles 926, 929, and 932 concerning joint and several liability and compensation. Although the latter are not mentioned in the reference text, they are accurate and enrich the answer. There are no errors in the interpretation of the law nor omissions of significant provisions. The inclusion of the relevant articles on compensation and moral damages strengthens the legal argument. Therefore, the grade is high.

Analysis score: 9/10
Explanation: The response basically addresses the question with correct legal reasoning and reaches the correct conclusion of the joint liability of I and P. However, it does not provide an extensive analysis of I’s negligence and deficient supervision (the factor considered crucial in the reference text). The argumentation remains general without thorough legal grounding according to the legal order and the general duties of care as presented in the reference text. Despite its completeness, the analysis is not entirely sufficient and can be improved in the presentation of the unlawfulness and the causal link. The final conclusion, however, is correct.

Figure 12: Evaluation results for Gemini’s answer by Legal Experts and the LLM-Judge (GPT-4.1-mini).
Zebra-Llama: Towards Extremely Efficient Hybrid Models

Mingyu Yang∗, Mehdi Rezagholizadeh∗, Guihong Li∗, Vikram Appia, Emad Barsoum
Advanced Micro Devices, Inc. (AMD)
{mingyu.yang,mehdi.rezagholizadeh,guihong.li}@amd.com

Abstract

With the growing demand for deploying large language models (LLMs) across diverse applications, improving their inference efficiency is crucial for sustainable and democratized access. However, retraining LLMs to meet new user-specific requirements is prohibitively expensive and environmentally unsustainable. In this work, we propose a practical and scalable alternative: composing efficient hybrid language models from existing pre-trained models. Our approach, Zebra-Llama, introduces a family of 1B, 3B, and 8B hybrid models by combining State Space Models (SSMs) and Multi-head Latent Attention (MLA) layers, using a refined initialization and post-training pipeline to efficiently transfer knowledge from pre-trained Transformers. Zebra-Llama achieves Transformer-level accuracy with near-SSM efficiency using only 7–11B training tokens (compared to trillions of tokens required for pre-training) and an 8B teacher. Moreover, Zebra-Llama dramatically reduces KV cache size—down to 3.9%, 2%, and 2.73% of the original for the 1B, 3B, and 8B variants, respectively—while preserving 100%, 100%, and >97% of average zero-shot performance on LM Harness tasks. Compared to models like MambaInLLaMA, X-EcoMLA, Minitron, and Llamba, Zebra-Llama consistently delivers competitive or superior accuracy while using significantly fewer tokens, smaller teachers, and vastly reduced KV cache memory. Notably, Zebra-Llama-8B surpasses Minitron-8B in few-shot accuracy by 7% while using 8× fewer training tokens, over 12× smaller KV cache, and a smaller teacher (8B vs. 15B). It also achieves 2.6×–3.8× higher throughput (tokens/s) than MambaInLlama up to a 32k context length. We will release code and model checkpoints upon acceptance.
1 Introduction

Figure 1: Comparing 8B-scale models on average LM Harness score vs. KV cache size. Zebra-Llama (green) matches or exceeds baselines with smaller KV cache and fewer training tokens. Circle and square sizes indicate training tokens (billions for post-training, trillions for pre-training).

∗Equal Contribution First Authors, with order determined randomly.

arXiv:2505.17272v1 [cs.LG] 22 May 2025

The exponential growth of deep learning applications has created an urgent demand for models that strike a balance between accuracy and computational efficiency—particularly in scenarios constrained by memory or limited hardware capabilities. Transformer-based models, despite their impressive performance across a range of tasks, are fundamentally limited by the quadratic complexity of their self-attention mechanisms and the substantial memory required to store key–value (KV) caches. These bottlenecks hinder their deployment in real-world applications, especially on edge devices or in latency-sensitive settings. At the same time, the rise of large language models (LLMs) has amplified the need for customization—that is, the ability to adapt pre-trained models to meet diverse user needs, hardware configurations, and application requirements. However, developing new LLMs from scratch for each target environment is prohibitively expensive and environmentally unsustainable. Traditional solutions such as model compression, neural architecture search (NAS), and pre-training new architectures offer potential pathways but suffer from significant limitations. Model compression often degrades quality, while NAS or pre-training new models have substantial computational costs. To overcome these challenges, a promising new paradigm has emerged: hybrid models that aim to reduce the computational cost of self-attention while maintaining generation quality. These
architectures typically integrate efficient state-space models [1] or linear attention [2] with full attention mechanisms, leveraging the strengths of both. However, many recent hybrid approaches, including Samba [3] and Hymba [4], require extensive pre-training from scratch, which is computationally expensive. Others, such as MambaInLLaMA [5], experience notable performance degradation when attention is replaced too aggressively, often due to insufficient key–value cache compression or ineffective knowledge transfer from the base model.

Our goal in this work is to develop a more efficient and sustainable alternative: to compose highly efficient language models directly from existing pre-trained Transformers, avoiding the cost of full pre-training while retaining performance. Our approach, called Zebra-Llama, introduces a family of hybrid models (1B, 3B, and 8B) built on two complementary components: Multi-head Latent Attention (MLA) [6], a low-rank attention mechanism that compresses memory usage without sacrificing quality under moderate compression; and Mamba2, a state-space model that eliminates KV caches entirely but performs poorly when used alone [5]. Specifically, we first initialize pure MLA and pure Mamba2 models from a pre-trained Transformer via a refined weight-mapping procedure. We then use Intermediate Layer Distillation (ILD) to align their internal representations with those of the original Transformer model, ensuring strong initialization. Finally, we strategically compose hybrid architectures from the refined MLA and Mamba2 variants using a sensitivity-aware strategy called SMART (Sensitivity Measure-Aware Replacement of Transformer layers) to select where each component is most effective. This process results in highly efficient models that retain Transformer-level quality with drastically reduced memory and compute requirements.
Our Zebra-Llama family of 1B, 3B, and 8B hybrid models achieves 25×, 50×, and 36× KV cache compression relative to their respective base Transformer models, while maintaining 100%, 100%, and >97% of the base model’s average zero-shot performance on the LM Harness evaluation benchmark. Our models also perform competitively on few-shot tasks. Notably, Zebra-Llama-8B improves the average few-shot accuracy over Minitron-8B by 7%, despite using 8× fewer training tokens, a >12× smaller KV cache, and a smaller teacher model (8B vs. 15B). Additionally, Zebra-Llama exhibits high inference speed: compared to the existing work MambaInLlama, it achieves 2.6×–3.8× higher throughput for sequences up to 32k tokens. This work introduces a practical route for building efficient, customizable LLMs from existing models. Our key contributions are:

• Architecture: We propose a hybrid model combining MLA and Mamba2 layers, replacing classical Transformer blocks to reduce memory usage and address the quadratic bottleneck of attention.
• Training: We develop an efficient post-training pipeline including refined initialization, Intermediate Layer Distillation, and SMART layer selection for hybrid model composition.
• Empirical Results: Our models match or exceed Transformer-level performance with drastically reduced KV cache and significantly improved inference throughput.

2 Related Work

Hybrid Models. Prior work on hybrid models can broadly be categorized into two groups based on their training strategy: pre-training-based and post-training-based approaches. Pre-training-based methods, such as Jamba [7], Hymba [8], Samba [3] and Mamba-2-Hybrid [9], interleave heterogeneous hybrid layers during full-scale model training, allowing the development of hybrid models from scratch. While these models achieve strong performance,
their training cost remains high, limiting accessibility and sustainability. In contrast, post-training approaches insert efficient modules into pre-trained Transformers, often leveraging knowledge distillation to transfer capabilities without full re-training; these include MambaInLLaMA [5], MOHAWK [10], and Llamba [11]. Our work focuses on the post-training setting and pushes it further by targeting extreme efficiency—both in terms of training tokens and runtime memory. We introduce a systematic method to build hybrid models with minimal training costs, making them practical for broad deployment, especially where hardware resources are critically constrained.

Efficient Post-training. Several recent approaches specifically target post-training efficiency through careful initialization, distillation, and model compression. For example, MambaInLLaMA [5] replaces most self-attention blocks of a pre-trained Transformer with linear RNN layers. Through the initialization of RNN layer parameters from the pre-trained Transformer’s weights, followed by distillation-based fine-tuning, the hybrid model achieves performance comparable to the teacher model. MOHAWK [10] introduces a multi-stage cross-architecture distillation strategy (aligning attention “mixing” matrices, hidden states, and outputs) to transfer a Transformer’s knowledge into a Mamba-based model using a fraction of the original training data.

Figure 2: Overview of our hybrid model composition pipeline. The process consists of three stages: (1) Weight Initialization – we initialize pure Mamba2 and MLA models from a pre-trained Transformer via structured mapping; (2) Refined Initialization through Intermediate Layer Distillation (ILD) – we refine both models by aligning their internal representations with the base model on a small dataset; and (3) SMART Layer Selection – we compose the final hybrid model by selecting MLA and Mamba2 layers based on sensitivity analysis.
Building on this, Llamba [11] scales the distilled recurrent architecture up to 8B parameters, attaining improved accuracy while markedly improving inference throughput and memory usage for deployment on edge devices. Orthogonally, X-EcoMLA [12] “upcycles” a pre-trained Transformer’s attention into multi-head latent attention modules, jointly compressing key–value caches significantly with teacher-guided fine-tuning to preserve accuracy. Meanwhile, Minitron [13] compresses a 15B Nemotron Transformer by pruning its depth, width, and attention heads, then retrains on only about 2–3% of the original data via distillation; this yields 8B and 4B models that rival larger models’ performance without full re-training. Our method offers significant advantages over previous distillation techniques like Minitron, MambaInLlama, X-EcoMLA, and Llamba, which have each faced limitations in KV cache compression, training efficiency, or maintaining the base models’ performance.

3 Methodology

Our methodology focuses on designing and training a hybrid model architecture that achieves strong performance with significantly enhanced efficiency. The overall approach consists of two key stages: (1) constructing an extremely efficient hybrid model, and (2) applying a lightweight yet effective training pipeline. An overview of the process is provided in Figure 2.

3.1 Composing an Extremely Efficient Hybrid Model

To compose our hybrid model, we combine two complementary components: Mamba2 [14] and MLA [6] blocks. Each of these blocks contributes differently to efficiency and performance:

• Mamba2 blocks are based on SSMs with zero KV cache usage, making them ideal for long-context or memory-constrained settings. However, they often underperform when used exclusively [9].
• MLA blocks
compress the standard attention computation to reduce KV cache requirements while maintaining high performance. Yet, excessive compression can lead to noticeable performance drops [12].

Our hybrid architecture interleaves Mamba2 and MLA layers to strike a balance between minimal memory usage and strong predictive performance. The composition process consists of two key stages: (i) we first construct a pure Mamba2 model and a pure MLA model by replacing all attention blocks in the base Transformer with Mamba2 and MLA blocks, respectively, each initialized from the original pre-trained weights; (ii) we then apply intermediate layer distillation (ILD) to align the outputs of each layer in the Mamba2 and MLA models with those of the corresponding layers in the base model, thereby preserving the original model’s knowledge during the transition. Following this refined initialization, we compose the final hybrid model using our sensitivity measure-aware replacement of transformer layers (SMART) layer selection strategy, which selects the optimal placement configurations of Mamba2 and MLA layers based on layer-wise sensitivity analysis.

3.1.1 Refined Initialization

To initialize the MLA and Mamba2 blocks in pure MLA and pure Mamba2 models derived from a pre-trained base model, we introduce an enhanced initialization strategy that goes beyond conventional weight transfer.

Background: Multi-Head Attention. Given an input hidden representation $H \in \mathbb{R}^{l \times d}$, MHA projects it into query, key, and value matrices using learned weights:
$$Q = H W_Q, \quad K = H W_K, \quad V = H W_V, \tag{1}$$
where $W_Q, W_K, W_V \in \mathbb{R}^{d \times n_h d_h}$; here $n_h$ and $d_h$ denote the number of heads and the per-head dimension, $d$ is the hidden dimension, and $l$ represents the sequence length. The attention operation computes:
$$A = \mathrm{Softmax}\left(QK^\top/\sqrt{d}\right), \quad O = A V W_O, \tag{2}$$
with $W_O \in \mathbb{R}^{d \times d}$. During inference, MHA requires caching $K$ and $V$ for all past tokens, incurring a memory cost of $2 n_h d_h l$.
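To make the memory accounting concrete, here is a minimal NumPy sketch of the MHA computation in Eqs. (1)–(2) and its KV-cache cost; the dimensions are illustrative stand-ins, not the paper's model sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: sequence length l, hidden size d, n_h heads of size d_h.
l, d, n_h, d_h = 16, 64, 4, 16          # here n_h * d_h == d

H = rng.standard_normal((l, d))
W_Q, W_K, W_V = (rng.standard_normal((d, n_h * d_h)) * 0.1 for _ in range(3))
W_O = rng.standard_normal((d, d)) * 0.1

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Eq. (1): Q = H W_Q, K = H W_K, V = H W_V, split into heads.
Q = (H @ W_Q).reshape(l, n_h, d_h).transpose(1, 0, 2)   # (n_h, l, d_h)
K = (H @ W_K).reshape(l, n_h, d_h).transpose(1, 0, 2)
V = (H @ W_V).reshape(l, n_h, d_h).transpose(1, 0, 2)

# Eq. (2): A = Softmax(Q K^T / sqrt(d)), O = A V W_O  (scaling by sqrt(d) as written above).
A = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))
O = (A @ V).transpose(1, 0, 2).reshape(l, d) @ W_O

# Inference must cache K and V for all past tokens: 2 * n_h * d_h * l scalars.
kv_cache_entries = 2 * n_h * d_h * l
print(O.shape, kv_cache_entries)
```

With these toy sizes the cache already holds 2 · 4 · 16 · 16 = 2048 scalars per layer; at real hidden sizes and context lengths this linear growth in $l$ is the bottleneck the paper targets.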
Background: Multi-Head Latent Attention. MLA [6] introduces a low-rank compression scheme to reduce memory usage. Instead of storing $K$ and $V$ directly, they are compressed into a latent vector:
$$C^{KV} = H W^{DKV}, \tag{3}$$
with $W^{DKV} \in \mathbb{R}^{d \times r_{kv}}$ and $r_{kv} \ll n_h d_h$. Keys and values are reconstructed via:
$$K^C = C^{KV} W^{UK}, \quad V^C = C^{KV} W^{UV}, \tag{4}$$
where $W^{UK}$ and $W^{UV}$ are up-projection matrices. MLA can also compress the query:
$$C^Q = H W^{DQ}, \quad Q^C = C^Q W^{UQ}, \tag{5}$$
where $C^Q \in \mathbb{R}^{l \times r_q}$ and $r_q \ll d$. To retain compatibility with RoPE, MLA decouples positional encoding using separate projections:
$$Q^R = \mathrm{RoPE}(C^Q W^{QR}), \quad K^R = \mathrm{RoPE}(H W^{KR}), \tag{6}$$
where $W^{QR} \in \mathbb{R}^{r_q \times n_h d_r}$ and $W^{KR} \in \mathbb{R}^{d \times d_r}$. The final queries and keys are then constructed as $Q = [Q^C; Q^R]$ and $K = [K^C; \mathrm{repeat}(K^R)]$, where $\mathrm{repeat}(K^R)$ denotes replication across heads. Overall, MLA reduces the KV-cache size for inference from $O(n_h d_h l)$ to $O((r_{kv} + d_r) l)$.

Initializing MLA from Pre-trained MHA. To construct an MLA-based hybrid model from a pre-trained Transformer, we upcycle its attention modules by reparameterizing them into low-rank latent attention. This conversion is initialized using a structured approach based on singular value decomposition (SVD), enabling knowledge transfer while minimizing performance loss [12]. The core MLA weights (i.e., the query, key, and value projections) are initialized using low-rank approximations derived from pre-trained MHA parameters. For query weights, we apply SVD on $W_Q$:
$$W_Q = U_q \Sigma_q V_q^\top, \tag{7}$$
and set $W^{DQ} = U_q$. The up-projection matrices $W^{UQ}$ and $W^{QR}$ are derived from $\Sigma_q V_q^\top$ by partitioning and reshaping into the appropriate query and RoPE dimensions. For key and value projections, we concatenate $W_K$ and $W_V$ and apply joint SVD:
$$[W_K, W_V] = U_{kv} \Sigma_{kv} V_{kv}^\top. \tag{8}$$
We then set $W^{DKV} = U_{kv}$, and derive $W^{UK}$ and $W^{UV}$ by partitioning and reshaping $\Sigma_{kv} V_{kv}^\top$. The shared RoPE key projection $W^{KR}$ is initialized from the average of $W_K$ across heads. We choose constant ranks $r_q$, $r_{kv}$ across all layers. Non-attention parameters such as the feedforward network and the output projection $W_O$ are directly copied from the pre-trained model. Additional details of MLA initialization are provided in Appendix A.1.

Initializing Mamba2 from Attention Representations. It has been shown in [5] that linear attention can be reinterpreted as a special case of SSMs. This connection enables the initialization of Mamba2 blocks from pre-trained attention-based Transformers. In particular, the linearized form of attention without softmax resembles a linear RNN update:
$$h_t = m_{t-1,t} h_{t-1} + K_t^\top V_t, \quad y_t = \frac{1}{\sqrt{D}} Q_t h_t, \tag{9}$$
which parallels the linear RNN structure:
$$h_t = A_t h_{t-1} + B_t x_t, \quad y_t = C_t h_t. \tag{10}$$
To bridge this connection, MambaInLLaMA [5] proposes to initialize the continuous-time Mamba2 SSM parameters ($A$, $B$, $C$) from the weights of the attention blocks. This includes discretizing the SSM over a learned sequence of sampling intervals $\Delta_t$ to match the temporal dynamics of attention. Specifically, $K_t^\top V_t$ in attention is mapped to $B_t x_t$ in Mamba2; $Q_t$ in attention plays the role of $C_t$; and the memory coefficient $m_{t-1,t}$ corresponds to $A_t$ in the recurrent update. Additional details of Mamba2 initialization are provided in Appendix A.2.

Refined Initialization through Intermediate Layer Distillation (ILD). After initializing the parameters of the MLA and Mamba2 layers from the pre-trained Transformer, we further refine their weights through a lightweight ILD procedure on a small portion of the training data, akin to the second phase of MOHAWK [10]. This focused training aligns the internal representations of the MLA and Mamba2 layers with those of the pre-trained Transformer layers, ensuring a smoother start for downstream optimization and better preservation of the knowledge embedded in the original model.
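The SVD-based upcycling of Eqs. (7)–(8) can be sketched as follows. This is a simplified illustration with invented dimensions; it omits the RoPE decoupling and the exact partitioning/reshaping of the up-projections described above:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_h, d_h = 64, 4, 16          # illustrative sizes, not the paper's
r_q, r_kv = 32, 32               # latent ranks, r << n_h * d_h

W_Q = rng.standard_normal((d, n_h * d_h))
W_K = rng.standard_normal((d, n_h * d_h))
W_V = rng.standard_normal((d, n_h * d_h))

# Eq. (7): SVD of W_Q; keep the top r_q components.
U_q, S_q, Vt_q = np.linalg.svd(W_Q, full_matrices=False)
W_DQ = U_q[:, :r_q]                         # down-projection (d, r_q)
W_UQ = np.diag(S_q[:r_q]) @ Vt_q[:r_q]      # up-projection  (r_q, n_h*d_h)

# Eq. (8): joint SVD of the concatenated key/value projections.
U_kv, S_kv, Vt_kv = np.linalg.svd(np.concatenate([W_K, W_V], axis=1),
                                  full_matrices=False)
W_DKV = U_kv[:, :r_kv]                      # shared latent projection (d, r_kv)
SV = np.diag(S_kv[:r_kv]) @ Vt_kv[:r_kv]    # (r_kv, 2*n_h*d_h)
W_UK, W_UV = SV[:, :n_h * d_h], SV[:, n_h * d_h:]

# Relative rank-r_q reconstruction error of W_Q; the truncated SVD is the
# best rank-r_q approximation in Frobenius norm.
err = np.linalg.norm(W_Q - W_DQ @ W_UQ) / np.linalg.norm(W_Q)
print(err)
```

Because the truncated SVD is the optimal low-rank approximation, this initialization keeps the latent projections as close as possible to the pre-trained attention weights before any distillation is applied.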
To perform ILD, we minimize the mean squared error (MSE) between the outputs of each pre-trained Transformer attention layer $h^{\mathrm{Attn}}_l$ and the corresponding outputs of the MLA ($h^{\mathrm{MLA}}_l$) and Mamba2 ($h^{\mathrm{M2}}_l$) layers. The ILD losses for Mamba2 and MLA are defined as:
$$\mathcal{L}^{\mathrm{Mamba2}}_{\mathrm{ILD}} = \sum_{l \in \{1,2,\ldots,L\}} \left\| h^{\mathrm{Attn}}_l - h^{\mathrm{M2}}_l \right\|_2^2, \qquad \mathcal{L}^{\mathrm{MLA}}_{\mathrm{ILD}} = \sum_{l \in \{1,2,\ldots,L\}} \left\| h^{\mathrm{Attn}}_l - h^{\mathrm{MLA}}_l \right\|_2^2, \tag{11}$$
where $L$ denotes the total number of layers and $h_l$ refers to the output of the $l$-th layer. Unlike MOHAWK, where the weights of the MLP within each layer remain frozen during distillation, we allow training of all parameters in the Mamba2 and MLA layers. The refined initialization resulting from this ILD has proven to be crucial for enhancing the subsequent end-to-end distillation process, as evidenced in Section 4.3.3.

3.1.2 SMART: Sensitivity Measure-Aware Replacement of Transformer Layers

Figure 3: Layer sensitivity scores for Llama3.2-1B using 4096 samples from the validation dataset. Red markers indicate the MLA layer indices selected by our SMART strategy with $N = 4$.

The final stage of our initialization process is to compose our hybrid model from the full MLA and full Mamba2 models. Let $N$ denote the desired total number of MLA layers. The set of layers assigned as MLA is represented by $\mathcal{L}_{\mathrm{MLA}} \subseteq \{0, 1, \ldots, L-1\}$. To minimize the performance gap between the hybrid and fully attention-based models, we introduce a Sensitivity Measure-Aware Replacement of Transformer Layers (SMART) strategy,
which leverages empirical sensitivity analysis to guide the layer assignment. To measure the sensitivity of each layer, we begin with the full Mamba2 model after ILD. First, we compute the KL divergence between the full attention-based teacher model and the student model where all layers are Mamba2. Then, for each layer $i$, we construct a variant of the student model in which only the $i$-th layer is replaced with MLA while the rest remain Mamba2, and measure the KL divergence against the teacher. The sensitivity gain $s_i$ for layer $i$ is defined as the reduction in KL divergence relative to the full Mamba2 model. The sensitivity score is formally computed as:
$$s_i = \sum_{t=1}^{T} \Big( \mathrm{KL}\big[p(\cdot \mid y_{1:t}, x, \theta_T) \,\big\|\, p(\cdot \mid y_{1:t}, x, \theta)\big] - \mathrm{KL}\big[p(\cdot \mid y_{1:t}, x, \theta_T) \,\big\|\, p(\cdot \mid y_{1:t}, x, \theta'_i)\big] \Big), \tag{12}$$
where $\theta_T$ and $\theta$ denote the parameters of the teacher and the full Mamba2 model, and $\theta'_i$ corresponds to the variant with MLA inserted at layer $i$. In this equation, $y_{1:t}$ refers to the model predictions up to time step $t$, $x$ is the input sequence, and $T$ is the total number of decoding time steps. A higher score indicates that the $i$-th layer plays a more critical role in aligning with the teacher and should thus be prioritized for MLA replacement due to its higher representational capacity. Figure 3 shows an example sensitivity profile for the Zebra-Llama 1B model after refined initialization, where we observe that the earliest and latest layers tend to exhibit higher sensitivity, while middle layers are typically less critical. While it may seem intuitive to simply select the top $N$ layers with the highest sensitivity scores, this strategy often leads to suboptimal layer placements—especially if it results in large gaps between MLA layers. To enforce a more structured distribution and preserve the most sensitive positions, we adopt the following three-step procedure:

• Terminal Preservation: We divide the total $L$ layers into $N$ roughly uniform partitions, and take the first and last $\lfloor L/N \rfloor$ layers as the first and last partitions.
From these two partitions, we select the layer with the highest sensitivity score as the first and last MLA layers, denoted L_1^{MLA} and L_N^{MLA}, respectively. This ensures that the most sensitive edge layers are preserved.

• Near-Uniform Intermediate Distribution: Given the range [L_1^{MLA} + 1, L_N^{MLA} − 1], we aim to place the remaining N − 2 MLA layers such that the gaps between consecutive MLA layers are as uniform as possible. We constrain the gap between adjacent MLA layers to lie within the range [\lfloor (L_N^{MLA} − L_1^{MLA} − N + 1)/(N − 1) \rfloor, \lceil (L_N^{MLA} − L_1^{MLA} − N + 1)/(N − 1) \rceil]. We enumerate all valid configurations \{C_j\}_{j=1}^{M} that meet this spacing constraint, where each candidate C_j defines a possible set of intermediate MLA layer indices.

• Maximal Sensitivity Scores: For each valid configuration, we compute the total sensitivity score and choose the one with the highest cumulative score: C^* = \arg\max_{C_j} \sum_{i \in C_j} s_i.

Additional details and examples of this layer selection procedure are provided in Appendix B.

3.2 Efficient Training

After initialization and model composition, we apply end-to-end knowledge distillation followed by DPO training to incrementally improve model accuracy and efficiency.

End-to-End Knowledge Distillation This stage performs end-to-end distillation with supervised fine-tuning (SFT):

L_\theta = \sum_{t=1}^{T} \mathrm{KL}\left[ p(\cdot \mid y_{1:t}, x, \theta_T) \,\|\, p(\cdot \mid y_{1:t}, x, \theta) \right],   (13)

where \theta and \theta_T are the parameters of the student model and the teacher model, respectively. This distillation step is crucial for transferring the rich, pre-trained knowledge of the teacher model.

Direct Preference Optimization (DPO) In the final training stage, we perform DPO, a binary cross-entropy loss that adjusts the preferences of the student model. We set the distilled student model itself as the reference model, as this makes training much more stable.

4 Experiments and Results

4.1 Training Setup

All of our Zebra-Llama models are distilled from the Llama family: Llama3.2-1B-Instruct, Llama3.2-3B-Instruct, and Llama3.1-8B-Instruct. For ILD and SFT, we use the same dataset as in [5], which includes multiple public datasets such as OpenHermes-2.5 [15], GenQA [16], and Infinity-Instruct [17], with a total of 6.8 billion tokens. The dataset is split 20%/80% between ILD and SFT, respectively. We repeat the same training data for more than one epoch to match the desired token budget. For DPO preference tuning, we adopt three datasets: Llama3-ultrafeedback [18], orca_dpo_pairs [19], and ultrafeedback_binarized [20]. All models were trained on a single node equipped with eight AMD MI300 GPUs. Our training details are provided in Appendix A.4.

4.2 Performance Evaluation

Evaluation Tasks We adopt the LM Harness Eval benchmark [21] to perform zero-shot and few-shot evaluations on language understanding tasks, which include ARC-Challenge (ARC) [22], ARC-Easy (ARE) [22], HellaSwag (HS) [23], MMLU (MM) [24], OpenBookQA (OB) [25], PIQA (PI) [26], RACE (RA) [27], and WinoGrande (WG) [28]. We provide more details on model evaluation in Appendix A.5.

Model and Setting Teacher Tokens Size KV Size ARC ARE HS MM OB PI RA WG Avg.
Llama3.2-1B-Inst - 9T 1.24B 100% 37.97 63.30 60.65 46.05 34.80 74.32 38.18 59.67 51.87
MambaInLlama-1B-50%* 8B 7B 1.27B 50% 40.78 65.57 61.4 40.17 39.2 74.32 38.28 58.72 52.31
X-EcoMLA-1B (r_kv = 64) 8B 7B 1.23B 9.37% 40.02 67.17 58.4 38.53 37.8 73.83 39.43 60.93 52.01
Llamba-1B 1B+70B 8B 1.41B 0% 37.12 65.36 61.31 38.11 36.8 73.78 37.61 60.62 51.34
Zebra-Llama-1B, 8MLA-8M2 8B 7B 1.27B 7.81% 42.83 67.3 60.59 38.99 41.6 73.29 38.56 61.33 53.06
Zebra-Llama-1B, 6MLA-10M2 8B 7B 1.28B 5.86% 43.94 67.51 60.46 38.21 41.2 73.23 37.61 61.17 52.92
Zebra-Llama-1B, 4MLA-12M2 8B 7B 1.28B 3.91% 42.15 67 58.86 37.96 40.4 73.18 37.89 58.88 52.04
Llama3.2-3B-Inst - 9T 3.21B 100% 46.08 67.93 70.38 60.34 36.4 75.79 40.86 67.25 58.13
MambaInLlama-3B-50% 70B 20B 3.35B 50% 54.1 80.26 74.45 52.47 42.4 77.69 43.44 67.32 61.52
X-EcoMLA-3B (r_kv = 816) 3B 7B 3.21B 42.97% 48.38 70.37 72.41 57.51 38.2 76.28 46.41 68.11 59.71
X-EcoMLA-3B (r_kv = 128)* 8B 7B 3.21B 9.37% 52.05 75.38 70.95 53.2 40.8 77.09 44.69 66.85 60.13
Llamba-3B 3B+70B 10B 3.66B 0% 45.65 73.78 73.31 52.32 42.4 78.02 40.1 70.01 59.45
Zebra-Llama-3B, 14MLA-14M2 8B 9B 3.27B 4.69% 52.65 76.35 72.43 51.97 44.4 76.99 46.99 67.8 61.2
Zebra-Llama-3B, 8MLA-20M2 8B 9B 3.36B 2.68% 51.96 75.97 72.38 48.16 42.8 77.64 43.54 65.67 59.77
Zebra-Llama-3B, 6MLA-22M2 8B 9B 3.39B 2.01% 49.74 74.62
71.11 44.21 43.8 77.37 42.11 65.43 58.55
Llama3.1-8B-Inst - 15T 8.03B 100% 54.86 79.55 79.23 68.13 43 80.9 44.69 73.88 65.53
MambaInLlama-8B-50% 70B 20B 8.3B 50% 59.73 84.81 79.69 59.74 44 80.03 46.12 74.11 66.03
Minitron-8B 15B† 94B 8.3B 66.67% 52.73 79.5 77.4 62.95 45.2 81.39 39.71 72.69 63.95
X-EcoMLA-8B (r_kv = 128)* 8B 7B 8.03B 9.37% 56.57 79.04 77.38 58.6 42.8 79.6 48.33 70.96 64.16
Llamba-8B 8B+70B 12B 8.32B 0% 53.84 79.71 76.25 60.29 44 79.16 40.38 72.77 63.3
Zebra-Llama-8B, 16MLA-16M2 8B 11B 8.19B 5.47% 58.96 79.92 77.73 57.18 44.6 80.2 48.71 72.38 64.96
Zebra-Llama-8B, 8MLA-24M2 8B 11B 8.38B 2.73% 57.17 79.46 77.43 55.4 44.6 79.71 44.31 71.43 63.69

Table 1: Zero-shot evaluation on the LM Harness Eval benchmark across eight tasks: ARC-Challenge (ARC), ARC-Easy (ARE), HellaSwag (HS), MMLU, OpenBookQA (OB), PIQA (PI), RACE (RA), and WinoGrande (WG). All teacher models are Llama3.1-8B/70B or Llama3.2-1B/3B except for Minitron (whose teacher is †Nemotron-15B). *The X-EcoMLA results are reproduced by ourselves.

Zero-shot Results The results of our zero-shot evaluations are summarized in Table 1. We compare our Zebra-Llama with the base Llama models and other distillation-based baselines: MambaInLlama (hybrid Mamba2-GQA) [5], X-EcoMLA (pure MLA) [12], Llamba (pure Mamba2) [11], and Minitron (pruning) [13]. Besides the evaluation results, we list the teacher model size, number of training tokens, student model size (number of parameters), and the KV cache size relative to the base Llama models. For the MLA layers in Zebra-Llama, we set r_kv = 128 for the 1B and 3B models and r_kv = 160 for the 8B models. For each model size, we tested various combinations of MLA and Mamba2 layers. As shown in Table 1, compared to the base Llama models, our Zebra-Llama achieves extreme KV cache compression without a noticeable performance drop.
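The KV Size column can be reproduced from the architecture configuration. A minimal sketch (assuming the standard Llama3.2-1B geometry of 16 layers, 8 KV heads, and head dimension 64; per the design above, an MLA layer caches only the r_kv-dimensional latent plus the d_r-dimensional shared RoPE key per token, while Mamba2 layers cache nothing):

```python
def kv_cache_ratio(n_layers, n_kv_heads, d_head, n_mla, r_kv, d_r):
    """Per-token KV-cache size of a hybrid MLA/Mamba2 model
    relative to the original all-attention base model."""
    base = n_layers * 2 * n_kv_heads * d_head  # full K and V per layer
    hybrid = n_mla * (r_kv + d_r)              # latent c_KV + shared RoPE key
    return hybrid / base

# Llama3.2-1B geometry with r_kv = 128, d_r = 32 (Table 5 values)
for n_mla in (8, 6, 4):
    print(f"{n_mla} MLA layers: {kv_cache_ratio(16, 8, 64, n_mla, 128, 32):.2%}")
# → 7.81%, 5.86%, 3.91%
```

These ratios match the KV Size entries reported for the 1B models in Table 1.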
For the 1B and 3B models, we achieve 3.91% (25.6× compression) and 2.01% (49.78× compression) KV cache sizes with even higher performance than the base Llama models. For the 8B models, we reach 5.47% (18.3× compression) and 2.73% (36.6× compression) KV cache sizes with only 0.87% and 2.8% performance drops. Note that for the 8B models, we achieve these compression ratios using only same-size teachers. Furthermore, our Zebra-Llama achieves a better balance between KV cache compression and performance with much more efficient training (i.e., fewer training tokens or a smaller teacher) than other distillation-based methods. For example, compared with the hybrid MambaInLlama, Zebra-Llama has similar performance with 12.79× and 24.88× smaller KV caches for the 1B and 3B models. For the 8B models, Zebra-Llama achieves 9.14× KV cache compression with only 1.6% performance degradation while using a much smaller teacher (8B) and fewer training tokens (11B). Similarly, our method has roughly the same performance as the pure-MLA X-EcoMLA models, but with up to 3.4× smaller KV cache. Moreover, compared to the pure Mamba model Llamba-8B, Zebra-Llama shows significantly better performance with both a smaller teacher and fewer training tokens, at minimal KV cache overhead. We further compare Zebra-Llama with state-of-the-art hybrid models trained from scratch in Appendix C (see Table 8). While prior methods like SAMBA and Mamba-2-Hybrid rely on 1.5T–3.5T training tokens, our Zebra-Llama models achieve competitive or superior performance using
only 7–11B tokens, representing a 214×–500× reduction in training data.

Few-shot Results In Table 2, we report the results of few-shot evaluations on the Zebra-Llama-8B models under 25-shot ARC-Challenge (ARC), 10-shot HellaSwag (HS), 5-shot WinoGrande (WG), 5-shot MMLU (MM), and 0-shot mc2 TruthfulQA (TQ). Among all models, our Zebra-Llama achieves the best performance with only 5.47% KV cache usage. The closest competitor is MambaInLlama-50%, but it is trained with 1.8× more tokens and has 9.14× more KV cache usage. Pure Mamba2 models such as MambaInLlama-8B-0% and Llamba-8B use no KV cache at all, but their performance is significantly worse than Zebra-Llama even with more training tokens and larger teachers.

Model and Setting KV % ARC HS MM WG TQ Avg.
Llama3.1-8B-Inst 100% 60.75 80.12 68.23 73.72 53.99 67.36
Minitron 66.7% 49.49 81.61 64.34 72.77 43.97 62.44
MambaInLlama-50% 50% 60.41 77.97 56.67 71.35 66.6 66.6
MambaInLlama-25% 25% 59.22 76.88 53.94 64.88 64.64 63.91
MambaInLlama-12.5% 12.5% 53.33 72.16 50.85 63.61 61.12 60.21
MambaInLlama-0% 0% 53.51 70.31 44.21 58.91 52.31 55.85
Llamba 0% 57.94 77.13 59.89 72.77 49.46 63.44
X-EcoMLA-8B (128)* 9.37% 59.64 76.9 58.73 71.43 60.86 65.51
Zebra-Llama, (16-16) 5.47% 60.49 78.29 58.84 71.98 64.28 66.78
Zebra-Llama, (8-24) 2.73% 60.41 76.11 55.06 71.11 61.01 64.74

Table 2: Few-shot evaluation on the LM Harness Eval benchmark across five tasks.

Figure 4: Inference throughput vs. output sequence length of various 8B-size models. We measure the throughput under batch size 1.

Throughput Evaluation In Figure 4, we demonstrate the inference efficiency of Zebra-Llama by evaluating the throughput of different models employing distinct attention replacement strategies for Llama-3.1-8B. All experiments are conducted on a single AMD MI300X GPU. We fix the batch size and prompt length to 1 and measure the inference throughput across various generation lengths.
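The batch-1 throughput measurement described above can be sketched as a simple timing loop (a hypothetical harness; `generate_one_token` stands in for a real model's decoding step, which is not part of the paper):

```python
import time

def measure_throughput(generate_one_token, gen_len):
    """Tokens per second for autoregressive decoding at batch size 1."""
    start = time.perf_counter()
    for _ in range(gen_len):
        generate_one_token()
    elapsed = time.perf_counter() - start
    return gen_len / elapsed

# Toy stand-in: a "model" whose per-token cost is constant, mimicking an
# SSM/MLA hybrid whose decode step does not grow with context length.
throughput = measure_throughput(lambda: sum(range(1000)), gen_len=2048)
print(f"{throughput:.0f} tokens/s")
```

In practice one would warm up the GPU, average over several runs, and sweep `gen_len` (e.g., up to 32k) to produce a curve like Figure 4.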
As shown, our Zebra-Llama significantly outperforms the throughput of the MambaInLlama model (hybrid GQA-Mamba2). For long generation lengths such as 32k, our method achieves throughput improvements of 2.6× and 3.79× with 8 MLA and 16 MLA layers compared to MambaInLlama with 8 GQA and 16 GQA layers, respectively. We note that MambaInLlama-8B-0% has slightly higher throughput than Zebra-Llama; however, it has much worse accuracy, as shown in Table 2.

4.3 Ablation Studies

In this section, we present a series of ablation studies aimed at justifying key design decisions in our approach. Specifically, we examine the impact of initialization strategies, the effectiveness of our SMART layer selection mechanism, the trade-offs between the number and size of MLA and Mamba2 layers, and the role of teacher model scaling.

4.3.1 Effect of Initialization Strategies

Figure 5: Performance of various initialization strategies after SFT for different model architectures. Generally, our proposed two-stage method achieves the highest average scores.

Figure 5 presents our assessment of various initialization methodologies through a comparison of three scenarios: random weight initialization without ILD, structured weight initialization without ILD, and the proposed structured weight initialization with ILD. The results highlight that both structured weight initialization and ILD significantly boost SFT performance, especially when used together. For Mamba layers, ILD is crucial for aligning their outputs with the original models due to architectural differences from attention layers, and it provides the primary performance uplift. As for MLA layers, structured initialization alone boosts accuracy significantly, offering a strong foundation that is further refined by ILD.

4.3.2 Impact of SMART Layer Selection

Selection Strategy MLA Indices Total sen. Avg. Score
Uniform #1 [0,4,8,12] 1787.9 48.84
Uniform #2 [3,7,11,15] 1055.8 49.72
Max score [0,1,2,14] 2672.5 48.98
Possible middle #1 [0,4,9,14] 2125.7 49.76
Possible middle #2 [0,5,9,14] 2297.5 49.95
SMART (ours) [0,5,10,14] 2500.2 50.15

Table 3: SFT results with different layer selection strategies on the Llama-3.2-1B model.

In Table 3, we demonstrate the benefits of our SMART layer selection strategy by comparing it with other layer selections for Zebra-Llama-1B with 4 MLA layers, using the configuration shown in Figure 3 as an example. First, by comparing the first two selections with our optimal solution, we conclude that the Terminal Preservation strategy (i.e., always selecting layers from the first and last few layers) is important for preserving model accuracy. Second, we also test the naive greedy selection method (the third row); the results show that uniformly distributing the MLA layers also contributes substantially to final performance, which verifies the effectiveness of our Near-Uniform Intermediate Distribution strategy. Third, analyzing the last three selections shows that, together with Terminal Preservation and Near-Uniform Intermediate Distribution, Maximal Sensitivity Scores is indeed a good indicator of the final accuracy. In short, these outcomes underscore how each of the three pillars, when appropriately combined, contributes critically to the SMART layer selection strategy's effectiveness.
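The three-step SMART procedure (given in full in Appendix B as Algorithm 2) can be sketched as follows; with L = 16, N = 4, and the sensitivity scores of Table 7 it reproduces the [0, 5, 10, 14] placement above. Note that a "gap" here counts the non-MLA layers between consecutive MLA layers:

```python
from itertools import combinations
from math import floor, ceil

def smart_select(scores, n_mla):
    """SMART: structured MLA layer selection via sensitivity scores."""
    L = len(scores)
    part = L // n_mla
    # Terminal Preservation: most sensitive layer in the first/last partition.
    first = max(range(part), key=lambda i: scores[i])
    last = max(range(L - part, L), key=lambda i: scores[i])
    # Near-Uniform Intermediate Distribution: allowable gap bounds.
    T = last - first - n_mla + 1                 # non-MLA layers to distribute
    g_min, g_max = floor(T / (n_mla - 1)), ceil(T / (n_mla - 1))
    best, best_score = None, float("-inf")
    for mid in combinations(range(first + 1, last), n_mla - 2):
        layers = [first, *mid, last]
        gaps = [b - a - 1 for a, b in zip(layers, layers[1:])]
        if all(g_min <= g <= g_max for g in gaps):
            # Maximal Sensitivity Scores over the intermediate layers.
            total = sum(scores[i] for i in mid)
            if total > best_score:
                best, best_score = layers, total
    return best

# Sensitivity scores from Table 7 (layers 0..15 of the 1B model).
table7 = [1185.06, 382.73, 480.68, 350.95, 196.03, 367.82, 250.45, 114.44,
          238.1, 120.56, 323.23, 228.9, 168.69, 233.87, 624.03, 361.47]
print(smart_select(table7, n_mla=4))  # → [0, 5, 10, 14]
```

The brute-force enumeration is cheap here because the gap constraint prunes almost all of the C(L, N−2) candidates.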
4.3.3 Trade-off between the Number of MLA Layers and r_kv

Figure 6: Trade-off between the KV rank r_kv and the number of MLA layers when fixing the total KV-cache size of a given model.

The KV-cache size in Zebra-Llama is determined by two factors: the number of MLA layers and the KV rank r_kv of each MLA layer. Figure 6 presents our findings from varying these two factors for our 1B model while maintaining a constant total KV-cache size. We observed that for significant KV-cache compression, such as 12.8× and 25.6×, optimal performance typically occurs with a moderate number of MLA layers, around six, coupled with an intermediate r_kv. Deviating from this balance hurts the model's performance. On one hand, increasing the number of MLA layers excessively, e.g., to 16 layers at 25.6× compression, forces the r_kv per layer to become very small (e.g., r_kv = 8), which significantly degrades performance. On the other hand, reducing the number of MLA layers too much also leads to a decline in performance, since the model then consists almost entirely of Mamba2 layers, which typically have a larger capacity gap with MHA layers than MLA layers do.

4.3.4 Scaling the Teacher

Effective knowledge transfer from a well-chosen, pre-trained teacher model is crucial for our method's success. As shown in Table 4, appropriately scaled teachers significantly enhance student model performance, though with diminishing returns as the teacher size grows. This is due to
the "capacity gap" phenomenon in knowledge distillation, where a smaller student model struggles to fully absorb and generalize the teacher's complex representations when the teacher far exceeds the student's capacity to mimic it [29, 30]. It is therefore vital to select a teacher model that is sufficiently powerful to provide rich knowledge yet not excessively larger than the student, balancing distillation efficacy with student-teacher compatibility. Investigating adaptive teacher scaling or multi-stage distillation offers promising future solutions to these capacity limitations.

5 Conclusion

In this work, we addressed the growing need for efficient LLMs by proposing a practical and scalable framework for composing hybrid models from existing pre-trained Transformers. Motivated by the cost and environmental impact of retraining large models for downstream use, we introduced Zebra-Llama, a family of 1B, 3B, and 8B hybrid models built using SSMs and MLA layers. We developed an effective initialization scheme and a post-training knowledge transfer pipeline that

Model and Setting Teacher Size ARC ARE HS MM OB PI RA WG Avg.
Llama3.2-1B-Inst - 37.97 63.30 60.65 46.05 34.80 74.32 38.18 59.67 51.87 Zebra-Llama-1B, 4MLA-12M2 1B 38.91 61.7 55.03 33.83 37.2 71.93 35.41 58.88 49.11 Zebra-Llama-1B, 4MLA-12M2 3B 39.51 62.79 57.61 37.94 38.20 72.52 36.94 56.59 50.26 Zebra-Llama-1B, 4MLA-12M2 8B 42.15 67 58.86 37.96 40.4 73.18 37.89 58.88 52.04 Zebra-Llama-1B, 4MLA-12M2 70B 43.17 69.57 57.77 39.45 38.80 72.80 38.09 59.83 52.44 Llama3.2-3B-Inst - 46.08 67.93 70.38 60.34 36.4 75.79 40.86 67.25 58.13 Zebra-Llama-3B, 8MLA-20M2 3B 45.48 69.28 69.04 47.69 40.80 74.81 42.01 63.38 56.56 Zebra-Llama-3B, 8MLA-20M2 8B 51.54 75.55 71.52 47.12 43.6 77.2 42.68 65.9 59.39 Zebra-Llama-3B, 8MLA-20M2 70B 51.96 77.23 69.46 48.32 43.4 76.01 43.35 65.19 59.37 Llama3.1-8B-Inst - 54.86 79.55 79.23 68.13 43 80.9 44.69 73.88 65.53 Zebra-Llama-8B, 8MLA-24M2 8B 56.48 78.79 76.84 53.72 44.4 79.43 44.31 70.64 63.08 Zebra-Llama-8B, 8MLA-24M2 70B 58.53 80.72 76.64 53.82 45.4 80.03 43.06 69.61 63.48 Table 4: Impact of scaling up teacher size on model performance trained on 7B tokens. Except for teacher size, we use the same training configurations for the same size of student models. enabled these hybrid models to inherit capabilities from larger teacher models with minimal additional training. Our approach significantly reduced memory requirements while preserving or exceeding the accuracy of strong baselines, demonstrating the viability of post-training hybridization as an efficient alternative to full retraining. Limitations and Future Work Our work opens several directions for future research. One limitation is the current focus on a single family of base models; in future work, we plan to explore hybridization strategies across diverse LLM architectures, including multimodal models and Mixture- of-Experts (MoE) frameworks. We also aim to scale our training pipeline to support larger models and extend our method to reasoning-intensive architectures. 
Another limitation lies in our reliance on strong teacher models for knowledge transfer. In some cases, a sufficiently large teacher may not be available for a given base model. Even when such teachers exist, using a separate teacher model for distillation adds to the memory and resource requirements during training. Addressing these issues—such as through teacher-free distillation or efficient self-distillation—remains an important avenue for
future work.

References

[1] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
[2] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156–5165. PMLR, 2020.
[3] Liliang Ren, Yang Liu, Yadong Lu, Yelong Shen, Chen Liang, and Weizhu Chen. Samba: Simple hybrid state space models for efficient unlimited context language modeling. arXiv preprint arXiv:2406.07522, 2024.
[4] Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, et al. Hymba: A hybrid-head architecture for small language models. arXiv preprint arXiv:2411.13676, 2024.
[5] Junxiong Wang, Daniele Paliotta, Avner May, Alexander M. Rush, and Tri Dao. The mamba in the llama: Distilling and accelerating hybrid models, 2025.
[6] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Deng, Chong Ruan, Damai Dai, Daya Guo, et al. DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024.
[7] Opher Lieber, Barak Lenz, Hofit Bata, Gal Cohen, Jhonathan Osin, Itay Dalmedigos, Erez Safahi, Shaked Meirom, Yonatan Belinkov, Shai Shalev-Shwartz, Omri Abend, Raz Alon, Tomer Asida, Amir Bergman, Roman Glozman, Michael Gokhman, Avashalom Manevich, Nir Ratner, Noam Rozen, Erez Shwartz, Mor Zusman, and Yoav Shoham. Jamba: A hybrid transformer-mamba language model, 2024.
[8] Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van Keirsbilck, Min-Hung Chen, Yoshi Suhara, Yingyan Lin, Jan Kautz, and Pavlo Molchanov. Hymba: A hybrid-head architecture for small language models, 2024.
[9] Roger Waleffe, Wonmin Byeon, Duncan Riach, Brandon Norick, Vijay Korthikanti, Tri Dao, Albert Gu, Ali Hatamizadeh, Sudhakar Singh, Deepak Narayanan, et al. An empirical study of mamba-based language models. arXiv preprint arXiv:2406.07887, 2024.
[10] Aviv Bick, Kevin Y Li, Eric P Xing, J Zico Kolter, and Albert Gu. Transformers to SSMs: Distilling quadratic knowledge to subquadratic models. arXiv preprint arXiv:2408.10189, 2024.
[11] Aviv Bick, Tobias Katsch, Nimit Sohoni, Arjun Desai, and Albert Gu. Llamba: Scaling distilled recurrent models for efficient language processing. arXiv preprint arXiv:2502.14458, 2025.
[12] Guihong Li, Mehdi Rezagholizadeh, Mingyu Yang, Vikram Appia, and Emad Barsoum. X-EcoMLA: Upcycling pre-trained attention into MLA for efficient and extreme KV compression. arXiv preprint arXiv:2503.11132, 2025.
[13] Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, Marcin Chochowski, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, and Pavlo Molchanov. Compact language models via pruning and knowledge distillation. Advances in Neural Information Processing Systems, 37:41076–41102, 2024.
[14] Tri Dao and Albert Gu. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. arXiv preprint arXiv:2405.21060, 2024.
[15] Teknium. OpenHermes 2.5: An open dataset of synthetic data for generalist LLM assistants, 2023.
[16] Jiuhai Chen, Rifaa Qadri, Yuxin Wen, Neel Jain, John Kirchenbauer, Tianyi Zhou, and Tom Goldstein. GenQA: Generating millions of instructions from a handful of prompts. arXiv preprint arXiv:2406.10323, 2024.
[17] Beijing Academy of Artificial Intelligence (BAAI). Infinity Instruct. https://huggingface.co/datasets/BAAI/Infinity-Instruct, 2024.
[18] Junxiong
Wang. Llama3 ultrafeedback-armorm dataset. https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm, 2024.
[19] Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". OpenOrca: An open dataset of GPT augmented FLAN reasoning traces. https://huggingface.co/Open-Orca/OpenOrca, 2023.
[20] Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. UltraFeedback: Boosting language models with high-quality feedback, 2023.
[21] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation. 2023.
[22] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
[23] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
[24] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[25] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
[26] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439, 2020.
[27] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy.
Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017.
[28] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
[29] Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, and Hassan Ghasemzadeh. Improved knowledge distillation via teacher assistant. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 5191–5198, 2020.
[30] Aref Jafari, Mehdi Rezagholizadeh, Pranav Sharma, and Ali Ghodsi. Annealing knowledge distillation. arXiv preprint arXiv:2104.07163, 2021.
[31] Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.

A More Experimental Details for Zebra-Llama

A.1 Structured MLA Initialization

Algorithm 1 Python-like pseudocode of the proposed SVD initialization for MLA.

# MHA weights: W_Q, W_K, W_V
# MLA weights: W_DQ, W_UQ, W_QR, W_DKV, W_UK, W_KR, W_UV

# Initialization of W_DQ, W_UQ, and W_QR
U_q, sigma_q, V_q = svd(W_Q)
W_DQ = U_q
W_UQR_bar = (sigma_q @ V_q).view(r_q, n_h, d_h)
W_UQ = W_UQR_bar[:, :, :d_qk].view(r_q, n_h * d_qk)
W_QR = W_UQR_bar[:, :, -d_r:].view(r_q, n_h * d_r)

# Initialization of W_DKV, W_UK, W_KR, W_UV
U_kv, sigma_kv, V_kv = svd(torch.cat((W_K, W_V), -1))
W_DKV = U_kv
W_K_avg = W_K.view(d, n_kv, d_h).mean(1)
W_KR = W_K_avg[:, -d_r:]
W_UKV = sigma_kv @ V_kv
W_UK_bar = W_UKV[:, :d_h * n_kv].view(r_kv, n_kv, d_h)
W_UK = W_UK_bar[:, :, :d_qk].view(r_kv, n_kv * d_qk)
W_UV = W_UKV[:, d_h * n_kv:]  # last d_h * n_kv columns correspond to V

Our SVD-based MLA layer initialization follows the methodology
outlined in X-EcoMLA [12] for Multi-Head Attention (MHA). However, for Grouped-Query Attention (GQA) models like the Llama 3 series, our approach diverges slightly by keeping the number of key/value heads from the base model for MLA, while X-EcoMLA forces the number of key/value heads to equal the number of query heads. With this modification, we observe a negligible performance difference with slightly fewer parameters.

Essentially, the matrices from the base MHA/GQA module can be expressed as W_Q ∈ R^{d×n_h d_h} and W_K, W_V ∈ R^{d×n_kv d_h}, where d denotes the hidden dimension, n_h the number of attention heads, and n_kv the number of key/value heads. For MHA we have n_h = n_kv, while for GQA we have n_h > n_kv. The matrices we need to initialize in MLA can be expressed as W_DKV ∈ R^{d×r_kv}, W_UK ∈ R^{r_kv×n_kv d_qk}, W_UV ∈ R^{r_kv×n_kv d_v}, W_DQ ∈ R^{d×r_q}, W_UQ ∈ R^{r_q×n_h d_qk}, W_KR ∈ R^{d×d_r}, and W_QR ∈ R^{r_q×n_h d_r}, where r_kv represents the latent dimension for key/value, d_qk the head dimension for query/key, d_v the head dimension for value, r_q the latent dimension for query, and d_r the dimension of the RoPE embeddings. For all experiments, we keep d_v = d_h and d_qk + d_r = d_h.

Initialization of W_DQ, W_UQ, and W_QR  Given that query compression in MLA can be viewed as a low-rank approximation of attention layers, we first perform SVD on the pre-trained weight matrix W_Q:

W_Q = U_q Σ_q V_q^T,   (14)

where U_q ∈ R^{d×r_q}, Σ_q ∈ R^{r_q×r_q}, and V_q ∈ R^{d_h n_h×r_q}. For constructing the up-projection matrices, we reshape the product Σ_q V_q^T to form W̄_UQR ∈ R^{r_q×n_h×d_h} and subsequently split this tensor along the last dimension into W_UQ, containing the first d_qk elements, and W_QR, containing the remaining d_r elements. The down-projection matrix W_DQ is directly assigned as U_q. This initial assignment is formulated as:

W_DQ = U_q,  W_UQ = reshape(W̄_UQR[:, :, :d_qk]),  W_QR = reshape(W̄_UQR[:, :, −d_r:]),   (15)

where the function reshape(·) merges the last two dimensions of the specified tensor.
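A minimal NumPy sketch of the query-path initialization in Eqs. (14)-(15); the toy sizes are our own, the actual models use the dimensions in Table 5. With a full-rank r_q the factorization is exact, so re-assembling W_UQ and W_QR through W_DQ recovers W_Q; a smaller r_q would give the best rank-r_q approximation instead:

```python
import numpy as np

d, n_h, d_h = 32, 4, 8      # toy sizes (hypothetical)
d_qk, d_r = 6, 2            # per-head split with d_qk + d_r = d_h
r_q = d                     # full rank, so the factorization is exact

rng = np.random.default_rng(0)
W_Q = rng.standard_normal((d, n_h * d_h))

# Eq. (14): SVD of the pre-trained query projection.
U, s, Vt = np.linalg.svd(W_Q, full_matrices=False)
U_q = U[:, :r_q]                                    # W_DQ (down-projection)
W_UQR = (np.diag(s[:r_q]) @ Vt[:r_q]).reshape(r_q, n_h, d_h)

# Eq. (15): split each head into content (d_qk) and RoPE (d_r) parts.
W_UQ = W_UQR[:, :, :d_qk].reshape(r_q, n_h * d_qk)
W_QR = W_UQR[:, :, -d_r:].reshape(r_q, n_h * d_r)

# Sanity check: the split is lossless, so re-assembling recovers W_Q.
W_back = np.concatenate([W_UQ.reshape(r_q, n_h, d_qk),
                         W_QR.reshape(r_q, n_h, d_r)], axis=-1)
W_rec = U_q @ W_back.reshape(r_q, n_h * d_h)
print(np.allclose(W_rec, W_Q))  # → True
```

The same joint-SVD pattern applies to the key/value path described next, with the extra step of slicing off the RoPE dimensions.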
Method MLA Indices r_kv r_q d_qk d_r Model Size
Zebra-Llama-1B, 8MLA8M2 [0,2,4,8,10,12,14] 128 1344 32 32 1.27B
Zebra-Llama-1B, 6MLA10M2 [0,2,5,8,11,14] 128 1344 32 32 1.28B
Zebra-Llama-1B, 4MLA12M2 [0,5,10,14] 128 1344 32 32 1.28B
Zebra-Llama-3B, 14MLA14M2 [0,2,4,6,8,10,12,14,16,18,20,22,24,27] 128 1536 64 64 3.27B
Zebra-Llama-3B, 8MLA20M2 [0,4,8,12,16,20,24,27] 128 1536 64 64 3.36B
Zebra-Llama-3B, 6MLA22M2 [0,5,11,17,22,27] 128 1536 64 64 3.39B
Zebra-Llama-8B, 16MLA16M2 [0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,31] 160 2048 64 64 8.19B
Zebra-Llama-8B, 8MLA24M2 [0,4,8,13,18,23,27,31] 160 2048 64 64 8.38B

Table 5: Configurations of the Zebra-Llama model architectures.

Initialization of W_DKV, W_UK, W_UV, and W_KR  The initialization of the MLA weights associated with keys and values is more involved because of the decoupled RoPE mechanism. First, a joint SVD is performed on the concatenation of W_K and W_V:

[W_K, W_V] = U_kv Σ_kv V_kv^T,   (16)

where U_kv ∈ R^{d×r_kv}, Σ_kv ∈ R^{r_kv×r_kv}, and V_kv ∈ R^{2 d_h n_kv×r_kv}. The down-projection matrix W_DKV is set directly to U_kv. To derive the up-projection matrices W_UV and W_UK, we first set W_UKV = Σ_kv V_kv^T. Since we have d_v = d_h, we simply extract the last d_h n_kv columns of W_UKV as W_UV. For W_UK, we first extract the first d_h n_kv columns of W_UKV and reshape them into W̄_UK ∈ R^{r_kv×n_kv×d_h}; we then select the first d_qk elements along the last dimension of W̄_UK and reshape back to obtain W_UK. In general, this process can be expressed as:

W_DKV = U_kv,  W_UV = W_UKV[:, n_kv d_h:],  W_UK = reshape(W̄_UK[:, :, :d_qk]).   (17)

In the final step, the initialization of the RoPE key embedding matrix W_KR requires a distinct approach, given that all attention heads in MLA utilize the identical RoPE key embedding. First,
the average key projection matrix W_K^avg ∈ R^{d×d_h} is computed over all attention heads. Then, the final d_r columns are extracted to initialize W_KR, formulated as follows:

W_KR = W_K^avg[:, −d_r:].   (18)

The detailed initialization algorithm can be found in Algorithm 1.

A.2 Structured Mamba2 Initialization

The structured initialization of Mamba2 layers follows precisely the approach of MambaInLlama [5]. As outlined in Section 3.1.1, excluding the softmax operation in attention yields a direct one-to-one mapping between B_t, x_t, and C_t in a linear RNN and K_t, V_t, and Q_t in attention layers. In the Mamba2 framework, B_t, x_t, and C_t for the continuous-time SSM are derived from the input o_t by passing it through an MLP followed by a 1D convolution layer. The MLP is replaced directly with pre-trained transformer layer weights as follows:

x_t = W_V o_t ∈ R^{b×n_kv d_h},  B_t = W_K o_t ∈ R^{b×n_kv d_h},  C_t = W_Q o_t ∈ R^{b×n_h d_h},   (19)

where b denotes the batch size. Subsequently, x, B, and C are processed through the 1D convolutional layer for temporal fusion before undergoing discretization in the Mamba SSM. It is important to highlight that for GQA and MQA scenarios where n_kv < n_h, x_t and B_t are replicated after the convolution to ensure that x_t, B_t, and C_t share identical dimensions. Other parameters, such as A and Δ_t, adhere to the original initialization procedure of the Mamba2 layers.

A.3 Model Structure Details

Detailed architectures of our 1B, 3B, and 8B models, as presented in Table 1, are comprehensively outlined in Table 5. This includes the MLA layer selections, MLA-specific parameters, and the overall model size. Layers not designated as MLA are Mamba2 layers, which follow the configuration used in MambaInLlama [5].
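A toy NumPy sketch of the attention-to-Mamba2 weight mapping in Eq. (19), including the GQA replication step (shapes are illustrative, the 1D convolution is omitted, and the real models use the Llama projection weights):

```python
import numpy as np

b, d, n_h, n_kv, d_h = 2, 16, 4, 2, 4   # toy GQA geometry (hypothetical)
rng = np.random.default_rng(0)
W_Q = rng.standard_normal((n_h * d_h, d))
W_K = rng.standard_normal((n_kv * d_h, d))
W_V = rng.standard_normal((n_kv * d_h, d))
o_t = rng.standard_normal((b, d))        # hidden state entering the layer

# Eq. (19): reuse attention projections as the Mamba2 input projections.
x_t = o_t @ W_V.T        # plays the role of V -> SSM input x
B_t = o_t @ W_K.T        # plays the role of K -> SSM matrix B
C_t = o_t @ W_Q.T        # plays the role of Q -> SSM matrix C

# GQA: replicate x_t and B_t across head groups so all three match C_t.
rep = n_h // n_kv
x_t = np.tile(x_t.reshape(b, n_kv, d_h), (1, rep, 1)).reshape(b, n_h * d_h)
B_t = np.tile(B_t.reshape(b, n_kv, d_h), (1, rep, 1)).reshape(b, n_h * d_h)
print(x_t.shape, B_t.shape, C_t.shape)  # all (2, 16)
```

The exact replication order (tile vs. repeat-interleave) is a modeling choice we assume here for illustration; only the matching dimensionality matters for the sketch.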
Stage Model Teacher Tokens BS LR Time (H)
ILD Zebra-Llama-1B-MLA Llama3.2-1B 1.36B 96 8e-5 1.8
ILD Zebra-Llama-1B-Mamba2 Llama3.2-1B 1.36B 96 8e-5 1.7
ILD Zebra-Llama-3B-MLA Llama3.2-3B 1.36B 96 8e-5 3.9
ILD Zebra-Llama-3B-Mamba2 Llama3.2-3B 1.36B 96 8e-5 4.4
ILD Zebra-Llama-8B-MLA Llama3.1-8B 1.36B 48 4e-5 9.2
ILD Zebra-Llama-8B-Mamba2 Llama3.1-8B 1.36B 48 4e-5 10.3
SFT Zebra-Llama-1B Llama3.1-8B 5.44B 192 8e-5 ≈13.7
SFT Zebra-Llama-3B Llama3.1-8B 7.44B 96 4e-5 ≈31.2
SFT Zebra-Llama-8B Llama3.1-8B 9.44B 64 4e-5 ≈78.1
DPO Zebra-Llama-1B N/A 0.2B 32 5e-7 ≈0.5
DPO Zebra-Llama-3B N/A 0.2B 32 5e-7 ≈1.2
DPO Zebra-Llama-8B N/A 0.2B 32 5e-7 ≈2.3

Table 6: Training hyper-parameters of the Zebra-Llama models.

Layer Index Sensitivity Score Layer Index Sensitivity Score
0 1185.06 8 238.1
1 382.73 9 120.56
2 480.68 10 323.23
3 350.95 11 228.9
4 196.03 12 168.69
5 367.82 13 233.87
6 250.45 14 624.03
7 114.44 15 361.47

Table 7: Actual sensitivity scores plotted in Figure 3.

A.4 Training Details

In Table 6, we present the training configurations for our Zebra-Llama series models, including the number of tokens, batch size, learning rate, and total training time. All experiments are conducted on a single node equipped with eight AMD MI300 GPUs, each featuring 192GB of memory. We apply a learning rate warmup over the first 1% of the training data, followed by cosine annealing. The models are optimized using AdamW with hyperparameters β = (0.9, 0.8). Additionally, all models process input sequences of length 2048 via sample packing.

A.5 Evaluation Details

We evaluate all models using the lm-evaluation-harness library (commit from the big-refactor branch), following the task-specific few-shot configurations defined by the Open LLM
https://arxiv.org/abs/2505.17272v1
Leaderboard. For zero-shot evaluation, we report performance across a broad suite of language understanding tasks: MMLU, HellaSwag, PIQA, ARC-Easy, ARC-Challenge, Winogrande, OpenBookQA, and RACE. Evaluations are performed using the command-line interface on ROCm-enabled devices with a batch size of 16. For few-shot runs targeting leaderboard comparisons, we use the officially recommended number of shots per task (e.g., 25-shot for ARC-Challenge, 10-shot for HellaSwag, 5-shot for Winogrande and MMLU, 0-shot mc2 for Truthful-QA (TQ)) [31]. We report average scores across tasks, following the same protocol as prior work.

B SMART Layer Selection Algorithms

We provide the pseudo code for SMART in Algorithm 2. In addition, we provide three examples of the layer selection process for our Zebra-Llama-1B models following SMART. The examples are based on the sensitivity analysis shown in Figure 3, whose actual values are listed in Table 7.

Algorithm 2 Pseudo code: SMART, Structured MLA Layer Selection via Sensitivity Scores
Require: Sensitivity scores {s_1, s_2, . . . , s_L}, number of MLA layers N
Ensure: Selected MLA layer indices {L^MLA_1, . . .
, L^MLA_N}
1: // Terminal Preservation
2: Divide the L layers into N equal partitions
3: L^MLA_1 ← highest-sensitivity layer in the first partition of layers i, i ∈ [1, L/N]
4: L^MLA_N ← highest-sensitivity layer in the last partition of layers i, i ∈ [L − L/N + 1, L]
5: // Near-Uniform Intermediate Distribution
6: // Define Valid Intermediate Layer Range
7: Let r_start ← L^MLA_1 + 1
8: Let r_end ← L^MLA_N − 1
9: Let R ← list of candidate intermediate layers in [r_start, r_end]
10: // Compute Allowable Gap Bounds
11: Let T ← L^MLA_N − L^MLA_1 − N + 1
12: Let g_min ← ⌊T / (N − 1)⌋
13: Let g_max ← ⌈T / (N − 1)⌉
14: // Enumerate All Valid Configurations
15: Initialize empty list of configurations C ← []
16: for all combinations of N − 2 layers from R do
17:   Sort the selected layers in ascending order to form C_j
18:   Let G_j ← list of gaps between consecutive layers in {L^MLA_1} ∪ C_j ∪ {L^MLA_N}
19:   if all gaps in G_j satisfy g_min ≤ gap ≤ g_max then
20:     Append C_j to C
21:   end if
22: end for
23: // Maximal Sensitivity Scores
24: C* ← argmax_{C_j} Σ_{i ∈ C_j} s_i
25: return {L^MLA_1} ∪ C* ∪ {L^MLA_N}

Example 1: Zebra-Llama-1B with N = 4
• Terminal Preservation: L^MLA_1 = 0, L^MLA_N = 14
• Near-Uniform Intermediate Distribution:
  – Define intermediate layer range: r_start = 1, r_end = 13, R = {1, 2, . . . , 13}.
  – Compute allowable gap bounds: T = 11, g_min = 3, g_max = 4.
  – Enumerate valid configurations: C = {{4, 9}, {5, 9}, {5, 10}}.
• Maximal Sensitivity Scores:
  – Calculate total sensitivity scores:
    * s_4 + s_9 = 316.59,
    * s_5 + s_9 = 488.38,
    * s_5 + s_10 = 691.05.
  – Layers with maximal score: C* = {5, 10}.
  – Return {0, 5, 10, 14}.

Example 2: Zebra-Llama-1B with N = 6
• Terminal Preservation: L^MLA_1 = 0, L^MLA_N = 14
• Near-Uniform Intermediate Distribution:
  – Define intermediate layer range: r_start = 1, r_end = 13, R = {1, 2, . . . , 13}.
  – Compute allowable gap bounds: T = 9, g_min = 1, g_max = 2.
  – Enumerate valid configurations: C = {{2, 5, 8, 11}, {3, 5, 8, 11}, {3, 6, 8, 11}, {3, 6, 9, 11}, {3, 6, 9, 12}}.
• Maximal Sensitivity Scores:
  – Calculate total sensitivity scores:
    * s_2 + s_5 + s_8 + s_11 = 1315.5,
    * s_3 + s_5 + s_8 + s_11 = 1185.8,
    * s_3 + s_6 + s_8 + s_11 = 1068.4,
    * s_3 + s_6 + s_9 + s_11 = 950.86,
    * s_3 + s_6 + s_9 + s_12 = 890.65.
  – Layers with maximal score: C* = {2, 5, 8, 11}.
  – Return {0, 2, 5, 8, 11, 14}.

Example 3: Zebra-Llama-1B with N = 8
• Terminal Preservation: L^MLA_1 = 0, L^MLA_N = 14
• Near-Uniform Intermediate Distribution:
  – Define intermediate layer range: r_start = 1, r_end = 13, R = {1, 2, . . . , 13}.
  – Compute allowable gap bounds: T = 7, g_min = 1, g_max = 1.
  – Enumerate valid configurations: C = {{2, 4, 6, 8, 10, 12}}.
• Only one valid configuration. Return {0, 2, 4, 6, 8, 10, 12, 14}.

C Comparison with Pre-training Methods

In Table 8, we include more comparisons of our Zebra-Llama with other hybrid models based on pre-training instead of distillation. It can be observed that our method achieves performance competitive with the pre-trained baselines while using 214×-500× fewer training tokens, demonstrating our advantage in training efficiency.

Model and Setting        | Tokens | Size  | ARC         | ARE         | HS          | MM    | OB        | PI          | RA    | WG
Mamba-2-Hybrid           | 3.5T   | 8.66B | -/47.7      | 77.23/-     | -/77.68     | 51.46 | -/42.8    | 79.65/-     | 39.71 | 71.27
SAMBA                    | 3.5T   | 1.7B  | 48.21/-     | 79.25/-     | 49.74/-     | 48.01 | 37.20/-   | 77.10/-     | -     | 72.93
SAMBA                    | 3.5T   | 1.3B  | -           | 58.21/-     | -/54.73     | -     | -         | 72.36/-     | -     | 55.72
Hymba                    | 1.5T   | 1.5B  | 42.32/-     | 74.54/-     | 53.55/-     | -     | -         | 76.66/-     | -     | 66.61
Zebra-Llama, 8MLA-8M2    | 7B     | 1.27B | 39.68/42.83 | 72.35/67.3  | 45.26/60.59 | 38.99 | 30.2/41.6 | 72.91/73.29 | 38.56 | 61.33
Zebra-Llama, 14MLA-14M2  | 9B     | 3.27B | 50/52.65    | 80.47/76.35 | 53.52/72.43 | 51.97 | 31.8/44.4 | 76.61/76.99 | 46.99 | 67.8
Zebra-Llama, 16MLA-16M2  | 11B    | 8.19B | 57.17/58.96 | 83.59/79.92 | 57.82/77.73 | 57.18 | 35.20/44.6 | 79.65/80.2 | 48.71 | 72.38

Table 8: Comparing our Zebra-Llama with state-of-the-art hybrid models that are trained from scratch. We report two accuracy scores for each model, i.e., accuracy and normalized accuracy (acc/acc_norm). Missing results ('-') are not reported in the original papers.
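Algorithm 2 and the three worked examples above can be reproduced with a short script. The following is a minimal sketch under our reading of the pseudo code, where a "gap" counts the layers strictly between two consecutive selections:

```python
from itertools import combinations
import math

def smart_select(scores, n_mla):
    """Sketch of Algorithm 2 (SMART). scores[i] is the sensitivity of layer i;
    n_mla is the number of MLA layers N. Returns the selected layer indices."""
    L = len(scores)
    part = L // n_mla
    # Terminal preservation: most sensitive layer in the first and last partition.
    first = max(range(part), key=scores.__getitem__)
    last = max(range(L - part, L), key=scores.__getitem__)
    # Allowable gap bounds for near-uniform spacing of the intermediates.
    T = last - first - n_mla + 1
    g_min = math.floor(T / (n_mla - 1))
    g_max = math.ceil(T / (n_mla - 1))
    best, best_score = None, float("-inf")
    # Enumerate all N-2 intermediate picks whose gaps lie inside [g_min, g_max],
    # keeping the configuration with maximal total sensitivity.
    for combo in combinations(range(first + 1, last), n_mla - 2):
        seq = (first, *combo, last)
        gaps = [b - a - 1 for a, b in zip(seq, seq[1:])]
        if all(g_min <= g <= g_max for g in gaps):
            total = sum(scores[i] for i in combo)
            if total > best_score:
                best, best_score = list(seq), total
    return best
```

With the sensitivity scores of Table 7, `smart_select(scores, 4)` returns `[0, 5, 10, 14]`, matching Example 1; the N = 6 and N = 8 examples are reproduced likewise.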
For instance, Zebra-Llama (16MLA-16M2) matches or exceeds the accuracy of Mamba-2-Hybrid (8.66B, 3.5T tokens) and SAMBA (1.7B, 3.5T tokens), while using only 11B tokens and smaller model sizes. This highlights the efficiency and practicality of our post-training hybrid composition strategy, as it achieves high performance with a fraction of the training budget required by scratch-trained models.

D More Inference Evaluations

Figure 7: Inference throughput vs. output sequence length of various 3B-size models. We measure the throughput under batch size 1.
Figure 8: Inference throughput vs. batch size of various 8B-size models. We measure the throughput with output sequence length 8192.
Figure 9: Peak memory usage during inference vs. batch size of various 8B-size models. We measure the memory footprint with output sequence length 8192.

In this section, we provide more inference evaluations on our Zebra-Llama 3B and 8B models. In Figure 7, we include the throughput evaluations for 3B models following the same setup as Figure 4, where the prompt length and batch size are fixed to 1. Similarly, our Zebra-Llama exhibits much better inference throughput than the hybrid GQA-Mamba model MambaInLlama. With an 8k generation length, our Zebra-Llama is 2.26× faster than MambaInLlama with 14 MLA/GQA layers and 6.33× faster than MambaInLlama with 6 MLA/GQA layers. In Figures 8 and 9, we include the inference evaluations for 8B models under a setting where the prompt length is fixed at 1, the output length at 8192, and the batch size gradually increases from 8 to 256. All measurements are recorded on a single AMD MI300 GPU with 192GB of memory, using the generation function from the mamba_ssm library. Our Zebra-Llama models demonstrate significant throughput gains compared to Llama, pure MLA models, and hybrid GQA-Mamba models. Although pure Mamba models achieve faster inference speeds than Zebra-Llama, they tend to exhibit poorer overall performance, as shown in Tables 1 and 2. Additionally, the KV cache compression in our Zebra-Llama models translates to notable memory savings, as illustrated in Figure 9. Models incorporating GQA layers experience a sharp increase in memory usage as the batch size grows. In contrast, MLA-based approaches, such as X-EcoMLA and our Zebra-Llama models, demonstrate superior memory efficiency, exhibiting only a marginal increase in memory consumption compared to the pure Mamba baseline.
arXiv:2505.17281v1 [cs.CL] 22 May 2025

Search Wisely: Mitigating Sub-optimal Agentic Searches By Reducing Uncertainty

Peilin Wu1∗, Mian Zhang1∗, Xinlu Zhang2, Xinya Du1, Zhiyu Zoey Chen1
1Department of Computer Science, The University of Texas at Dallas
2Department of Computer Science, University of California, Santa Barbara
{peilin.wu, mian.zhang, zhiyu.chen2}@utdallas.edu

Abstract

Agentic Retrieval-Augmented Generation (RAG) systems enhance Large Language Models (LLMs) by enabling dynamic, multi-step reasoning and information retrieval. However, these systems often exhibit sub-optimal search behaviors like over-search (retrieving redundant information) and under-search (failing to retrieve necessary information), which hinder efficiency and reliability. This work formally defines and quantifies these behaviors, revealing their prevalence across multiple QA datasets and agentic RAG systems (e.g., one model could have avoided searching in 27.7% of its search steps). Furthermore, we demonstrate a crucial link between these inefficiencies and the models' uncertainty regarding their own knowledge boundaries, where response accuracy correlates with the model's uncertainty in its search decisions. To address this, we propose β-GRPO, a reinforcement learning-based training method that incorporates a confidence threshold to reward high-certainty search decisions. Experiments on seven QA benchmarks show that β-GRPO equips a 3B model with better agentic RAG ability, outperforming other strong baselines with a 4% higher average exact match score.1

1 Introduction

Recent advances in Large Language Models (LLMs) have propelled their use in information-intensive tasks such as question answering and knowledge synthesis, especially when paired with retrieval capabilities (Wang et al., 2025b).
Agentic Retrieval-Augmented Generation (RAG) frameworks (Jin et al., 2025a; Song et al., 2025a; Chen et al., 2025) push this further by empowering LLMs to perform multi-step reasoning (Li et al., 2025) and dynamically decide when and what to retrieve (Guan et al., 2025), closely emulating sophisticated human research processes. However, despite these advancements, current agentic RAG systems often struggle with efficiency and reliability due to sub-optimal search behaviors (Shen et al., 2024; Qian et al., 2025; Wang et al., 2025a). In particular, two major challenges have been identified as critical obstacles that degrade performance: 1) over-search, where the model retrieves information it already knows, and 2) under-search, where it fails to seek external knowledge when necessary.

∗Equal contribution
1We will release all our codes and data upon acceptance.

In this work, we conduct a thorough quantitative analysis to identify and measure the prevalence of over-search and under-search. Our experiments on several multi-hop QA datasets (2WikiMultiHopQA (Ho et al., 2020), Bamboogle (Press et al., 2023), HotpotQA (Yang et al., 2018), and MuSiQue (Trivedi et al., 2022)) using contemporary LLMs like R1-Searcher (Song et al., 2025a) and Search-R1 (Jin et al., 2025a) reveal significant instances of sub-optimal search. We further explore the connection between these behaviors and a model's awareness of its knowledge boundaries, finding that candidate responses generated with higher certainty about the necessity of a search query tend to achieve better accuracy. To address this, we introduce β-GRPO, a variant of GRPO (Shao et al., 2024) in which the confidence of search calls is modeled as the minimal token probability of the search queries produced by
the model, and a confidence threshold is incorporated into the reward function, encouraging only generations with high-certainty search calls that lead to correct answers. Through extensive experiments on seven QA benchmarks, we show that β-GRPO equips a 3B model with better agentic RAG ability compared to strong baselines, with a 4% higher average exact match score, 1.21% fewer over-searches, and 7.33% fewer under-searches.

Figure 1: Percentage of all search steps that could be answered without performing searches, for R1-Searcher and Search-R1 on the 4 datasets combined, with respect to the number of searches in each test sample.

2 Identifying Sub-optimal Search

To investigate the prevalence of over-search and under-search, we conduct three experiments with the test sets of four widely recognized multi-hop QA datasets: 2WikiMultiHopQA (Ho et al., 2020), Bamboogle (Press et al., 2023), HotpotQA (Yang et al., 2018), and MuSiQue (Trivedi et al., 2022). We mainly investigate two recent LLMs that interact with search engines: R1-Searcher (Song et al., 2025b) and Search-R1 (Jin et al., 2025b). We adopt the versions trained based on Qwen2.5-7B (Qwen et al., 2025) for a fair comparison.

2.1 Step-wise Analysis

To directly measure whether a search step was truly necessary, we separate all outputs into individual steps and check whether each matches the definition of over-search or under-search. For over-search rate measurement, we prompt the model to answer the sub-queries of all steps with search behavior using only its internal knowledge and the preceding context. For under-search rate measurement, we examine steps without searching and evaluate the correctness of the generated information. A detailed explanation of the analysis pipeline is provided in Appendix A.2.
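This measurement can be paraphrased as a small helper; the record fields (`searched`, `answerable_from_memory`, `content_correct`) are our own labels for the per-step judgments described above, not names from the paper:

```python
def sub_optimal_search_rates(steps):
    """Sketch of the Section 2.1 measurement. Each step record carries:
      searched: whether the step issued a search call;
      answerable_from_memory: for search steps, whether the model answered the
        sub-query correctly from internal knowledge alone (an over-search);
      content_correct: for non-search steps, whether the generated content was
        correct (its negation marks an under-search).
    Returns (over_search_rate, under_search_rate)."""
    search_steps = [s for s in steps if s["searched"]]
    other_steps = [s for s in steps if not s["searched"]]
    over = (sum(s["answerable_from_memory"] for s in search_steps)
            / len(search_steps)) if search_steps else 0.0
    under = (sum(not s["content_correct"] for s in other_steps)
             / len(other_steps)) if other_steps else 0.0
    return over, under
```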
Capability to Answer from Memory The results in Figure 1 show that a significant portion of search actions were instances of over-search. R1-Searcher could have answered correctly without searching in 20.2% of its search steps overall, while Search-R1 could have done so in 27.7% of its search steps. This highlights substantial room for efficiency improvement. Figure 1 also shows the over-search rate for each subset of test samples, grouped by the total number of search steps an agent used to solve an entire problem instance. The results for each subset indicate that over-search is a persistent issue irrespective of the overall search complexity adopted by the model for a given problem. Beyond the step-wise analysis, we also compare the number of searches against the pre-given number of hops from each dataset in Appendix A.3, which further supports our conclusion.

Figure 2: Error rate for all non-search steps of R1-Searcher and Search-R1 on the 4 datasets combined, with respect to the number of searches in each test sample.

Error Rate in Non-Search Steps Figure 2 analyzes the error rate in non-search steps, which can be seen as the rate of under-search. Both models exhibited high error rates (R1-Searcher: 63%, Search-R1: 33.98%) in non-search steps, suggesting a strong tendency towards under-search leading to incorrect reasoning or hallucination. For R1-Searcher, this error rate was particularly high with fewer total searches (over 72% if no searches were made). For Search-R1, errors in non-search steps remained notable even when many searches were performed overall (e.g., 48.70% for 4-search problems), possibly due to decision complexity in later stages (see Figure 2 for detailed error rates by search step count).

2.2 Sub-optimal Search & Knowledge Boundary

The observed tendencies towards over-search and under-search, combined with our definitions, suggest a core deficiency in how agentic RAG models perceive knowledge boundaries, that is, the limits of what they know versus what they need to find out. To illustrate the link between better knowledge boundary awareness and improved outcomes, we analyze the performance of four Qwen2.5-3B-based Search-R1 models (PPO- and GRPO-trained, Base and Instruct variants). We generate 5 candidate responses for each question and group these responses based on each output's minimum probability within all the search query tokens, as an indication of certainty about the knowledge boundary.

Model Config    | Prob. Group | 2Wiki | Bamboogle | HotpotQA | Musique
Base + PPO      | Max | 0.184 | 0.096 | 0.152 | 0.038
Base + PPO      | Min | 0.168 | 0.096 | 0.114 | 0.038
Base + GRPO     | Max | 0.249 | 0.112 | 0.327 | 0.085
Base + GRPO     | Min | 0.234 | 0.104 | 0.289 | 0.056
Instruct + PPO  | Max | 0.333 | 0.250 | 0.262 | 0.138
Instruct + PPO  | Min | 0.297 | 0.250 | 0.262 | 0.116
Instruct + GRPO | Max | 0.402 | 0.125 | 0.343 | 0.116
Instruct + GRPO | Min | 0.402 | 0.063 | 0.302 | 0.116

Table 1: Cover EM scores on multi-hop QA datasets, comparing groups of responses with higher vs. lower uncertainty (derived from the average of the minimum probability of search query tokens) about the knowledge boundary. Bold indicates instances where the Max Prob. group achieved strictly better performance.

As shown in Table 1, candidate responses generated with lower intrinsic uncertainty generally lead to higher final accuracy (by as much as 6% on Bamboogle and 3.8% on HotpotQA), across different training methods and base models.
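The grouping statistic, the minimum token probability within the search query spans, can be sketched as follows (token handling is simplified; we assume each tag arrives as a single token):

```python
import math

def min_search_token_prob(tokens, logprobs):
    """Sketch: minimum probability over tokens inside <search>...</search>
    spans (tags included), used as a certainty signal for search decisions.
    Returns None if the output contains no search call."""
    probs, inside = [], False
    for tok, lp in zip(tokens, logprobs):
        if tok == "<search>":
            inside = True
        if inside:
            probs.append(math.exp(lp))
        if tok == "</search>":
            inside = False
    return min(probs) if probs else None
```

Responses can then be grouped by whether this value falls in the higher or lower half of the candidates for a question, mirroring the Max/Min split of Table 1.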
This suggests that when the model exhibits higher confidence (lower uncertainty) in its generation path, it is more likely to be on a correct trajectory. Therefore, improving an agent's ability to accurately gauge its internal knowledge state, effectively sharpening its knowledge boundary detection and reducing undue uncertainty, is a crucial step towards mitigating both over-search and under-search, thereby enhancing the overall efficiency and reliability of agentic RAG systems. Our approach is motivated by this principle, aiming to train agents to better assess and reduce uncertainty at each search decision.

3 Approach

Current RL-powered agentic RAG methods (Jin et al., 2025a; Song et al., 2025a; Chen et al., 2025) do not explicitly model knowledge self-awareness during the training process, resulting in generations with low confidence, which are undesirable and shown to contain wrong answers more easily than generations with higher confidence (Table 1). To this end, we propose a simple yet effective variant of GRPO (Shao et al., 2024), β-GRPO, which leverages the uncertainty of the search query spans for more effective rewarding and training.

Agentic RAG with RL (Search-R1 (Jin et al., 2025a)) Given a question, we prompt the policy model to explicitly reason, enclosed within <think></think> tags, about whether to
use an off-the-shelf search tool and, if so, to generate a search query within <search></search> tags. The search tool then returns relevant documents inside <information></information> tags. Once it obtains new information, the policy model can either continue searching for additional information or provide a final answer within <answer></answer> tags. The instruction given to the policy model can be found in Appendix A.4. If the final answer matches the ground truth, the response is given a reward of 1, otherwise 0, and the policy is updated via policy gradient methods such as GRPO (Shao et al., 2024).

β-GRPO Motivated by the observation that rollouts with low-confidence search calls are more likely to be incorrect, we incorporate model confidence into the RL reward process. Specifically, for each rollout containing search calls (enclosed within <search></search> tags), we extract the probabilities of the search tokens, including the tags, and use the minimum probability among them as a measure of the model's confidence in the search calls within a rollout (Jiang et al., 2023). We then set a confidence threshold β: only rollouts with correct answers whose search-call confidence (if search calls exist) is above β receive a reward of 1; all others receive 0.

4 Experiments

Datasets We follow Search-R1 (Jin et al., 2025a) in using a mixture of the NQ (Kwiatkowski et al., 2019) and HotpotQA (Yang et al., 2018) training sets for model training. For evaluation, we consider seven QA benchmarks, including general QA datasets (NQ, TriviaQA (Joshi et al., 2017), and PopQA (Mallen et al., 2023)) as well as multi-hop QA datasets (HotpotQA, 2WikiMultiHopQA (Ho et al., 2020), Bamboogle (Press et al., 2023), and MuSiQue (Trivedi et al., 2022)). Exact match (EM) is used as our main evaluation metric.
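Putting the reward rule of β-GRPO from Section 3 into code, the outcome reward might look like this (a sketch; the signature and names are ours):

```python
def beta_grpo_reward(answer_correct: bool, search_confidence, beta: float = 0.4):
    """Sketch of the β-GRPO reward: 1 only if the final answer matches the
    ground truth AND, when the rollout contains search calls, their confidence
    (the minimum search-token probability) exceeds the threshold β.
    search_confidence is None for rollouts without any search call."""
    if not answer_correct:
        return 0.0
    if search_confidence is not None and search_confidence <= beta:
        return 0.0
    return 1.0
```

With the default β = 0.4 (the best-performing threshold in Section 5), a correct rollout whose weakest search token has probability 0.2 still earns 0, which is precisely what discourages low-certainty search decisions during GRPO training.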
Baselines We compare our method with several baselines: methods that do not use a retriever, including direct prompting, Chain-of-Thought (CoT) prompting (Wei et al., 2022), supervised fine-tuning (SFT) (Chung et al., 2022), and reinforcement learning-based fine-tuning (R1) (DeepSeek-AI et al., 2025); methods that use a retriever but do not perform agentic retrieval, such as Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) and IRCoT (Trivedi et al., 2023); and finally, agentic retrieval methods, including Search-o1 (Li et al., 2025) and Search-R1 (Jin et al., 2025a).

Methods                 | NQ†   | TriviaQA⋆ | PopQA⋆ | HotpotQA† | 2wiki⋆ | Musique⋆ | Bamboogle⋆ | Average
Direct Prompting        | 0.106 | 0.288 | 0.108 | 0.149 | 0.244 | 0.020 | 0.024 | 0.134
CoT Prompting           | 0.023 | 0.032 | 0.005 | 0.021 | 0.021 | 0.002 | 0.000 | 0.015
IRCoT                   | 0.111 | 0.312 | 0.200 | 0.164 | 0.171 | 0.067 | 0.240 | 0.181
Search-o1               | 0.238 | 0.472 | 0.262 | 0.221 | 0.218 | 0.054 | 0.320 | 0.255
RAG                     | 0.348 | 0.544 | 0.387 | 0.255 | 0.226 | 0.047 | 0.080 | 0.270
SFT                     | 0.249 | 0.292 | 0.104 | 0.186 | 0.248 | 0.044 | 0.112 | 0.176
R1                      | 0.226 | 0.455 | 0.173 | 0.201 | 0.268 | 0.055 | 0.224 | 0.229
Search-R1               | 0.406 | 0.587 | 0.435 | 0.284 | 0.273 | 0.049 | 0.088 | 0.303
Search-R1-GRPO          | 0.432 | 0.578 | 0.413 | 0.294 | 0.271 | 0.067 | 0.112 | 0.309
Search-R1-β-GRPO (ours) | 0.468 | 0.625 | 0.449 | 0.334 | 0.304 | 0.086 | 0.144 | 0.344

Table 2: Main results. The best performance is set in bold. †/⋆ represents in-domain/out-of-domain datasets.

Based on our preliminary experiments, we found that training the policy model from scratch using our confidence-based rewards prevents it from learning effective search behavior. Therefore, we use Qwen2.5-3B (Qwen et al., 2025) and initialize it with the parameters from Search-R1. We then continue training using GRPO with different reward functions: one using the original answer-based reward (Search-R1-GRPO), and the other using our proposed confidence-based reward (Search-R1-β-GRPO). We set the value of β to 0.4 according to the analysis in Section 5. Detailed training configurations can be found in Appendix A.5.

Search-R1-GRPO
Question: In what Country is Sul America Esporte Clube in?
Generation: <think> The question asks for the country hosting Sul America Esporte Clube, a team in the brazilian C1 and C2 league, thus the direct reasoning is likely to be Brazilian football team residing in Brazil.

Search-R1-β-GRPO (ours)
Question: In what Country is Sul America Esporte Clube in?
Generation: <search> in what Country is Sul America Esporte Clube in </search> <information> Doc 1: ... Doc 2: ... Doc 3: ... </information> <answer> Brazil </answer> (Confidence: 0.99)

Table 3: Case study. The retrieved documents are omitted for clarity.

Results As shown in Table 2, agentic search with RL training (Search-R1*) significantly outperforms other baselines, indicating that incorporating search through autonomous reasoning and RL training is more effective than non-agentic or prompting methods. Our model, Search-R1-β-GRPO, achieves the highest overall average EM score across the datasets. Figure 3 in Appendix A.5 shows the training rewards for Search-R1-GRPO and Search-R1-β-GRPO. We observe that the rewards for Search-R1-GRPO fluctuate and do not show clear improvement over training steps. In contrast, Search-R1-β-GRPO achieves higher and more stable rewards. This improved performance suggests that our proposed reward assignment, based on the confidence of the search calls within a rollout, is effective.

5 Analysis

Ablation on β & Case Study Following Jiang et al.
(2023), we experiment with three confidence threshold values: 0.2, 0.4, and 0.6. The average EM scores are 0.341, 0.344, and 0.336, respectively, so a threshold of 0.4 yields the best result. Moreover, we find 115 test cases from the multi-hop QA datasets where Search-R1-β-GRPO produces a correct answer with higher confidence, while Search-R1-GRPO gives an incorrect answer. These cases clearly benefit from the increased model confidence enabled by the proposed β-GRPO. An example is shown in Table 3: Search-R1-GRPO lacks confidence and fails to provide a definite answer, whereas Search-R1-β-GRPO generates a confident search query and produces the correct answer.

Under-searches & Over-searches We also measure the over-search and under-search rates of our Search-R1-β-GRPO and the baseline Search-R1-GRPO, both trained based on Qwen2.5-3B, using the methods of Section 2.1. Compared with Search-R1-GRPO, which has an overall 21.10% over-search rate and 42.04% under-search rate, our Search-R1-β-GRPO achieves a 19.89% over-search rate and a 34.71% under-search rate, both lower than the baseline. This shows that our method effectively reduces both types of sub-optimal searches.

6 Conclusion

In this work, we formally define and quantify the sub-optimal search behaviors over-search and under-search in agentic RAG systems, revealing their prevalence and impact. By introducing β-GRPO, a
confidence-aware policy gradient method, we equip a 3B model with better agentic RAG ability than strong baselines.

7 Limitations

We formally define and quantify sub-optimal search behaviors in agentic RAG systems and propose β-GRPO to train agentic RAG models with improved self-knowledge awareness. However, we acknowledge that the sub-optimal search behaviors over-search and under-search are persistent challenges that require further investigation, especially in more open-ended tasks like deep research (Alzubi et al., 2025). Additionally, due to limited computational resources, we are unable to train larger models and leave this for future work.

References

Salaheddin Alzubi, Creston Brooks, Purva Chiniya, Edoardo Contente, Chiara von Gerlach, Lucas Irwin, Yihan Jiang, Arda Kaz, Windsor Nguyen, Sewoong Oh, Himanshu Tyagi, and Pramod Viswanath. 2025. Open deep search: Democratizing search with open-source reasoning agents. arXiv [cs.LG].

Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Fan Yang, Zenan Zhou, Weipeng Chen, Haofen Wang, Jeff Z Pan, Wen Zhang, and Huajun Chen. 2025. ReSearch: Learning to reason with search for LLMs via reinforcement learning. arXiv [cs.AI].

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. arXiv [cs.LG].
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z F Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J L Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, KaiHu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Pan- pan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R J Chen, R L Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S S Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W L Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin
Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X Q Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxi- ang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y K Li, Y Q Wang, Y X Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yu- jia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y X Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z Z Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zi- jia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. 2025. DeepSeek-R1: Incen- tivizing reasoning capability in LLMs via reinforce- ment learning. arXiv [cs.CL] . Xinyan Guan, Jiali Zeng, Fandong Meng, Chunlei Xin, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, and Jie Zhou. 2025. Deeprag: Thinking to retrieval step by step for large language models. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi- hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th Inter- national Conference on Computational Linguistics , pages 6609–6625, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics. Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv [cs.CL] . Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. 2025a. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. arXiv [cs.CL] . 
Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. 2025b. Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning.

Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension.

Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-Tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv [cs.CL].

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Trans. Assoc. Comput. Linguist., 7:453–466.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-Tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv [cs.CL].

Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. 2025. Search-o1: Agentic search-enhanced
large reasoning models. arXiv [cs.AI].

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Stroudsburg, PA, USA. Association for Computational Linguistics.

OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, et al. 2024. GPT-4o system card.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5687–5711, Singapore. Association for Computational Linguistics.

Cheng Qian, Emre Can Acikgoz, Hongru Wang, Xiusi Chen, Avirup Sil, Dilek Hakkani-Tür, Gokhan Tur, and Heng Ji. 2025. SMART: Self-aware agent for tool overuse mitigation.

Qwen, An Yang, Baosong Yang, Beichen Zhang, et al. 2025. Qwen2.5 technical report.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv [cs.CL].

Yuanhao Shen, Xiaodan Zhu, and Lei Chen. 2024. SMARTCAL: An approach to self-aware tool-use evaluation and calibration. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 774–789, Miami, Florida, US. Association for Computational Linguistics.

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. 2025. R1-Searcher: Incentivizing the search capability in LLMs via reinforcement learning. arXiv [cs.AI].

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. MuSiQue: Multi-hop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Stroudsburg, PA, USA. Association for Computational Linguistics.

Hongru Wang, Cheng Qian, Wanjun Zhong, Xiusi Chen, Jiahao Qiu, Shijue Huang, Bowen Jin, Mengdi Wang, Kam-Fai Wong, and Heng Ji. 2025a. OTC: Optimal tool calls via reinforcement learning.

Liang Wang, Haonan Chen, Nan Yang, Xiaolong Huang, Zhicheng Dou, and Furu Wei. 2025b. Chain-of-retrieval augmented generation.

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervised contrastive pre-training. arXiv [cs.CL].

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. arXiv [cs.CL].

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.

A Appendix

A.1 Formal Definition of Under-search & Over-search

Formally, let an LLM agent's interaction for a question be a sequence of steps $T = \{s_1, s_2, \dots, s_N\}$. Each step $s_t$ comprises a reasoning component $r_t$. If the model decides to retrieve information, the retrieval step $s^R_t = (r_t, q_t, c_t)$ includes a
search sub-query $q_t$ and the retrieved context $c_t = \mathrm{search}(q_t)$. The sub-answer $a_t$ for this step $s^R_t$ is typically derived using $c_t$ and reflected in $r_{t+1}$. If the model does not retrieve, the non-retrieval step $s^{NR}_t = (r_t)$ relies on the existing context $\{s_1, s_2, \dots, s_{t-1}\}$ and the model's internal knowledge $\mathcal{M}$ to derive $a_t$, reflected in $r_t$. Let $a^*_t$ be the ground-truth answer at step $s_t$. Over-search occurs if a retrieval step $s^R_t$'s answer $a_t$ could have been derived from $\mathcal{M}$ and $\{s_1, s_2, \dots, s_{t-1}\}$ alone. Under-search occurs if a non-retrieval step $s^{NR}_t$ leads to $a_t \neq a^*_t$.

A.2 Detailed Step-wise Analysis Procedure

To empirically measure the rates of over-search and under-search, we conducted a detailed step-wise analysis of the agent's decision-making process. The interactions of the agent are logged as a sequence of steps, where each step can involve internal reasoning (thinking), querying a search tool, processing retrieved context, and generating an answer. We define specific procedures to identify and quantify each type of sub-optimal search behavior:

1. Step Extraction: We parse the agent's interaction log following the definition in Appendix A.1. Each distinct thinking process is a decision point and is considered a step, typically delineated by <step> and </step> tags (or a similar structured logging format). A "search step" is identified as any step in which all three relevant operations are present: think (the model's reasoning), search (the search query issued), and context (the information retrieved). A "non-search step" typically consists only of thinking. In this work specifically, step extraction is done by prompting QwQ-32B (Qwen et al., 2025), as we find that reasoning LLMs typically perform better on such tasks.

2. Extraction of Partial Input: For each identified search step, we reconstruct the input that would have been available to the model before it decided to search. This is achieved by taking the complete output generated by the agent from the beginning of the interaction up to and including the content of the think field of the current search step.

3. Querying with Internal Knowledge for Over-search Analysis: To measure the over-search rate, the extracted partial output is appended with a specific instructional prompt: "I will use my own knowledge to answer this query and provide my answer to this query enclosed in <query_answer> </query_answer> tags." This combined text serves as a new input to the original RL-tuned model (e.g., Search-R1-β-GRPO and Search-R1-GRPO), which is tasked with generating an answer without performing any new search. The over-search rate is then measured as the percentage of search steps that yield an equivalent answer (determined by QwQ-32B in our analysis) with and without searching.

4. Generation of Reference Answer for Under-search Analysis: For each identified non-search step, the original query or sub-query that the agent was attempting to answer at that point is presented to a more powerful, state-of-the-art language model (e.g., ChatGPT-4o (OpenAI et al., 2024)) with a recent knowledge cutoff date. This model generates a "reference answer," which is assumed to be of
high quality. The reference answer obtained is compared with the actual answer generated by the agent for that non-search step. The under-search rate is calculated as the proportion of non-search steps where the agent's answer does not match (determined by QwQ-32B in our analysis) the reference answer, quantifying how often the agent fails to search when doing so would likely have led to a more accurate or complete answer.

A.3 Search Frequency vs. Optimal Hops

One indicator of potential over-search is when the number of search queries generated by an agent exceeds the optimal number of reasoning hops required to answer a question. A significantly higher search count often points to redundant information gathering. For this experiment, we only use the test sets from Bamboogle (Press et al., 2023) and MuSiQue (Trivedi et al., 2022), as they are the only two datasets providing a pre-defined number of hops for each test sample.

Model         Dataset    Search vs. Hops   Correct (%)   Incorrect (%)   Sum (%)
R1-Searcher   MuSiQue    Less              2.8           19.0            21.8
                         Match             21.8          45.8            67.6
                         More              1.8           8.8             10.6
R1-Searcher   Bamboogle  Less              0.0           0.0             0.0
                         Match             40.8          52.8            93.6
                         More              3.2           3.2             6.4
Search-R1     MuSiQue    Less              1.8           7.0             8.8
                         Match             12.4          27.6            40.0
                         More              8.8           42.4            51.2
Search-R1     Bamboogle  Less              0.8           1.6             2.4
                         Match             28.8          28.0            56.8
                         More              12.0          28.8            40.8

Table 4: Comparison of the number of searches generated vs. annotated hops on the Bamboogle and MuSiQue datasets. "More" indicates potential over-search, as the number of searches exceeds the pre-defined optimal hops; "Less" may indicate potential under-search.

R1-Searcher exhibits a tendency to perform more searches than hops in 10.6% of MuSiQue cases and 6.4% of Bamboogle cases. Search-R1 shows a more pronounced tendency, with 51.2% (MuSiQue) and 40.8% (Bamboogle) of cases issuing more searches than annotated hops. This result suggests that models trained with different methods do not inherently solve over-search and might even exacerbate it under certain configurations if not properly guided.
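The Less/Match/More bookkeeping behind Table 4 is straightforward to reproduce. The sketch below is our own illustration (function names and example data are ours, not the paper's code):

```python
from collections import Counter

def classify_search_count(num_searches: int, annotated_hops: int) -> str:
    """Compare the number of searches an agent issued against the annotated
    optimal hop count. "More" flags potential over-search; "Less" potential
    under-search."""
    if num_searches < annotated_hops:
        return "Less"
    if num_searches == annotated_hops:
        return "Match"
    return "More"

def search_hop_breakdown(records):
    """records: iterable of (num_searches, annotated_hops) pairs.
    Returns the fraction of samples in each category, as in Table 4."""
    counts = Counter(classify_search_count(s, h) for s, h in records)
    total = sum(counts.values())
    return {k: counts.get(k, 0) / total for k in ("Less", "Match", "More")}

# Hypothetical log: four questions with annotated hop count 2.
print(search_hop_breakdown([(2, 2), (3, 2), (1, 2), (2, 2)]))
```

Splitting each category further by answer correctness, as Table 4 does, only requires carrying a correctness flag alongside each pair.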
While fewer ("Less") searches than hops might indicate efficient reasoning or under-search, the "More" category strongly suggests instances of over-searching.

A.4 Instruction

Answer the given question. You must conduct reasoning inside <think> and </think> first every time you get new information. After reasoning, if you find you lack some knowledge, you can call a search engine by <search> query </search>, and it will return the top searched results between <information> and </information>. You can search as many times as you want. If you find no further external knowledge needed, you can directly provide the answer inside <answer> and </answer> without detailed illustrations. For example, <answer> Beijing </answer>. Question: question.

A.5 Training Configuration & Rewards

We train Search-R1-GRPO and Search-R1-β-GRPO for 200 steps, with a learning rate of 1e-6 and a batch size of 512. For each question, we produce 5 generations with a temperature of 1 to form a GRPO group.

[Figure 3: Training rewards for Search-R1-GRPO and Search-R1-β-GRPO.]

For the search engine, for fair comparison, we also use
arXiv:2505.17282v1 [cs.LG] 22 May 2025

Attention with Trained Embeddings Provably Selects Important Tokens

Diyuan Wu¹*, Aleksandr Shevchenko²*, Samet Oymak³, Marco Mondelli¹

Abstract

Token embeddings play a crucial role in language modeling but, despite this practical relevance, their theoretical understanding remains limited. Our paper addresses the gap by characterizing the structure of embeddings obtained via gradient descent. Specifically, we consider a one-layer softmax attention model with a linear head for binary classification, i.e.,
$$\mathrm{Softmax}(p^\top E_X^\top)\, E_X v = \frac{\sum_{i=1}^T \exp(p^\top E_{x_i})\, E_{x_i}^\top v}{\sum_{j=1}^T \exp(p^\top E_{x_j})},$$
where $E_X = [E_{x_1}, \dots, E_{x_T}]^\top$ contains the embeddings of the input sequence, $p$ is the embedding of the ⟨cls⟩ token, and $v$ the output vector. First, we show that, already after a single step of gradient training with the logistic loss, the embeddings $E_X$ capture the importance of tokens in the dataset by aligning with the output vector $v$ proportionally to the frequency with which the corresponding tokens appear in the dataset. Then, after training $p$ via gradient flow until convergence, the softmax selects the important tokens in the sentence (i.e., those that are predictive of the label), and the resulting ⟨cls⟩ embedding maximizes the margin for such a selection. Experiments on real-world datasets (IMDB, Yelp) exhibit a phenomenology close to that unveiled by our theory.

1 Introduction

The introduction of the attention mechanism [BCB15, VSP+17] marked a paradigm shift in the design of frontier machine learning models, leading to significant advances such as ChatGPT [AAA+23], Claude [Ant25], AlphaFold [JEP+21], CLIP [RKH+21] and DALL-E [RPG+21]. This success prompted a surge of interest in understanding the structure and function of attention layers, with their optimization dynamics and inductive biases being the object of extensive theoretical research [ABL+24, CDB24, GLPR23a, MBG+25, TLZO23, VBC20] (see also Section 2).
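For concreteness, the one-layer attention model in the abstract can be sketched numerically. This is an illustration of the formula only (variable names are ours, not the authors' code):

```python
import numpy as np

def attention_output(E_X: np.ndarray, p: np.ndarray, v: np.ndarray) -> float:
    """One-layer softmax attention with a linear head:
    f(X; p, E) = Softmax(p^T E_X^T) E_X v.
    E_X: (T, d) token embeddings; p: (d,) <cls> embedding; v: (d,) output
    vector. Returns a scalar score for binary classification."""
    scores = E_X @ p                       # p^T E_{x_i} for each token i
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the T tokens
    return float(weights @ (E_X @ v))      # weighted average of E_{x_i}^T v

rng = np.random.default_rng(0)
T, d = 5, 8
E_X = rng.normal(size=(T, d)) / np.sqrt(d)  # mimics the N(0, I/d) initialization
p = rng.normal(size=d) / np.sqrt(d)
v = rng.normal(size=d)
v /= np.linalg.norm(v)                      # unit-norm output vector
print(attention_output(E_X, p, v))
```

A useful sanity check: when $p = 0$ the softmax weights are uniform, so the output reduces to the plain average of $E_{x_i}^\top v$ over the sequence.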
Embeddings are a crucial component of the attention mechanism [ZNS+23], especially for downstream adaptation [HGJ+19, JZP+24, KCM+23], with some works [LRL+24, ZNS+23] specifically highlighting their importance. However, despite the importance of learning embeddings, existing analyses of transformer-like architectures either ignore the properties of embeddings by resorting to orthogonal structures [YKWL24], or omit embeddings completely by considering unprocessed inputs [TMIS24]. Our paper fills this gap by directly studying the embedding training dynamics. Specifically, we aim to provide theoretical insight into the following questions:

What is the structure learnt by the embeddings during gradient descent training? How is this structure related to the statistical properties of the data?

In Figure 1, we investigate these questions by analyzing the embeddings of a two-layer transformer trained on a sentiment analysis task on IMDB and Yelp reviews. The plots reveal a remarkable simplicity in the structure of the learned embeddings, which capture the frequency of appearance of tokens in the dataset. Specifically, the predictive mechanism (overlap with the regression coefficient $v$) favors the tokens which appear more frequently in the corresponding positive/negative context. A similar pattern emerges at the selection stage of the attention mechanism (overlap with the ⟨cls⟩ embedding $p$), i.e., more frequent tokens have a higher attention score. For the theoretical study of this emergent structure, we focus on a one-layer softmax attention model. Namely, for an input sequence $X = [x_1, \dots, x_T]$, the output of the model
is given by
$$f(X; p, E) = \mathrm{Softmax}(p^\top E_X^\top)\, E_X v = \frac{\sum_{i=1}^T \exp(p^\top E_{x_i})\, E_{x_i}^\top v}{\sum_{j=1}^T \exp(p^\top E_{x_j})}, \qquad (1)$$
where $E_X = [E_{x_1}, \dots, E_{x_T}]^\top$ contains the embeddings of the input $X$, $p$ is the embedding of the ⟨cls⟩ token and $v$ is the final regression vector. Our main results are summarized below:

• We show that, already after a single step of gradient training with the standard logistic loss, the embeddings $E_X$ capture the importance of tokens in the dataset by aligning with the output vector $v$ proportionally to the corresponding empirical frequencies (Lemma 4.1).

• In a setting where each sequence contains a single important token, the ⟨cls⟩ embedding obtained from gradient flow must select all important tokens. We further characterize all the possible directions that the ⟨cls⟩ embedding may converge to, which are the max-margin solutions associated to feasible token selections (Theorem 4.3).

• While in general the ⟨cls⟩ embedding may select irrelevant tokens, we identify sufficient conditions leading to the selection only of important tokens (Lemmas 4.4 and 4.5).

* Equal contribution. ¹Institute of Science and Technology Austria (ISTA). Emails: {diyuan.wu, marco.mondelli}@ist.ac.at. ²ETH Zürich. Email: aleksandr.shevchenko@inf.ethz.ch. ³University of Michigan. Email: oymak@umich.edu.

Figure 1: Dot-product of token embeddings with the ⟨cls⟩ embedding $p$ (left) and the regression coefficients $v$ (right), as a function of the token-wise difference in posterior probabilities, for the IMDB (top row) and Yelp (bottom row) datasets. We consider the two-layer attention model in (17) with all parameters trained until convergence.

2 Related work

Implicit bias, margin maximization, attention. The implicit bias literature has been instrumental in understanding the behavior of neural networks or overparameterized models optimized by gradient methods [CB20, ACHL19, NTS15]. A key phenomenon is that gradient descent on separable data with logistic loss directionally converges to the max-margin separator [SHN+18, JT19].
More recently, a series of works [TLTO23, MSX+24, JTA24, VDT25, LHI+24, SCWZ25, TLZO23, SS24] has established an equivalence between the optimization geometry of self-attention and a hard-margin SVM problem selecting a subset of tokens via linear constraints on the outer-products of token pairs. Compared to these works, which mostly focus on the training of single-layer attention weights, we point out two differences. First, we study the role of embeddings and their joint training with the ⟨cls⟩ token. Second, under our data model, we establish benign properties of the solution reached at convergence (which may not hold for arbitrary datasets [TLZO23]).

Theory of attention. A line of work [MBG+25, MBG+24, NDL24] has explored whether attention-based architectures can extract causal structure from Markovian inputs. The mechanics of next-token prediction when training a single self-attention layer is characterized in [LHI+24]. Towards understanding how to utilize structural properties of the data, the behavior of transformers on sparse token selection tasks is considered in [SHT23, WWHL24]. The study [IHL+24] provides a theoretical justification for the tendency of modern language models to generate repetitive text by showing that the underlying self-attention mechanism collapses into sampling only a limited subset of tokens. This stands in contrast to the slightly different setup of [TWCD23], where the transformer model does not degrade to a "winner-takes-all" strategy. The works [GLPR23a, GLPR23b, GKPR24] take a mean-field view to analyze the
clustering behavior in transformer representations that emerges after successive applications of the attention block. Under a random feature design, it is shown in [BM24] that softmax attention exhibits a sensitivity property which allows for a sharp change in attention scores given the perturbation of a single token. The role of the attention mechanism is also studied in [ORST23] for prompt-tuning and in [GIZ+25] for test-time-training.

3 Problem setup

Data model. We focus on binary text classification problems. We consider a (context) vocabulary set $S$ with size $|S|$, together with a ⟨cls⟩ token for classification. Let $(X_i, y_i)_{i=1}^n$ be the dataset containing $n$ context sequences, where $y_i \in \{-1, 1\}$ and each context sequence $X \in \mathcal{X}_n := \{X_1, \dots, X_n\}$ contains $T$ tokens, i.e., $X = [x_1, \dots, x_T]$ with $x_i \in S$. Without loss of generality, we let $S$ be the set of tokens that appear in $\mathcal{X}_n$, as the embeddings of the remaining tokens are not trained and are not relevant for the problem at hand.

Architecture. We consider a one-layer softmax attention model with a linear head for classification. First, we append a ⟨cls⟩ token at the end of the sequence $X$, and then we embed each token into a vector of dimension $d$. Namely, after the embedding layer, we have $E_X = [E_{x_1}, \dots, E_{x_T}]^\top \in \mathbb{R}^{T \times d}$, where $E_s \in \mathbb{R}^d$ denotes the embedding of the token $s$. We let $E \in \mathbb{R}^{|S| \times d}$ be the embedding matrix of all context tokens and $p \in \mathbb{R}^d$ the embedding of the ⟨cls⟩ token. We focus on the architecture defined in (1) where, given a vector $a \in \mathbb{R}^T$, $[\mathrm{Softmax}(a)]_i := \exp(a_i) / \sum_{j=1}^T \exp(a_j)$ for $i \in \{1, \dots, T\}$. We remark that the same model is also studied in [TLZO23, SS24]. In practice, it is common to include the $W^{KQ}$ matrix and consider a model with output $f(X; p, W^{KQ}, E) = \mathrm{Softmax}(p^\top W^{KQ} E_X^\top)\, E_X v$. Since $p^\top W^{KQ}$ plays the same role as $p$, and one can easily reconstruct $W^{KQ}$ from $p$ in each gradient update as discussed in [TLZO23], we use the model in (1) for simplicity.

Optimization problem.
The output vector $v$ is fixed and all the embedding vectors $p, E$ are trained with the standard logistic loss:
$$L(E, p) = \frac{1}{n} \sum_{k=1}^n \log\big(1 + \exp(-y_k f(X_k; E, p))\big) = \hat{\mathbb{E}}\big[\log(1 + \exp(-y f(X; E, p)))\big], \qquad (2)$$
where the notation $\hat{\mathbb{E}}$ is a shorthand for the average over the dataset $D = \{(X_k, y_k)\}_{k=1}^n$.

Empirical statistics of each token in the dataset. The goal of the paper is to characterize the structure of the embeddings $E, p$ obtained by optimizing the objective (2) via gradient descent, and we show that this structure is related to the empirical statistics of the tokens in the dataset. Specifically, after training, the softmax attention learns to select tokens that are more correlated to the labels based on the dataset. To quantify the correlation between a token $s$ and the label $y$, we define the average signed frequency of a token as:
$$\alpha_s := \frac{1}{nT} \sum_{(X, y) \in D} y \sum_{i=1}^T \mathbf{1}_{x_i = s} = \frac{1}{T} \hat{\mathbb{E}}\left[ y \sum_{i=1}^T \mathbf{1}_{x_i = s} \right]. \qquad (3)$$
In words, $\alpha_s$ is obtained by taking the number of occurrences of $s$ in sequences with a positive label, subtracting the number of occurrences of $s$ in sequences with a negative label, and finally dividing by the total number of tokens $nT$. As such, it provides
an average of the signed frequency of $s$, where the sign comes from the label of the sequences in which the token appears.

Definition 3.1 (Positive, negative and irrelevant tokens). We say that a token $s$ is (i) positive if $\alpha_s > 0$, (ii) negative if $\alpha_s < 0$, and (iii) irrelevant if $\alpha_s = 0$. Moreover, we say that a token $s$ is completely positive if it appears only in sequences with label $1$, and completely negative if it appears only in sequences with label $-1$.

In words, a token is positive (negative) when it is more frequently associated with the positive (negative) label; tokens that appear the same number of times associated with positive and negative labels are irrelevant. The quantity $\alpha_s$ quantifies how positive/negative a token is. Intuitively, if either $\alpha_s > \alpha_{s'} > 0$ or $\alpha_s < \alpha_{s'} < 0$, then the token $s$ is more relevant than the token $s'$ for the classification task and, therefore, we expect that this will be reflected in the structure of the corresponding embeddings.

4 Main theoretical results

We show that the trained attention model (1) learns to select important tokens from the empirical statistics of the data. First, in Section 4.1 we prove that training the context embeddings with a single gradient step suffices to capture the empirical importance of tokens in the dataset; then, in Section 4.2 we characterize the implicit bias of training the ⟨cls⟩ embedding $p$ until convergence, having fixed the context embeddings after the first gradient step.

4.1 One step of gradient descent learns the importance of the tokens

We start by showing that the first step of gradient descent is already enough to give a correlation between the embeddings and the output vector $v$. Furthermore, this correlation is proportional to the average signed frequency defined in (3). We initialize $v$ with any unit-norm vector and $E^0_s, p^0 \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{d} I)$ for all $s \in S$. Then, we perform one step of gradient descent with step size $\eta_0$ on all trainable embeddings:
$$p^1 = p^0 - \eta_0 \nabla_p L(E^0, p^0), \qquad E^1_s = E^0_s - \eta_0 \nabla_{E_s} L(E^0, p^0), \quad \text{for all } s \in S. \qquad (4)$$

Lemma 4.1.
For any $\delta > 0$, let
$$d \geq \max\left( 256,\ 2 \log\frac{|S|^2}{\delta^2} \right). \qquad (5)$$
Then, after the first step of gradient descent in (4), we have that, for any $s \in S$,
$$E^1_s = E^0_s + \frac{\eta_0}{2} \alpha_s v + \mathrm{err}_s, \qquad p^1 = p^0 + \mathrm{err}_p, \qquad (6)$$
where the error terms $\mathrm{err}_s, \mathrm{err}_p$ are bounded with probability at least $1 - \delta$ as
$$\max\Big\{\max_{s \in S} \|\mathrm{err}_s\|_2,\ \|\mathrm{err}_p\|_2\Big\} \leq 11\, \eta_0\, d^{-1/4}. \qquad (7)$$

Lemma 4.1 implies that, after one step of training, the embedding vector $E_s$ of each token $s$ learns the empirical importance of the tokens by adding a vector in the direction of the output vector $v$ with magnitude proportional to $\alpha_s$. The proof follows from the structure of the gradient update. In particular, it can be shown that
$$\nabla_{E_s} L(E^0, p^0) = -\hat{\mathbb{E}}\left[ y\, g(X, y) \left( \sum_{i=1}^T \Big( \sum_{j \neq i} (\mathbf{1}_{x_i = s} - \mathbf{1}_{x_j = s})\, q_i q_j \Big) (E^0_{x_i})^\top v\, p^0 + \sum_{i=1}^T \mathbf{1}_{x_i = s}\, q_i v \right) \right], \qquad (8)$$
with $q_i := \frac{\exp((p^0)^\top E^0_{x_i})}{\sum_{j=1}^T \exp((p^0)^\top E^0_{x_j})}$. Note that, for all $x_i$, $(p^0)^\top E^0_{x_i}$ is of order $1/\sqrt{d}$, due to the independent Gaussian initialization. Thus, the characterization in (8) implies that the gradient update is roughly $\frac{\eta_0}{2} \alpha_s v$ plus a term that is vanishing in $d$. The full argument is deferred to Appendix C.1.

4.2 Gradient flow on $p$ performs max-margin token selection

Lemma 4.1 shows the informative overlap between the output vector $v$ and the context embedding vectors $E_s$ after the first step
of gradient descent. However, (6) also implies that the overlap between the ⟨cls⟩ embedding vector $p$ and $E_s$ does not improve after the first step. Thus, next, we study the training dynamics of $p$, characterizing its implicit bias. Specifically, we fix the context embedding matrix to $E^1$ (obtained after the first gradient step) and train the ⟨cls⟩ embedding vector $p$ with gradient flow initialized at $p^1$ (obtained after the first gradient step):
$$\frac{d}{dt} p_t = -\nabla_p L(E^1, p_t). \qquad (9)$$
We consider gradient flow for technical convenience, and all results in this section can be readily extended to gradient descent with a small enough step size. For the rest of the section and in the related proofs appearing in the appendix, we will refer to the embeddings in $E^1$ as $E_s$ and not $E^1_s$, omitting the superscript to favor readability.

Max-margin token selection. Given the ⟨cls⟩ embedding $p$ and a sequence $X$, we denote the set of tokens in $X$ selected by $p$ as
$$S_X(p) = \Big\{ s : s = \arg\max_{s \in X} p^\top E_s \Big\}, \qquad (10)$$
and we define $\overline{S}_X(p) = X \setminus S_X(p)$. Intuitively, given a sequence $X$, the selected tokens in $X$ have the largest softmax weight (proportional to $\exp(p^\top E_{x_i})$). Note that, for $p' \neq p$, we may have that $S_X(p') = S_X(p)$ for all $X$. Thus, we define the equivalence relation
$$p \approxeq p' \iff S_X(p) = S_X(p'), \quad \text{for all } X \in \mathcal{X}_n. \qquad (11)$$
Intuitively, two vectors $p, p'$ are equivalent under the above relation if they select the same tokens for all the sequences. Given a vector $p^\circ$, we denote by $P_{p^\circ}$ its equivalence class, and we define the set of max-margin directions among all vectors in $P_{p^\circ}$ as
$$P^*(p^\circ) = \left\{ \frac{\hat{p}}{\|\hat{p}\|_2} : \hat{p} = \arg\min_{p \in P_{p^\circ}} \|p\|_2 \ \text{ s.t. } \ p^\top (E_s - E_{s'}) \geq 1,\ \forall s \in S_X(P_{p^\circ}),\ \forall s' \in \overline{S}_X(P_{p^\circ}),\ \forall X \in \mathcal{X}_n \right\}. \qquad (12)$$
We first show in the lemma below that the max-margin problem in (12) always has a unique solution, which means that $P^*(p^\circ)$ is always a singleton. Thus, later on, we will use $\hat{p}(p^\circ)$ as the solution to (12), and $p^*(p^\circ) = \frac{\hat{p}(p^\circ)}{\|\hat{p}(p^\circ)\|_2}$. We drop the dependency on $p^\circ$ when there is no confusion.

Lemma 4.2. For any $p^\circ \neq 0$, the max-margin problem in (12) has a unique solution, denoted as $\hat{p}$.
Furthermore, for any $\delta > 0$, pick
$$d \geq \max\left( 256,\ 2 \log\frac{|S|^2}{\delta^2},\ (88\eta_0^2 + 111\eta_0 + 2)^8,\ |S| + 3 \right).$$
Let $N$ be the number of constraints in (12) and let the $i$-th constraint be $p^\top (E_{s_i} - E_{s'_i}) \geq 1$. Define $M = [E_{s_1} - E_{s'_1}, \dots, E_{s_N} - E_{s'_N}]^\top \in \mathbb{R}^{N \times d}$. Then, with probability at least $1 - \delta$, we have $\hat{p} = M^\dagger \mathbf{1}_N$, where $M^\dagger$ denotes the pseudo-inverse of $M$ and $\mathbf{1}_N$ a vector of $N$ ones.

Lemma 4.2 (proved in Appendix C.2) implies that, with high probability, the solution $\hat{p}$ of the max-margin problem in (12) makes all the constraints tight.

Implicit bias of gradient flow. While Lemma 4.1 holds for any dataset, we need an extra assumption on the data to analyze the gradient flow, due to the complex loss landscape caused by softmax attention.

Assumption 1. Each sequence in $\mathcal{X}_n$ contains either a single completely positive token or a single completely negative token, and all remaining tokens are irrelevant.

Assumption 1 implies that all sequences in the dataset contain precisely one relevant token, and the relevant token also aligns with the label. We remark that datasets containing only one relevant token have also been considered in prior work, see [TLTO23, Theorem 1] and [MBBB25]. We
further denote by $S_c$ the set containing all completely positive and all completely negative tokens.

Theorem 4.3. Under Assumption 1, for any $\delta > 0$, let
$$\eta_0 \geq 4 n^2 T^2, \qquad d \geq \max\left( 256,\ 2 \log\frac{|S|^2}{\delta^2},\ (88\eta_0^2 + 111\eta_0 + 2)^8,\ |S| + 3 \right). \qquad (13)$$
Let $p_t$ be the solution of the gradient flow (9). Then, with probability at least $1 - \delta$, we have that $\|p_t\|_2 \to \infty$. Furthermore, assuming that $p_\infty := \lim_{t \to +\infty} \frac{p_t}{\|p_t\|_2}$ exists, the limiting direction $p_\infty$ satisfies the following properties with probability at least $1 - \delta$:

1. $p_\infty$ selects all completely positive and completely negative tokens, i.e., $S_c \subseteq \bigcup_X S_X(p_\infty)$.

2. $p_\infty$ is the max-margin direction for such a selection, i.e., $p_\infty = p^*(p_\infty)$.

Theorem 4.3 shows that, if $p_t$ converges in direction, it must converge to the max-margin direction that selects all the completely positive/negative tokens. A sketch of the argument is given below, and the complete proof is in Appendix C.3.

Proof sketch. We prove the three statements separately. First, we show that $\|p_t\|_2 \to \infty$ (Lemma C.1). To do so, we explicitly construct a vector $\hat{p}$ such that $\hat{p}^\top \nabla_p L(E^1, p) < 0$ for all $p$. This means that there is no stationary point with finite norm, which implies that the norm of the vector obtained via gradient flow diverges. Next, we show that, if the directional limit $p_\infty$ exists, then it must select all the important tokens (Lemma C.2). To do so, we note that, after one step, the model approximately selects the important tokens. This implies that $p_t$ selects important tokens for all $t$, as gradient flow cannot increase the loss. Finally, we show that $p_\infty$ is the max-margin solution of a feasible selection (Lemma C.6). To do so, we assume by contradiction that the directional limit is some vector $p'$ that is not the max-margin solution of a feasible selection. Under this assumption, we prove that $\lim_{t \to \infty} \frac{p_t}{\|p_t\|_2} = p^*(p')$, which is in fact the solution of (12), thus giving a contradiction.
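The closed form in Lemma 4.2, $\hat{p} = M^\dagger \mathbf{1}_N$, is easy to sanity-check numerically. The sketch below uses a random full-row-rank $M$ as a stand-in (an assumption for illustration only; in the paper, the rows of $M$ are differences of trained embeddings):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 6, 32                        # N margin constraints, dimension d > N
M = rng.normal(size=(N, d))         # rows play the role of E_{s_i} - E_{s'_i}
p_hat = np.linalg.pinv(M) @ np.ones(N)  # p_hat = M^dagger 1_N

# Every constraint p^T (E_{s_i} - E_{s'_i}) >= 1 is tight (equals exactly 1),
# consistent with Lemma 4.2; p_hat is the minimum-norm vector achieving this.
print(np.allclose(M @ p_hat, 1.0))  # True (random Gaussian M has full row rank)
```

Since $M^\dagger$ is a right inverse of a full-row-rank $M$, $M \hat{p} = \mathbf{1}_N$ holds exactly, and $\hat{p}$ is the minimum-norm solution of that linear system.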
Before proceeding with the characterization of the max-margin solution, we highlight some differences with respect to the related work [TLZO23]. Theorem 3 in [TLZO23] shows that gradient descent on $p$ converges to a locally optimal max-margin solution when initialized in a regime close enough to such a solution, and Theorem 4 in [TLZO23] shows that the regularization path can only converge to locally max-margin solutions. However, these results do not exclude the possibility of the gradient flow converging to directions that are not locally optimal and not the max-margin direction. In contrast, we characterize all possible directions the gradient flow converges to, showing that these are max-margin directions that select all completely positive/negative tokens. Furthermore, we do so without starting from an initialization that is close enough to such a solution. This requires a different proof strategy as compared to [TLZO23].

Characterization of the max-margin solution. Theorem 4.3 gives that, if gradient flow converges in direction, the limiting direction is the max-margin one that selects all important tokens in each sentence. However, this does not exclude a priori the possibility that gradient flow also selects some irrelevant tokens. To address this point, we now establish whether and how many irrelevant tokens can be selected: in general, it is not possible to select all irrelevant tokens (Lemma 4.4) and,
under an additional assumption, no irrelevant token is selected (Lemma 4.5).

Lemma 4.4. Suppose $\hat{p}$ selects all the tokens, i.e., $S_X(\hat{p})=S$. Then, $p_\infty\ne\hat{p}$.

Proof. If $\hat{p}$ selects all the tokens, then $\hat{p}^\top E_{x_i}=\hat{p}^\top E_{x_j}$ for all $x_i,x_j\in X$ and for all $X\in\mathcal{X}_n$. Thus, by Lemma A.2, $\hat{p}^\top\nabla_pL(E,p)=0$ for any $p$, which gives the desired result.

Lemma 4.4 shows that the directional limit $p_\infty$ (when it exists) cannot select all tokens and, as it selects all important ones, it must be biased towards them. As an application, consider the case where there is only one irrelevant token in the vocabulary. Then, the combination of Theorem 4.3 and Lemma 4.4 gives that only the completely positive/negative tokens are selected by gradient flow. Going beyond the case of a single irrelevant token, the result below provides a sufficient condition for gradient flow to select only important tokens.

Lemma 4.5. Under Assumption 1, for any $\delta>0$, assume that (13) holds. Let $\hat{p}$ be the solution of the max-margin problem (12) that only selects the completely positive/negative tokens, i.e.,
$$\hat{p}=\arg\min_p\|p\|_2, \quad \text{s.t.} \quad p^\top(E_{s^X_*}-E_s)\ge 1,\ \forall s\in X\setminus\{s^X_*\},\ \forall X\in\mathcal{X}_n,$$
where $s^X_*$ denotes the unique completely positive/negative token in the sequence $X$. Assume that $p_\infty:=\lim_{t\to\infty}\frac{p_t}{\|p_t\|_2}$ exists and that, for any $\hat{p}'$ solving (12) with a different selection, $\|\hat{p}\|_2<(1-\mu)\|\hat{p}'\|_2$ for some constant $\mu$ that does not depend on $d$. Then, by taking
$$d \ge \Big(\frac{8T(1-\mu)}{\mu\eta_0}\Big)^4,$$
we have that $p_\infty=\frac{\hat{p}}{\|\hat{p}\|_2}$ with probability at least $1-\delta$.

The proof of Lemma 4.5 is deferred to Appendix C.4. The sufficient condition of the result above requires the max-margin direction that does not select irrelevant tokens to have a larger margin than any other max-margin solution associated to a different token selection. We expect this to be the case, e.g.,
for datasets where all the completely positive/negative tokens have the same $\alpha_s$. In fact, given the structure of the context embeddings in (6), the max-margin solution $\hat{p}$ is expected to satisfy $\hat{p}^\top v\approx 0$, $\hat{p}^\top E_s\approx 1$, and $\hat{p}^\top E_{s'}\approx 0$ for all $s\in S_X(\hat{p})$ and $s'\notin S_X(\hat{p})$. Since the token embeddings at initialization are approximately orthogonal to each other, $\hat{p}\approx\sum_{s\in S_X(p)}E^0_s$, meaning that $\|\hat{p}\|_2\approx\sqrt{|S_X(p)|}$, which implies that the sufficient condition holds.

Figure 2: Dot-product of token embeddings with the ⟨cls⟩ embedding $p$ (left) and with the regression coefficients $v$ (right), as a function of the token-wise difference in posterior probabilities for synthetic data sampled according to (15). We consider the one-layer attention model in (1) with all parameters trained until convergence. The point cloud around zero corresponds to the tokens in the irrelevant set.

5 Numerical experiments

To support our theoretical findings, we showcase the correlation of the embeddings with the ⟨cls⟩ embedding $p$ and the output vector $v$, having trained all the parameters with gradient descent until convergence. We consider different datasets (synthetic data in Figure 2; IMDB/Yelp datasets in Figures 1 and 3) and different architectures (one-layer model (1) in Figures 2 and 3; two-layer model (17) in Figure 1). Taken together, the experiments display an excellent agreement with our theory, going beyond the one-layer architecture (1) and beyond the requirements on the data-generating process. Specifically, the trained embeddings capture the importance of the corresponding tokens: the dot-product with $v$ is proportional to how frequently the token appears in positive sequences rather than in negative ones, and the
dot-product with $p$ is proportional to the modulus of such frequency. We detail below the experimental design.

Synthetic data. Let us define the data-generating process for the synthetic experiments in Figure 2. The data is generated according to a $K$-level model. Namely, the vocabulary set $S$ is partitioned as
$$S = \tilde{S} \cup \{S^{-1}_k\}_{k=1}^K \cup \{S^{+1}_k\}_{k=1}^K. \quad (14)$$
Here, $\tilde{S}$ contains irrelevant tokens appearing in both positive and negative contexts with equal probability, while $S^{+1}_k$ and $S^{-1}_k$ (for $k\in\{1,\ldots,K\}$) contain tokens appearing mostly in positive and negative contexts, respectively. Formally, define the importance levels $\tilde{\delta},\delta_1,\ldots,\delta_K>0$. Then, given the sequence label $y\in\{-1,+1\}$ and $s\in S$, we sample the tokens from the vocabulary as
$$p(s\mid y) = \begin{cases} \dfrac{1-\tilde{\delta}}{|\tilde{S}|}, & s\in\tilde{S}, \\[4pt] \dfrac{\tilde{\delta}(1-\delta_k)}{\sum_{k=1}^K|S^y_k|}, & s\in S^y_k, \\[4pt] \dfrac{\tilde{\delta}\delta_k}{\sum_{k=1}^K|S^{\neg y}_k|}, & s\in S^{\neg y}_k, \end{cases} \quad (15)$$
where $\neg$ denotes the binary inversion, i.e., $\neg(+1)=-1$ and $\neg(-1)=+1$. The law (15) implies the following posterior distribution:
$$p(y\mid s) = \begin{cases} 1/2, & s\in\tilde{S}, \\ 1-\delta_k, & s\in S^y_k, \\ \delta_k, & s\in S^{\neg y}_k. \end{cases} \quad (16)$$

Figure 3: Dot-product of token embeddings with the ⟨cls⟩ embedding $p$ and with the regression coefficients $v$, as a function of the token-wise difference in posterior for the IMDB dataset (top row) and the Yelp dataset (bottom row). We consider the one-layer attention model in (1) with all parameters trained until convergence.

From (16), it is clear that (i) $\tilde{S}$ contains irrelevant tokens, as the posterior is uniform, and (ii) $\delta_k$ quantifies the importance of the tokens in $S^{\pm 1}_k$ by skewing the posterior to be $(\delta_k, 1-\delta_k)$. For the experiments in Figure 2, we select the following hyper-parameters: $|S|=2048$, $K=8$, and sequence length $T=256$; $|S^{+1}_k|=|S^{-1}_k|$ with $|S^{+1}_k|=4+2^k$, and $|\tilde{S}|=964$; $\tilde{\delta}=0.05$ and $\{\delta_k\}_{k=1}^K=\{0.45,0.35,0.3,0.25,0.2,0.1,0.05,0.02\}$. Figure 2 shows a clear separation between positive and negative tokens (right plot, with the dot-product $\langle E_{x_i},v\rangle$), and the selection mechanism (⟨cls⟩ token) assigns high weights to tokens with larger importance $\delta_k$ (left plot, with the dot-product $\langle E_{x_i},p\rangle$).
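The sampling law (15) and the induced posterior (16) can be reproduced directly. The sketch below uses much smaller sizes than the paper's experiments (an assumption, purely for speed) and checks empirically that irrelevant tokens have posterior $\approx 1/2$ while tokens in $S^{+1}_1$ have posterior $\approx 1-\delta_1$:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy instance of the K-level model (15); sizes are illustrative assumptions.
K, T, n = 3, 64, 4000
deltas = np.array([0.4, 0.2, 0.05])   # importance levels delta_k
d_tilde = 0.3                          # total mass on relevant tokens
sizes = np.array([2, 2, 2])            # |S_k^{+1}| = |S_k^{-1}|
n_irr = 10
# Vocabulary layout: [irrelevant | S^{+1}_1..K | S^{-1}_1..K]
vocab = n_irr + 2 * sizes.sum()

def token_probs(y):
    """The conditional law p(s | y) of (15)."""
    p = np.zeros(vocab)
    p[:n_irr] = (1 - d_tilde) / n_irr
    off = n_irr
    for sign in (+1, -1):
        for k in range(K):
            w = (1 - deltas[k]) if sign == y else deltas[k]
            p[off:off + sizes[k]] = d_tilde * w / sizes.sum()
            off += sizes[k]
    return p

assert np.isclose(token_probs(+1).sum(), 1.0)
# Sample a dataset and estimate the posterior p(y = 1 | s) of (16).
counts = np.zeros((2, vocab))
for _ in range(n):
    y = int(rng.choice([-1, 1]))
    toks = rng.choice(vocab, size=T, p=token_probs(y))
    np.add.at(counts[(y + 1) // 2], toks, 1)
post = counts[1] / counts.sum(axis=0)
# Irrelevant tokens ~ 1/2; S^{+1}_1 tokens ~ 1 - delta_1 = 0.6.
print(post[:n_irr].mean(), post[n_irr:n_irr + sizes[0]].mean())
```

The empirical posteriors match (16) up to sampling noise, which is the separation visible in Figure 2.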
IMDB and Yelp datasets. The IMDB dataset¹ consists of 50000 reviews, of average length 239 words per review, each associated to either a positive or a negative sentiment. Yelp reviews² provide a much larger selection. To align the data size and sequence length with the IMDB dataset, we randomly subsample a portion of the Yelp dataset based on sequence length, i.e., we select reviews with at least 1000 and at most 1500 characters. In addition, Yelp reviews come with a five-star rating, which we convert to a binary sentiment according to the following rule: 1- and 2-star reviews are assigned label −1; 4- and 5-star reviews are assigned label +1; neutral reviews, i.e., those with a 3-star score, are removed. We adhere to a typical preprocessing pipeline for both datasets: we clean the data from punctuation symbols, omit stop-words, and apply stemming; then we use the BERT tokenizer from Hugging Face³ to tokenize sequences. Tokens that appear fewer than 50 times are purged. The numerical simulations for both datasets are reported in Figure 3, which displays a phenomenology similar to that obtained for synthetic data in Figure 2, thus providing additional grounding for our theoretical claims.

Two-layer model. We also consider the following
two-layer model:
$$E'_X = \mathrm{LayerNorm}\big(\mathrm{Softmax}(E_XE_X^\top)E_X + E_X\big), \qquad f(X;p,E) = \mathrm{Softmax}\big(p^\top(E'_X)^\top\big)E'_Xv, \quad (17)$$
which includes both a skip connection and the layer-norm. We note that, for both IMDB and Yelp data, the model in (17) achieves significantly smaller loss values at convergence (of the order of $10^{-5}$, in contrast to the order of $10^{-1}$ achieved by the model in (1)). However, even if this model is more complex than the one analyzed in Section 4, the results in Figure 1 are still remarkably similar to those in Figures 2 and 3.

Finally, we note that all plots consider on the x-axis the difference in posterior probabilities
$$p(1\mid E_{x_i}) - p(0\mid E_{x_i}) = \frac{\sum_{(X,y)\in\mathcal{D}} y \sum_{i=1}^T \mathbf{1}_{x_i=s}}{\sum_{(X,y)\in\mathcal{D}} \sum_{i=1}^T \mathbf{1}_{x_i=s}} \quad (18)$$
in place of the quantity $\alpha_s$ defined in (3). In fact, while the quantity in (3) appears naturally from the analysis of gradient descent, the difference in posterior probabilities provides better visuals for real data (IMDB and Yelp). The difference between (3) and (18) lies in the normalization used: the posterior difference in (18) is the discrepancy between the counts of the token $x_i$ in positive and negative sentences normalized by the total number of occurrences of $x_i$, while the quantity in (3) normalizes the discrepancy by the total number of tokens $nT$ in the dataset. For synthetic data sampled according to (15), due to the uniform nature of the sampling procedure, all tokens appear the same number of times; thus, both quantities coincide up to a fixed scaling and are therefore equivalent. Additional details on the hyperparameter settings for all the experimental setups are contained in Appendix D.

¹ https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews
² https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset
³ https://huggingface.co/google-bert/bert-base-uncased

6 Conclusions and limitations

In this paper, we study how the embedding vectors trained via gradient methods capture the importance of different tokens in the dataset.
We theoretically characterize (i) the context embedding $E_s$ after one gradient step, and (ii) the implicit bias of the ⟨cls⟩ embedding $p$ after training with gradient flow until convergence. We conduct experiments on synthetic and real-world datasets which demonstrate the generality of our findings. A limitation of our work is that the characterization we put forward is only in terms of the first-order statistics of the tokens (i.e., the frequencies with which they occur in the dataset), and it does not describe how the model learns the causal structure between tokens. In practice, both first-order statistics and causal structure are expected to be crucial for the model to "understand" a text. While our theory assumes a one-layer attention model, the numerical results of Figure 1 suggest that a similar qualitative picture holds more generally. This prompts us to conjecture that, in deeper attention models with multiple heads, the earlier layers form induction heads [OEN+22] which learn the causal structure between tokens, while the later layers perform classification based on the empirical statistics of the resulting k-tuples. We regard this investigation as an exciting future direction.

Acknowledgements

This research was funded in whole or in part by the Austrian Science Fund (FWF) 10.55776/COE12. For the purpose of open access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. MM and AS were partially
funded by the European Union (ERC, INF2, project number 101161364). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. AS was also supported by the SNF Grant 204439. SO was supported by the Office of Naval Research grant N000142412289.

References

[AAA+23] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint, 2023.

[ABL+24] Emmanuel Abbe, Samy Bengio, Aryo Lotfi, Colin Sandon, and Omid Saremi. How far can transformers reason? The globality barrier and inductive scratchpad. NeurIPS, 2024.

[ACHL19] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. NeurIPS, 2019.

[Ant25] Anthropic. Claude language model. https://www.anthropic.com/, 2025.

[BCB15] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations, 2015.

[BM24] Simone Bombari and Marco Mondelli. Towards understanding the word sensitivity of attention layers: A study via random features. International Conference on Machine Learning, 2024.

[CB20] Lenaic Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. Conference on Learning Theory, 2020.

[CDB24] Vivien Cabannes, Elvis Dohmatob, and Alberto Bietti. Scaling laws for associative memories. International Conference on Learning Representations, 2024.

[GIZ+25] Halil Alperen Gozeten, M Emrullah Ildiz, Xuechen Zhang, Mahdi Soltanolkotabi, Marco Mondelli, and Samet Oymak.
Test-time training provably improves transformers as in-context learners. International Conference on Machine Learning, 2025.

[GKPR24] Borjan Geshkovski, Hugo Koubbi, Yury Polyanskiy, and Philippe Rigollet. Dynamic metastability in the self-attention model. arXiv preprint, 2024.

[GLPR23a] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. The emergence of clusters in self-attention dynamics. NeurIPS, 2023.

[GLPR23b] Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, and Philippe Rigollet. A mathematical perspective on transformers. arXiv preprint, 2023.

[HGJ+19] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. International Conference on Machine Learning, 2019.

[IHL+24] M Emrullah Ildiz, Yixiao Huang, Yingcong Li, Ankit Singh Rawat, and Samet Oymak. From self-attention to Markov models: Unveiling the dynamics of generative transformers. International Conference on Machine Learning, 2024.

[JEP+21] John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. Nature, 2021.

[JT19] Ziwei Ji and Matus Telgarsky. The implicit bias of gradient descent on nonseparable data. Conference on Learning Theory, 2019.

[JTA24] Addison Kristanto Julistiono, Davoud Ataee Tarzanagh, and Navid Azizan. Optimizing attention with mirror descent: Generalized max-margin token selection. Workshop Contribution at NeurIPS, 2024.

[JZP+24] Albert Q Jiang, Alicja Ziarko, Bartosz Piotrowski, Wenda Li, Mateja Jamnik, and Piotr Miłoś. Repurposing language models into embedding models: Finding the compute-optimal recipe. NeurIPS, 2024.
[KCM+23]Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, and
Effrosyni Kokiopoulou. Three towers: Flexible contrastive learning with pretrained image models. NeurIPS, 2023.

[LHI+24] Yingcong Li, Yixiao Huang, Muhammed E Ildiz, Ankit Singh Rawat, and Samet Oymak. Mechanics of next token prediction with self-attention. International Conference on Artificial Intelligence and Statistics, 2024.

[LRL+24] Daniel Levine, Syed Asad Rizvi, Sacha Lévy, Nazreen Pallikkavaliyaveetil, David Zhang, Xingyu Chen, Sina Ghadermarzi, Ruiming Wu, Zihe Zheng, Ivan Vrkic, et al. Cell2sentence: Teaching large language models the language of biology. International Conference on Machine Learning, 2024.

[MBBB25] Pierre Marion, Raphaël Berthier, Gérard Biau, and Claire Boyer. Attention layers provably solve single-location regression. International Conference on Learning Representations, 2025.

[MBG+24] Ashok Vardhan Makkuva, Marco Bondaschi, Adway Girish, Alliot Nagle, Hyeji Kim, Michael Gastpar, and Chanakya Ekbote. Local to global: Learning dynamics and effect of initialization for transformers. NeurIPS, 2024.

[MBG+25] Ashok Vardhan Makkuva, Marco Bondaschi, Adway Girish, Alliot Nagle, Martin Jaggi, Hyeji Kim, and Michael Gastpar. Attention with Markov: A curious case of single-layer transformers. International Conference on Learning Representations, 2025.

[MSX+24] Roey Magen, Shuning Shang, Zhiwei Xu, Spencer Frei, Wei Hu, and Gal Vardi. Benign overfitting in single-head attention. arXiv preprint, 2024.

[NDL24] Eshaan Nichani, Alex Damian, and Jason D Lee. How transformers learn causal structure with gradient descent. International Conference on Machine Learning, 2024.

[NTS15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. Workshop Contribution at International Conference on Learning Representations, 2015.
[OEN+22] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction heads. arXiv preprint, 2022.

[ORST23] Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, and Christos Thrampoulidis. On the role of attention in prompt-tuning. International Conference on Machine Learning, 2023.

[RKH+21] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. International Conference on Machine Learning, 2021.

[RPG+21] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. International Conference on Machine Learning, 2021.

[SCWZ25] Heejune Sheen, Siyu Chen, Tianhao Wang, and Harrison H Zhou. Implicit regularization of gradient flow on one-layer softmax attention. Workshop Contribution at International Conference on Learning Representations, 2025.

[SHN+18] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. Journal of Machine Learning Research, 2018.

[SHT23] Clayton Sanford, Daniel J Hsu, and Matus Telgarsky. Representational strengths and limitations of transformers. NeurIPS, 2023.

[SS24] Keitaro Sakamoto and Issei Sato. Benign or not-benign overfitting in token selection of attention mechanism. arXiv preprint, 2024.

[TLTO23] Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, and Samet Oymak. Transformers as support vector machines. Workshop Contribution at NeurIPS, 2023.

[TLZO23] Davoud Ataee Tarzanagh, Yingcong Li, Xuechen Zhang, and Samet Oymak. Max-margin token selection in attention mechanism. NeurIPS, 2023.

[TMIS24] Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie,
and Haim Sompolinsky. Dissecting the interplay of attention paths in a statistical mechanics theory of transformers. NeurIPS, 2024.

[TWCD23] Yuandong Tian, Yiping Wang, Beidi Chen, and Simon Shaolei Du. Scan and snap: Understanding training dynamics and token composition in 1-layer transformer. NeurIPS, 2023.

[VBC20] James Vuckovic, Aristide Baratin, and Remi Tachet des Combes. A mathematical theory of attention. arXiv preprint, 2020.

[VDT25] Bhavya Vasudeva, Puneesh Deora, and Christos Thrampoulidis. Implicit bias and fast convergence rates for self-attention. Transactions on Machine Learning Research, 2025.

[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 2017.

[WWHL24] Zixuan Wang, Stanley Wei, Daniel Hsu, and Jason D Lee. Transformers provably learn sparse token selection while fully-connected nets cannot. International Conference on Machine Learning, 2024.

[YKWL24] Hongru Yang, Bhavya Kailkhura, Zhangyang Wang, and Yingbin Liang. Training dynamics of transformers to recognize word co-occurrence via gradient flow analysis. NeurIPS, 2024.

[ZNS+23] Tian Zhou, Peisong Niu, Liang Sun, Rong Jin, et al. One fits all: Power general time series analysis by pretrained LM. NeurIPS, 2023.

Additional notation. Throughout the appendices, to simplify the notation, we write
$$a_i(X) := p^\top E_{x_i}, \qquad q_i(X) := \frac{\exp(a_i(X))}{\sum_{j=1}^T \exp(a_j(X))}, \quad (19)$$
so that $f(X;p,E)=\sum_{i=1}^T q_i(X)E_{x_i}^\top v$. We will drop the dependence on $X$ in $a_i(X), q_i(X)$ when there is no confusion. We also denote
$$\gamma_i(X,y) := y E_{x_i}^\top v, \quad (20)$$
dropping again the dependency on $X, y$ when there is no confusion. Finally, we define
$$g(X,y) := \frac{1}{1+\exp(yf(X;p,E))}. \quad (21)$$

Properties of initialization.
By standard concentration inequalities, with probability at least $1-\delta$, at initialization we have
$$\max\Big\{\max_{s\ne s'\in S}|E_s^\top E_{s'}|,\ \max_{s\in S}|E_s^\top v|,\ \max_{s\in S}|E_s^\top p|,\ |p^\top v|\Big\} \le \frac{1}{\sqrt{d}}\sqrt{2\log\frac{|S|^2}{\delta}},$$
$$\max\Big\{\max_{s\in S}\|E_s\|_2,\ \|p\|_2\Big\} \le 2, \qquad \min_{s\in S}\|E_s\|_2 \ge \frac{1}{2}. \quad (22)$$
For all results of the paper holding with probability at least $1-\delta$, we will be implicitly conditioning on (22).

A Technical lemmas

Lemma A.1. The gradients of the empirical loss are given by
$$\nabla_{E_s}L(E,p) = -\hat{\mathbb{E}}\Bigg[y\,g(X,y)\Bigg(\sum_{i=1}^T\Big(\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_i(X)q_j(X)\Big)E_{x_i}^\top v\,p + \sum_{i=1}^T \mathbf{1}_{x_i=s}\,q_iv\Bigg)\Bigg],$$
$$\nabla_p L(E,p) = -\hat{\mathbb{E}}\Bigg[y\,g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}q_i(X)q_j(X)(E_{x_i}-E_{x_j})\Big)E_{x_i}^\top v\Bigg],$$
where we have defined $g(X,y)=\frac{1}{1+\exp(yf(X))}$.

Proof. We start by taking the gradient of $q_i$:
$$\nabla_{E_s}q_i(X) = \frac{\mathbf{1}_{x_i=s}\exp(E_{x_i}^\top p)\,p\sum_{j=1}^T\exp(E_{x_j}^\top p) - \sum_{j=1}^T\mathbf{1}_{x_j=s}\exp(E_{x_j}^\top p)\,p\,\exp(E_{x_i}^\top p)}{\big(\sum_{j=1}^T\exp(E_{x_j}^\top p)\big)^2} = p\sum_{j=1}^T(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j = p\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j,$$
$$\nabla_p q_i(X) = \frac{\exp(E_{x_i}^\top p)E_{x_i}\sum_{j=1}^T\exp(E_{x_j}^\top p) - \sum_{j=1}^T\exp(E_{x_j}^\top p)E_{x_j}\exp(E_{x_i}^\top p)}{\big(\sum_{j=1}^T\exp(E_{x_j}^\top p)\big)^2} = \sum_{j=1}^T q_iq_j(E_{x_i}-E_{x_j}) = \sum_{j\ne i}q_iq_j(E_{x_i}-E_{x_j}).$$
Next, we look at the gradient of $f(X;p,E)$:
$$\nabla_{E_s}f(X;p,E) = \sum_{i=1}^T(\nabla_{E_s}q_i)E_{x_i}^\top v + \sum_{i=1}^T\mathbf{1}_{x_i=s}q_iv = \sum_{i=1}^T\Big(\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j\Big)E_{x_i}^\top v\,p + \sum_{i=1}^T\mathbf{1}_{x_i=s}q_iv,$$
$$\nabla_p f(X;p,E) = \sum_{i=1}^T\Big(\sum_{j\ne i}q_iq_j(E_{x_i}-E_{x_j})\Big)E_{x_i}^\top v.$$
This allows us to conclude that
$$\nabla_{E_s}L(E,p) = \hat{\mathbb{E}}\Big[\frac{-y}{1+\exp(yf(X;p,E))}\nabla_{E_s}f(X;p,E)\Big], \qquad \nabla_pL(E,p) = \hat{\mathbb{E}}\Big[\frac{-y}{1+\exp(yf(X;p,E))}\nabla_pf(X;p,E)\Big],$$
thus concluding the proof.
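The analytic gradient of Lemma A.1 can be sanity-checked against a finite-difference approximation, using the pairwise rewriting of $-\hat{p}^\top\nabla_pL$ proved in Lemma A.2 below. A minimal sketch; the toy dimensions and Gaussian stand-ins for the embeddings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n = 12, 5, 6
v = rng.normal(size=d) / np.sqrt(d)
# Each datum is a (T, d) matrix of token embeddings E_{x_i} and a label y.
data = [(rng.normal(size=(T, d)) / np.sqrt(d), int(rng.choice([-1, 1])))
        for _ in range(n)]

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def loss(p):
    # Empirical logistic loss of f(X;p,E) = sum_i q_i(X) E_{x_i}^T v.
    out = 0.0
    for Ex, y in data:
        q = softmax(Ex @ p)
        out += np.log1p(np.exp(-y * (q @ (Ex @ v))))
    return out / n

def pairwise(p, p_hat):
    # RHS of Lemma A.2: E-hat[ g * sum_{i<j} (a_i - a_j) q_i q_j (gamma_i - gamma_j) ],
    # with a_i = p_hat . E_{x_i} and gamma_i = y E_{x_i} . v.
    out = 0.0
    for Ex, y in data:
        q = softmax(Ex @ p)
        g = 1.0 / (1.0 + np.exp(y * (q @ (Ex @ v))))
        a_hat, gam = Ex @ p_hat, y * (Ex @ v)
        for i in range(T):
            for j in range(i + 1, T):
                out += g * (a_hat[i] - a_hat[j]) * q[i] * q[j] * (gam[i] - gam[j])
    return out / n

p, p_hat = rng.normal(size=d), rng.normal(size=d)
eps = 1e-5
fd = (loss(p + eps * p_hat) - loss(p - eps * p_hat)) / (2 * eps)  # ~ p_hat . grad_p L
print(fd, -pairwise(p, p_hat))  # the two numbers agree up to O(eps^2)
```

The agreement follows from the weighted-covariance identity $\sum_i q_ix_iy_i-(\sum_iq_ix_i)(\sum_iq_iy_i)=\sum_{i<j}q_iq_j(x_i-x_j)(y_i-y_j)$ used in the proof of Lemma A.2.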
Lemma A.2. For any vector $\hat{p}$, we have
$$-\hat{p}^\top\nabla_pL(E,p) = \hat{\mathbb{E}}\Bigg[g(X,y)\sum_{i=1}^T\sum_{j>i}\big(\hat{a}_i(X)-\hat{a}_j(X)\big)q_i(X)q_j(X)\big(\gamma_i(X,y)-\gamma_j(X,y)\big)\Bigg],$$
where $\hat{a}_i=\hat{p}^\top E_{x_i}$ for all $i\in\{1,\ldots,T\}$.

Proof. From Lemma A.1, we have
$$\nabla_pL(E,p) = -\hat{\mathbb{E}}\Bigg[y\,g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}q_i(X)q_j(X)(E_{x_i}-E_{x_j})\Big)E_{x_i}^\top v\Bigg] = -\hat{\mathbb{E}}\Bigg[g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}q_i(X)q_j(X)(E_{x_i}-E_{x_j})\Big)\gamma_i(X,y)\Bigg] = -\hat{\mathbb{E}}\Big[g(X,y)\,E_X^\top\big(\mathrm{Diag}(q_X)-q_Xq_X^\top\big)\gamma(X,y)\Big],$$
where $q_X=[q_1(X),\ldots,q_T(X)]^\top$, $\gamma(X,y)=[\gamma_1(X,y),\ldots,\gamma_T(X,y)]^\top$, and $\mathrm{Diag}(q_X)$ denotes the diagonal matrix with $[\mathrm{Diag}(q_X)]_{i,i}=q_i(X)$. Thus, letting $\hat{a}=[\hat{a}_1,\ldots,\hat{a}_T]\in\mathbb{R}^T$ with $\hat{a}_i=\hat{p}^\top E_{x_i}$, we have
$$-\hat{p}^\top\nabla_pL(E,p) = \hat{\mathbb{E}}\big[g(X,y)\,\hat{p}^\top E_X^\top(\mathrm{Diag}(q_X)-q_Xq_X^\top)\gamma(X,y)\big] = \hat{\mathbb{E}}\big[g(X,y)\,\hat{a}^\top(\mathrm{Diag}(q_X)-q_Xq_X^\top)\gamma(X,y)\big]$$
$$= \hat{\mathbb{E}}\Bigg[g(X,y)\Bigg(\sum_{i=1}^T\hat{a}_iq_i(1-q_i)\gamma_i - \sum_{i=1}^T\sum_{j\ne i}\hat{a}_iq_iq_j\gamma_j\Bigg)\Bigg] = \hat{\mathbb{E}}\Bigg[g(X,y)\sum_{i=1}^T\sum_{j\ne i}\hat{a}_iq_iq_j(\gamma_i-\gamma_j)\Bigg] \quad \Big(\text{use } 1-q_i=\textstyle\sum_{j\ne i}q_j\Big)$$
$$= \hat{\mathbb{E}}\Bigg[g(X,y)\,\frac{1}{2}\sum_{i=1}^T\sum_{j\ne i}(\hat{a}_i-\hat{a}_j)q_iq_j(\gamma_i-\gamma_j)\Bigg] = \hat{\mathbb{E}}\Bigg[g(X,y)\sum_{i=1}^T\sum_{j>i}(\hat{a}_i-\hat{a}_j)q_iq_j(\gamma_i-\gamma_j)\Bigg].$$

Lemma A.3 (Convergence lemma). Let $\|p_t\|_2\to\infty$ and suppose there exists $\hat{p}$ such that, for any $\epsilon>0$, there is a $\bar{t}(\epsilon)$ ensuring
$$-\frac{\hat{p}^\top}{\|\hat{p}\|_2}\nabla_pL(E,p_t) \ge -(1-\epsilon)\frac{p_t^\top}{\|p_t\|_2}\nabla_pL(E,p_t), \quad \text{for all } t\ge\bar{t}(\epsilon). \quad (23)$$
Then, if $\lim_{t\to\infty}\frac{p_t}{\|p_t\|_2}$ exists, we have $\lim_{t\to\infty}\frac{p_t}{\|p_t\|_2}=\frac{\hat{p}}{\|\hat{p}\|_2}$.

Proof. By the definition of the gradient flow, (23) is equivalent to
$$\frac{\hat{p}^\top}{\|\hat{p}\|_2}\frac{dp_t}{dt} \ge (1-\epsilon)\frac{p_t^\top}{\|p_t\|_2}\frac{dp_t}{dt}.$$
We note that $\frac{p_t^\top}{\|p_t\|_2}\frac{dp_t}{dt}=\frac{d}{dt}\|p_t\|_2$. Thus, by integrating both sides over $[\bar{t}(\epsilon),t]$, we have
$$\frac{\hat{p}^\top}{\|\hat{p}\|_2}\big(p_t-p_{\bar{t}(\epsilon)}\big) \ge (1-\epsilon)\big(\|p_t\|_2-\|p_{\bar{t}(\epsilon)}\|_2\big),$$
which gives
$$\frac{\hat{p}^\top p_t}{\|\hat{p}\|_2\|p_t\|_2} \ge (1-\epsilon) - (1-\epsilon)\frac{\|p_{\bar{t}(\epsilon)}\|_2}{\|p_t\|_2} + \frac{\hat{p}^\top p_{\bar{t}(\epsilon)}}{\|\hat{p}\|_2\|p_t\|_2}.$$
Since $p_{\bar{t}(\epsilon)}$ and $\hat{p}$ have finite norm for fixed $\epsilon$, by taking the limit on both sides we obtain $\liminf_{t\to\infty}\frac{\hat{p}^\top p_t}{\|\hat{p}\|_2\|p_t\|_2}\ge 1-\epsilon$. As we assume that $\lim_{t\to\infty}\frac{p_t}{\|p_t\|_2}$ exists and the above argument holds for any $\epsilon$, we conclude that $\lim_{t\to\infty}\frac{p_t}{\|p_t\|_2}=\frac{\hat{p}}{\|\hat{p}\|_2}$.

Lemma A.4. Given a sequence $X$, model parameters $E,p,v$, and indices $i^*,j$ such that $x_{i^*}\in S_X(p)$, $x_j\in X\setminus S_X(p)$, the following results hold.

1. We have $\frac{1}{T}\le q_{i^*}\le 1$.

2.
If there exists $\tau>0$ such that $p^\top(E_{x_{i^*}}-E_{x_j})\ge\tau$ for all $x_{i^*}\in S_X(p)$, then we have $q_j\le\frac{1}{1+\exp(\tau)}$.

3. If there exists $\tau>0$ such that $p^\top(E_{x_{i^*}}-E_{x_j})\le\tau$ for all $x_{i^*}\in S_X(p)$, then we have $q_j\ge\frac{1}{T\exp(\tau)}$.

Proof. The upper bound on $q_{i^*}$ is trivial. For the lower bound:
$$q_{i^*} = \frac{\exp(p^\top E_{x_{i^*}})}{\exp(p^\top E_{x_{i^*}})+\sum_{j\ne i^*}\exp(p^\top E_{x_j})} \ge \frac{\exp(p^\top E_{x_{i^*}})}{T\exp(p^\top E_{x_{i^*}})} = \frac{1}{T}.$$
If there exists $\tau>0$ such that $p^\top(E_{x_{i^*}}-E_{x_j})\ge\tau$ for all $x_{i^*}\in S_X(p)$, then we have
$$q_j = \frac{1}{1+\sum_{i\ne j}\exp\big(p^\top(E_{x_i}-E_{x_j})\big)} \le \frac{1}{1+\exp\big(p^\top(E_{x_{i^*}}-E_{x_j})\big)} \le \frac{1}{1+\exp(\tau)}.$$
If there exists $\tau>0$ such that $p^\top(E_{x_{i^*}}-E_{x_j})\le\tau$ for all $x_{i^*}\in S_X(p)$, then we have
$$q_j = \frac{1}{1+\sum_{i\ne j}\exp\big(p^\top(E_{x_i}-E_{x_j})\big)} \ge \frac{1}{1+(T-1)\exp\big(p^\top(E_{x_{i^*}}-E_{x_j})\big)} \ge \frac{1}{T\exp(\tau)},$$
where the first inequality uses the definition of $S_X(p)$.

B Properties after the first gradient step

Lemma B.1 (Boundedness of the embeddings). For any $\delta>0$, let $d\ge\max\big(256,(2\log\frac{|S|^2}{\delta})^2\big)$; then, with probability at least $1-\delta$,
$$\max_{s\in S}\|E^1_s\|_2 \le 2(1+2\eta_0), \qquad \|p^1\|_2 \le 2+11\eta_0d^{-1/4}.$$
Proof. By using (22), we have that
$$\max_{s\in S}\|E^1_s\|_2 \le \max_s\Big(\|E^0_s\|_2+\frac{\eta_0}{2}\|v\|_2+\|\mathrm{err}_s\|_2\Big) \le 2+\frac{\eta_0}{2}+11\eta_0d^{-1/4} \le 2+4\eta_0,$$
and that
$$\|p^1\|_2 \le \|p^0\|_2+\|\mathrm{err}_p\|_2 \le 2+11\eta_0d^{-1/4}. \quad (24)$$

Lemma B.2 (Upper bound on the loss). For any $\delta>0$, let
$$d \ge \max\Big(256,\ \Big(2\log\frac{|S|^2}{\delta}\Big)^2,\ (88\eta_0^2+111\eta_0+2)^8\Big);$$
then, with probability at least $1-\delta$,
$$L(E^1,p^1) \le \hat{\mathbb{E}}\Bigg[\log\Bigg(1+\exp\Bigg(-\frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i}+\frac{\eta_0}{22}\Bigg)\Bigg)\Bigg].$$

Proof. We first lower bound $yf(X;p,E)$ for each pair $(X,y)$. After the first step, we have
$$\max_{s,s'}\big|(p^1)^\top(E^1_s-E^1_{s'})\big| = \max_{s,s'}\Big|(p^0)^\top(E^0_s-E^0_{s'}) + \frac{\eta_0}{2}(\alpha_s-\alpha_{s'})(p^0)^\top v + \mathrm{err}_p^\top(E^1_s-E^1_{s'}) + (\mathrm{err}_s-\mathrm{err}_{s'})^\top p^1\Big|.$$
We bound each term separately:
$$\max_{s,s'}|(p^0)^\top(E^0_s-E^0_{s'})| \le 2\max_s|(p^0)^\top E^0_s| \le 2d^{-1/4},$$
$$\frac{\eta_0}{2}|\alpha_s-\alpha_{s'}|\,|(p^0)^\top v| \le \eta_0|(p^0)^\top v| \le \eta_0d^{-1/4},$$
$$|\mathrm{err}_p^\top(E^1_s-E^1_{s'})| \le \|\mathrm{err}_p\|_2\|E^1_s-E^1_{s'}\|_2 \le 44\eta_0d^{-1/4}(1+2\eta_0),$$
$$|(\mathrm{err}_s-\mathrm{err}_{s'})^\top p^1| \le 2\|p^1\|_2\max_s\|\mathrm{err}_s\|_2 \le 22\eta_0d^{-1/4}\big(2+11\eta_0d^{-1/4}\big),$$
where we have used (22). By picking $d\ge(88\eta_0^2+111\eta_0+2)^8$, we get $\max_{s,s'}|(p^1)^\top(E^1_s-E^1_{s'})|\le d^{-1/8}$, which implies that, for any $X$ and any $i\in\{1,\ldots,T\}$,
$$\frac{1}{T}-\frac{2d^{-1/8}}{T} \le q_i(X) \le \frac{1}{T}+\frac{2d^{-1/8}}{T}.$$
Thus, we lower bound $yf(X;p,E)$ for each pair $(X,y)$ as
$$yf(X;p,E) = \sum_{i=1}^T q_i(X)\gamma_i(X) \ge \frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i} - \sum_{i=1}^T\frac{2d^{-1/8}}{T}\frac{\eta_0}{2}\alpha_{x_i} + \sum_{i=1}^T yq_i(X)v^\top\big(E^0_{x_i}+\mathrm{err}_{x_i}\big)$$
$$\ge \frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i} - d^{-1/8}\eta_0 - \big(1+2d^{-1/8}\big)\big|v^\top\big(E^0_{x_i}+\mathrm{err}_{x_i}\big)\big| \ge \frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i} - d^{-1/8}\eta_0 - 3(1+11\eta_0)d^{-1/4} \ge \frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i} - \frac{\eta_0}{22},$$
which allows us to conclude that
$$L(E^1,p^1) = \hat{\mathbb{E}}\big[\log\big(1+\exp(-yf(X;p,E))\big)\big] \le \hat{\mathbb{E}}\Bigg[\log\Bigg(1+\exp\Bigg(-\frac{1}{T}\sum_{i=1}^T\frac{\eta_0}{2}y\alpha_{x_i}+\frac{\eta_0}{22}\Bigg)\Bigg)\Bigg]. \quad (25)$$

C Proofs for Section 4

C.1 Proof of Lemma 4.1

For simplicity, in the proof we drop the time dependency in all the variables. By picking $d\ge(2\log\frac{|S|^2}{\delta})^2$, from (22) we have
$$\max\Big\{\max_{s\in S}|E_s^\top v|,\ \max_{s\in S}|E_s^\top p|,\ |p^\top v|\Big\} \le d^{-1/4}, \qquad \max\Big\{\max_{s\in S}\|E_s\|_2,\ \|p\|_2\Big\} \le 2.$$
Thus, at initialization, we have that, for all $s$, $\exp(-d^{-1/4})\le\exp(p^\top E_s)\le\exp(d^{-1/4})$, which implies that, for any sequence $X$ and any position $i$,
$$\frac{1}{T+2Td^{-1/4}} \le \frac{1}{1+(T-1)\exp(2d^{-1/4})} \le q_i(X) \le \frac{1}{1+(T-1)\exp(-2d^{-1/4})} \le \frac{1}{T-2Td^{-1/4}},$$
where we use the fact that, for $z\in[-1,1]$, $1-|z|\le\exp(z)\le 1+|z|$. Furthermore, for $d>256$ and for any sequence $(X,y)$, we have
$$\frac{1}{T}-\frac{4d^{-1/4}}{T} \le q_i(X) \le \frac{1}{T}+\frac{4d^{-1/4}}{T},$$
and
$$-2d^{-1/4} \le \frac{-Td^{-1/4}}{T-2Td^{-1/4}} \le yf(X;p,E) \le \frac{Td^{-1/4}}{T-2Td^{-1/4}} \le 2d^{-1/4}.$$
Then,
$$g(X,y) \le \frac{1}{1+\exp(-2d^{-1/4})} \le \frac{1}{2-2d^{-1/4}} \le \frac{1}{2}+d^{-1/4},$$
and similarly $g(X,y)\ge\frac{1}{2}-d^{-1/4}$.
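The initialization estimates just derived (softmax weights within $O(d^{-1/4})/T$ of uniform, and $g(X,y)$ within $d^{-1/4}$ of $1/2$) are easy to observe numerically. A minimal sketch; the Gaussian $N(0, I_d/d)$ initialization is an assumption consistent with the bounds in (22):

```python
import numpy as np

rng = np.random.default_rng(7)
d, T, S = 4096, 8, 50
# Gaussian stand-ins for the initialization: rows ~ N(0, I_d/d) (assumption,
# consistent with the norm and dot-product bounds in (22)).
E = rng.normal(size=(S, d)) / np.sqrt(d)
p = rng.normal(size=d) / np.sqrt(d)
v = rng.normal(size=d) / np.sqrt(d)

toks = rng.integers(0, S, size=T)        # a random sequence of T tokens
a = E[toks] @ p
q = np.exp(a - a.max()); q /= q.sum()    # softmax weights q_i(X)
f = q @ (E[toks] @ v)
g = 1.0 / (1.0 + np.exp(f))              # g(X, y) for y = +1

print(np.abs(q - 1 / T).max(), 4 * d ** (-0.25) / T)  # q_i nearly uniform
print(abs(g - 0.5), d ** (-0.25))                     # g close to 1/2
```

With $d = 4096$ the deviations are an order of magnitude below the theoretical bounds $4d^{-1/4}/T$ and $d^{-1/4}$.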
Now we look at the gradient update of the first step. By Lemma A.1, we have
$$-\nabla_{E_s}L(E,p) = \hat{\mathbb{E}}\Bigg[y\,g(X,y)\Bigg(\sum_{i=1}^T\Big(\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j\Big)E_{x_i}^\top v\,p + \sum_{i=1}^T\mathbf{1}_{x_i=s}q_iv\Bigg)\Bigg]$$
$$= \frac{1}{2T}\hat{\mathbb{E}}\Bigg[y\sum_{i=1}^T\mathbf{1}_{x_i=s}\Bigg]v + \frac{1}{2}\hat{\mathbb{E}}\Bigg[y\sum_{i=1}^T\mathbf{1}_{x_i=s}\Big(q_i-\frac{1}{T}\Big)\Bigg]v + \hat{\mathbb{E}}\Bigg[y\,g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j\Big)E_{x_i}^\top v\,p\Bigg] + \hat{\mathbb{E}}\Bigg[y\Big(g(X,y)-\frac{1}{2}\Big)\sum_{i=1}^T\mathbf{1}_{x_i=s}q_iv\Bigg],$$
$$-\nabla_pL(E,p) = \hat{\mathbb{E}}\Bigg[y\,g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}q_iq_j(E_{x_i}-E_{x_j})\Big)E_{x_i}^\top v\Bigg].$$
We note that
$$\frac{1}{2T}\hat{\mathbb{E}}\Bigg[y\sum_{i=1}^T\mathbf{1}_{x_i=s}\Bigg]v = \frac{1}{2}\alpha_sv,$$
and we bound the remaining error terms. We have that
$$\Bigg\|\frac{1}{2}\hat{\mathbb{E}}\Bigg[y\sum_{i=1}^T\mathbf{1}_{x_i=s}\Big(q_i-\frac{1}{T}\Big)\Bigg]v\Bigg\|_2 \le d^{-1/4},$$
and
$$\Bigg\|\hat{\mathbb{E}}\Bigg[y\,g(X,y)\sum_{i=1}^T\Big(\sum_{j\ne i}(\mathbf{1}_{x_i=s}-\mathbf{1}_{x_j=s})q_iq_j\Big)E_{x_i}^\top v\,p + y\Big(g(X,y)-\frac{1}{2}\Big)\sum_{i=1}^T\mathbf{1}_{x_i=s}q_iv\Bigg]\Bigg\|_2 \le 10d^{-1/4}.$$
Furthermore, we also have that $\|\nabla_pL(E,p)\|_2\le 8d^{-1/4}$. Thus, the desired claim follows.

C.2 Proof of Lemma 4.2

Proof. We first show that, if (12) is feasible, then the solution
is unique. Indeed, assume by contradiction that $p_1,p_2$ are two different solutions of (12). Clearly, $p_1$ and $p_2$ have the same norm, so $\frac{p_1^\top p_2}{\|p_1\|_2\|p_2\|_2}\ne 1$. Then, any convex combination of $p_1,p_2$ gives a feasible solution with a strictly smaller norm, which is a contradiction.

Next, we show that (12) is always feasible. To see this, by definition, there exists some $\tau$ such that
$$p_\circ^\top(E_s-E_{s'}) \ge \tau, \quad \forall s\in S_X(p_\circ),\ \forall s'\notin S_X(p_\circ),\ \forall X\in\mathcal{X}_n.$$
Then, $\frac{p_\circ}{\tau}$ is a feasible solution of (12), which concludes the proof of feasibility.

To characterize $p^*(p_\circ)$, we first note that (12) can be equivalently written as
$$\arg\min_p\frac{1}{2}\|p\|_2^2 \quad \text{s.t.} \quad p^\top(E_s-E_{s'})\ge 1,\ \forall s\in S_X(p_\circ),\ \forall s'\notin S_X(p_\circ),\ \forall X\in\mathcal{X}_n. \quad (26)$$
Now we characterize the solution of (26) explicitly. First of all, we can rewrite the constraints as $\mathbf{1}_N-Mp\le 0$. Then we can write the Lagrangian of (26) as
$$\mathcal{L}(p,\lambda) = \frac{1}{2}\|p\|_2^2 + \lambda^\top(\mathbf{1}_N-Mp),$$
where $\lambda\in\mathbb{R}^N$, and $p$ is a KKT point if
$$\nabla_p\mathcal{L}(p,\lambda) = p-M^\top\lambda = 0, \qquad \nabla_\lambda\mathcal{L}(p,\lambda) = \mathbf{1}_N-Mp = 0.$$
Since the objective function is convex and the constraints are affine, the global optimum is achieved at the KKT point, which satisfies $Mp=\mathbf{1}_N$. Thus, if there exists a $p$ satisfying this condition, we can rewrite (26) as
$$\arg\min_p\frac{1}{2}\|p\|_2^2 \quad \text{s.t.} \quad Mp=\mathbf{1}_N,$$
whose solution is $\hat{p}=M^\dagger\mathbf{1}_N$. It remains to show that there exists a feasible $p$. Since $d>|S|+2$, we have that, with high probability, $E^0$ is full rank. Furthermore, $E^1=E^0+\Delta$ and each row of $\Delta$ is in the subspace generated by $v$ and $p^0$. Thus, we can pick $\hat{p}\perp v,p^0$, so that $E^1\hat{p}=E^0\hat{p}$. Then, we define $a\in\mathbb{R}^{|S|}$ such that $a_i=1$ for all $i\in\bigcup_X S_X(p_\circ)$, and $a_i=0$ otherwise. Let $E^0\hat{p}=a$. Since $d>|S|$ and $E^0$ has full row rank, there exists a non-zero $\hat{p}$ that solves the above equation, which finishes the proof.

C.3 Proof of Theorem 4.3

We prove each part separately. We first show that $\lim_{t\to\infty}\|p_t\|_2=\infty$.

Lemma C.1. Under Assumption 1, for any $\delta>0$, by picking
$$d \ge \max\Big(256,\ \Big(2\log\frac{|S|^2}{\delta}\Big)^2,\ |S|+3\Big),$$
with probability at least $1-\delta$, we have $\lim_{t\to\infty}\|p_t\|_2=\infty$.
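The divergence asserted by Lemma C.1 can be illustrated by discretizing the gradient flow on a toy dataset satisfying Assumption 1. Everything below (the dimensions, the $\pm 2v$ shift mimicking the post-first-step embeddings of Lemma 4.1, and the Euler step size) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, n, S = 64, 6, 12, 10
v = rng.normal(size=d) / np.sqrt(d)
E = rng.normal(size=(S, d)) / np.sqrt(d)
# Assumption 1 toy setup: token 0 is completely positive, token 1 completely
# negative, tokens 2..S-1 are irrelevant fillers.
E[0] += 2 * v
E[1] -= 2 * v
data = []
for k in range(n):
    y = 1 if k % 2 == 0 else -1          # balanced labels
    toks = rng.integers(2, S, size=T)    # irrelevant fillers
    toks[rng.integers(T)] = 0 if y == 1 else 1  # exactly one relevant token
    data.append((toks, y))

def grad_p(p):
    g = np.zeros(d)
    for toks, y in data:
        Ex = E[toks]
        a = Ex @ p
        q = np.exp(a - a.max()); q /= q.sum()
        f = q @ (Ex @ v)
        gw = 1.0 / (1.0 + np.exp(y * f))
        # grad_p f = sum_i q_i (E_{x_i} . v) (E_{x_i} - sum_j q_j E_{x_j})
        g += -y * gw * ((Ex - q @ Ex).T @ (q * (Ex @ v)))
    return g / n

p = np.zeros(d)
norms = []
for _ in range(2000):
    p -= 0.5 * grad_p(p)                 # Euler discretization of the flow
    norms.append(np.linalg.norm(p))
print(norms[99], norms[-1])              # the norm keeps growing
# The relevant token ends up receiving the largest attention score.
selected = sum(int(np.argmax(E[toks] @ p) == np.argmax(toks < 2))
               for toks, y in data)
print(selected, "/", n)
```

Since the logistic tail never vanishes along the good direction, there is no finite stationary point and $\|p_t\|_2$ grows without bound, while the single relevant token of each sequence is selected, matching Lemmas C.1 and C.2.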
Proof. It is sufficient to show that there exists a non-zero finite-norm $\hat{p}$ such that, for any finite-norm $p$, $\hat{p}^\top\nabla_pL(E^1,p)\ne 0$. Indeed, this condition means that there is no stationary point for any finite-norm $p$. For gradient flow, we have that $\lim_{t\to\infty}\nabla_pL(E^1,p_t)=0$, which by contradiction implies the desired result.

Now we construct such a $\hat{p}$. Since $d>|S|+2$, we have that, with high probability, $E^0$ is full rank. Furthermore, $E^1=E^0+\Delta$ and each row of $\Delta$ is in the subspace generated by $v$ and $p^0$. Thus, we can pick $\hat{p}\perp v,p^0$, so that $E^1\hat{p}=E^0\hat{p}$. Without loss of generality, let $x_1$ be an important token in a positive sequence $X_k$, i.e., $\gamma_1(X_k)\ge\frac{\eta_0}{4nT}$. Then, we define $a\in\mathbb{R}^{|S|}$ such that $a_1=1$ and $a_i=0$ for all $i\ne 1$. Let $E^0\hat{p}=a$. Since $d>|S|$ and $E^0$ has full row rank, there exists a non-zero $\hat{p}$ that solves this equation. By Lemma A.2, we have that, for any $p$,
$$-\hat{p}^\top\nabla_pL(E^1,p) = \hat{\mathbb{E}}\Bigg[g(X,y)\sum_{i=1}^T\sum_{j>i}(\hat{a}_i-\hat{a}_j)q_iq_j(\gamma_i-\gamma_j)\Bigg] = g(X_k,y_k)\sum_{j>1}q_1(X_k)q_j(X_k)\frac{\eta_0}{4nT} > 0,$$
which concludes the proof.

Next, we show that, if the directional limit exists, then it must select all completely positive/negative tokens.

Lemma C.2. Under Assumption 1, for any $\delta>0$, by picking $\eta_0\ge 4n^2T^2$ and
$$d \ge \max\Big(256,\ \Big(2\log\frac{|S|^2}{\delta}\Big)^2,\ (88\eta_0^2+111\eta_0+2)^8\Big),$$
with probability
https://arxiv.org/abs/2505.17282v1
at least $1 - \delta$, if $p_\infty = \lim_{t \to \infty} \frac{p_t}{\|p_t\|_2}$ exists, then $p_\infty$ satisfies $s^X_* \in S_X(p_\infty)$ for all $X \in \mathcal{X}_n$, where $s^X_*$ denotes the unique completely positive/negative token in the sequence $X$.

Proof. We prove the lemma by contradiction. W.l.o.g., assume by contradiction that there exists $X \in \mathcal{X}_n$ containing the important token $x_1$ s.t. $x_1 \notin S_X(p_\infty)$. We show that there exists $\bar{t}$ such that, for all $t \ge \bar{t}$,
\[
\mathcal{L}(E_1, p_t) > \mathcal{L}(E_1, p_1),
\]
which contradicts the fact that the gradient flow always decreases the loss. To see this, we first note that, by the definition of $S_X(p_\infty)$, there exists some $\tau > 0$ independent of $t$ such that
\[
\min_{j \neq 1} p_\infty^\top (E_{x_1} - E_{x_j}) = -\tau .
\]
W.l.o.g., we assume that $x_2$ is the token that achieves the minimum. As $\lim_{t \to \infty} \|p_t\|_2 = \infty$ and $\lim_{t \to \infty} \frac{p_t}{\|p_t\|_2} = p_\infty$, we have that, for any $\mu > 0$ and $R > 0$, there exists a large enough $\bar{t}$ such that
\[
\|p_t\|_2 \ge 2R, \qquad \left\| \frac{p_t}{\|p_t\|_2} - p_\infty \right\|_2 \le \mu, \qquad \text{for all } t \ge \bar{t} .
\]
Thus, we have:
\[
\frac{p_t^\top}{\|p_t\|_2} (E_{x_1} - E_{x_2}) = p_\infty^\top (E_{x_1} - E_{x_2}) + \left( \frac{p_t}{\|p_t\|_2} - p_\infty \right)^\top (E_{x_1} - E_{x_2}) \le -\tau + 2\mu (4\eta_0 + 2)^2,
\]
where we have used the result of Lemma B.1. Thus, by picking $\mu = \frac{\tau}{4(4\eta_0 + 2)^2}$, we have
\[
\frac{p_t^\top}{\|p_t\|_2} (E_{x_1} - E_{x_2}) \le -\frac{\tau}{2},
\]
which implies that $p_t^\top (E_{x_1} - E_{x_2}) \le -\tau R$. Next, we upper bound $y f(X; p_t, E_1)$. We first note that
\[
\frac{q_1}{q_2} = \exp\left( p_t^\top (E_{x_1} - E_{x_2}) \right) \le \exp(-\tau R),
\]
which gives $q_1 \le \exp(-\tau R)$. Note that
\[
y f(X; p_t, E_1) = \sum_{i=1}^{T} q_i \gamma_i \le \exp(-\tau R)\, \gamma_1 + \max_{j \neq 1} \gamma_j \le \exp(-\tau R) \left( \frac{\eta_0}{2} + (1 + 11\eta_0) d^{-\frac{1}{4}} \right) + (1 + 11\eta_0) d^{-\frac{1}{4}} .
\]
Thus, by picking $R \ge \frac{\log d}{4\tau}$, we have
\[
y f(X; p_t, E_1) \le \left( \frac{3}{2} + \frac{23}{2}\eta_0 \right) d^{-\frac{1}{4}} \le \frac{3}{4} d^{-\frac{1}{8}},
\]
which implies a lower bound on the loss:
\[
\mathcal{L}(E_1, p_t) \ge \frac{1}{n} \log\left( 1 + \exp\left( -y f(X; p_t, E_1) \right) \right) \ge \frac{1}{n} \log\left( 1 + \exp\left( -\frac{3}{4} d^{-\frac{1}{8}} \right) \right) \ge \frac{1}{2n}, \tag{27}
\]
where we used that $d \ge 256$ in the last passage.

Under Assumption 1, by Lemma 4.1, we have that $y \alpha_{x_i} \ge 1/(nT)$ if $x_i$ is either the completely positive or the completely negative token in $X$, and otherwise $y \alpha_{x_i} = 0$. Hence, given that each sequence $X$ contains a completely positive or negative token, we have that
\[
\frac{1}{T} \sum_{i=1}^{T} y \alpha_{x_i} \ge \frac{1}{n T^2} .
\]
As $\eta_0 > 4n^2 T^2 > \sqrt{2n T^2 / 11}$, by applying Lemma B.2, we obtain
\[
\mathcal{L}(E_1, p_1) \le \log\left( 1 + \exp\left( -\frac{\eta_0}{4 n T^2} \right) \right) \le \log(1 + \exp(-n)) \le \exp(-n) < \frac{1}{2n},
\]
which gives a contradiction and concludes the proof.
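The bound on $q_1$ in the proof above rests on the standard softmax identity $q_i / q_j = \exp\!\big(p_t^\top (E_{x_i} - E_{x_j})\big)$: the ratio of two attention weights depends only on the gap between their logits. A minimal numeric check of that identity, with made-up logits unrelated to the paper's construction:

```python
import math

def softmax(z):
    m = max(z)                                  # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

logits = [2.0, 0.5, -1.0]                       # stand-in values of p^T E_x per token
q = softmax(logits)

# The ratio of two softmax weights equals the exponential of the logit gap,
# so a logit gap of -tau*R forces q_1 <= exp(-tau*R) * q_2 <= exp(-tau*R).
assert abs(q[0] / q[1] - math.exp(logits[0] - logits[1])) < 1e-12
```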
Finally, we show that, for each possible selection, if $p_t$ converges in direction, then it must converge to the max-margin solution. In particular, we first prove the following lemma, which gives an approximation to the directional gradient of the locally optimal selection. To do so, we define the secondary selection set and the locally optimal selection as follows:

Definition C.3. Given a vector $p$, for each sequence $X$, denote by $S^2_X(p)$ the secondary selection set given by
\[
S^2_X(p) = \arg\max_{s \notin S_X(p)} p^\top E_s . \tag{28}
\]
We also denote by $S^<_X(p)$ the set of tokens that are not chosen in the first or in the second place, i.e.,
\[
S^<_X(p) = X \setminus \left( S_X(p) \cup S^2_X(p) \right) . \tag{29}
\]

Definition C.4. Given a vector $p$, we say that $p$ is locally optimal if, for every $(X, y)$ pair, we have
\[
\sum_{i \in S_X(p)} (\gamma_i(X, y) - \gamma_j(X, y)) \ge \mu > 0, \quad \text{for all } j \in S^2_X(p),
\]
for some constant $\mu$ that does not depend on $p$.

In the definition above and for the rest of this appendix, to help readability, we will abuse notation by letting indices (e.g., $i, j$ above) also denote the corresponding tokens (e.g., $x_i, x_j$ above).

Lemma C.5.
Letpbe a unit-norm vector and p=Rpfor some positive constant R.Suppose pis a locally optimal direction as defined in Definition C.4 with some µthat does not depends on R.Moreover, suppose there exists a constant τ1that may depend on p, η0, n, T, dbut not on R,such that: min X{p⊤(Es−Es′),∀s∈ SX(p),∀s′∈ S2 X(p)} ≥τ1, min X{p⊤(Es−Es′),∀s∈ S2 X(p),∀s′∈ S< X(p)} ≥τ1.(30) Then, for any ϵ >0, for any ˆp≊psuch that ∥ˆp∥2does not depend on Rand min X{ˆp⊤(Es−Es′),∀s∈ SX(ˆp),∀s′∈X\ SX(ˆp)} ≥τ2, there exists Rlarge enough such that: −ˆp⊤∇L(E1, p)≤(1 +ϵ)bE X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p ) , −ˆp⊤∇L(E1, p)≥(1−ϵ)bE X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p ) , wherebai(X) = ˆp⊤Exi,baj(X) = ˆp⊤Exjand hi,j(X, y, p ) =g(X, y)qi(X)qj(X)(γi(X, y)−γj(X, y)). Proof.By Lemma A.2, we can write the directional gradient as follows: −bp⊤∇pL(E1, p) =bE TX i=1X j>i(bai(X)−baj(X))hi,j(X, y, p ))  =bE X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p )  (B0) +bE X i∈SX(p)X j∈S< X(p)(bai(X)−baj(X))hi,j(X, y, p ))  (B1) +bE X i∈X\SX(p)X j>i:j∈X\SX(p)(bai(X)−baj(X))hi,j(X, y, p )) . (B2) The rest of the proof is to show that −C1exp(−τ1R)(B0)≤(B1)≤C1exp(−τ1R)(B0), −C2exp(−τ1R)(B0)≤(B2)≤C2exp(−τ1R)(B0), for some C1, C2>0that do not depend on R.Then, by taking Rlarge enough, we obtain the desired result. First, we simplify (B0). Note that, for all i, i0∈ SX(p), we have that bai(X) =bai0(X). Hence, by switching the order of i, j, we obtain X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p ) =X j∈S2 X(p)(bai0(X)−baj(X))X i∈SX(p)hi,j(X, y, p ) =g(X, y)X j∈S2 X(p)(bai0(X)−baj(X))qi0(X)qj(X)X i∈SX(p)(γi(X, y)−γj(X, y)), for any i0∈ SX(p). Since pis a locally optimal direction, we have X i∈SX(p)(γi(X, y)−γj(X, y))≥µ,for all j∈ S2 X(p). 23 Now, we compare (B1) and (B0). 
By the exact same reason above, we can rewrite X i∈SX(p)X j∈S< X(p)(bai(X)−baj(X))hi,j(X, y, p ) =g(X, y)X j∈S< X(p)(bai0(X)−baj(X))qi0(X)qj(X)X i∈SX(p)(γi(X, y)−γj(X, y)), for any i0∈ SX(p), and we compare to (B0) term-by-term. Namely, for any X,j∈ S2 X(p)andk∈ S< X(p),we have: |bai0(X)−bak(X)| bai0(X)−baj(X)≤∥ˆp∥2∥Exi0−Exj∥2 τ2≤2∥ˆp∥2max s∥Es∥2 τ2:=C3, (31) qk(X) qj(X)= exp( ak(X)−aj(X))≤exp(−τ1R), (32) P i∈SX(p)|γi(X, y)−γk(X, y)| P i∈SX(p)(γi(X, y)−γj(X, y))≤2Tmax s|γs| µ≤2Tmax s∥Es∥2 µ:=C4, (33) which implies that, for any X,j∈ S2 X(p)andk∈ S< X(p), |bai0(X)−bak(X)|qi0(X)qk(X)X i∈SX(p)|γi(X, y)−γk(X, y)| ≤exp(−τ1R)C3C4(bai0(X)−baj(X))qi0(X)qj(X)X i∈SX(p)(γi(X, y)−γj(X, y)). Thus, we get that: |(B1)| ≤exp(−τ1R)TC3C4|(B0)|. Next, we compare (B2)and(B0). Take any i′∈X\ SX(p), k > i′∈X\ SX(p), i0∈ SX(p), j∈ S2 X(p). We compare (bai′(X)−bak(X))hi′,k(X, y, p ) with each term in (B1). We note that the bounds onbai′(X)−bak(X) bai0(X)−baj(X)and|γi′(X)−γk(X)|P i∈SX(p)(γi(X,y)−γj(X,y))are the same as those in (31) and (33). Furthermore, qi′qk qi0qj≤exp(−τ1R), which gives that |(B2)| ≤T2exp(−τ1R)C3C4|(B0)|, thus concluding the proof. Lemma C.6. Under Assumption 1, for any δ >0,by picking η0≥4n2T2, d≥max( 256, 2 log|S|2 δ2 ,(88η2 0+ 111 η0+ 2)8) , with probability ≥1−δover the initialization, if p∞= lim t→∞pt ∥pt∥2exists, then p∞∈ P∗(p∞). Proof.We prove the lemma by contradiction. We first assume that there exists p∞such that p∞/∈ P∗(p∞)and p∞= lim t→∞pt ∥pt∥2.Then, we show that there exists bp∈ P∗(p∞)such that, for any ϵ >0, there is ¯t(ϵ)ensuring −bp⊤ ∥bp∥2∇pL(E1, pt)≥ −(1−ϵ)p⊤ t ∥pt∥2∇pL(E1, pt),for all t≥¯t(ϵ). As a consequence, by Lemma A.3, we have that p∞=bp ∥bp∥2,which gives a contradiction. For the
rest of the proof, we fix any ϵ >0and denote R=∥pt∥2. We define pt=pt∥bp∥2 ∥pt∥2,and we equivalently show that: −bp⊤∇pL(E1, pt)≥ −(1−ϵ)pt∇pL(E1, pt). (34) 24 To prove this, we first note that since p∞/∈ P∗(p∞),for allbp ∥bp∥2∈ P∗(p∞),there exists τ0independent of Rsuch that ∥bp−p∞∥bp∥2∥2≥τ0. Thus, by the definition of P∗(p∞), there exists X0⊆ X nsuch that for each sequence X∈ X0, we can find a pair of indices (i, j)with i∈ SX(p∞), j∈X\ SX(p∞)violating the margin, i.e., (∥bp∥2p∞)⊤(Exi−Exj)≤1−3τ, for some τ <1 6that does not depend on R.With a slight abuse of notation, we define τas τ=1 3min{min X∈X0{1−(∥bp∥2p∞)⊤(Exi−Exj), i∈ SX(p∞), j∈ S2 X(p∞)}, min X∈Xn{(∥bp∥2p∞)⊤(Exi−Exj), i∈ SX(p∞), j∈ S2 X(p∞)}, min X∈Xn{(∥bp∥2p∞)⊤(Exi−Exj), i∈ S2 X(p∞), j∈ S< X(p∞)}}. This means that, for all X∈ Xnand for all (i, j)pairs such that i∈ SX(p∞), j∈ S2 X(p∞), we have (∥bp∥2p∞)⊤(Exi−Exj)≥3τ; for all pairs (i, j)such that i∈ S2 X(p∞), j∈ S< X(p∞), we have (∥bp∥2p∞)⊤(Exi−Exj)≥3τ; and for all X∈ X0, i∈ SX(p∞), j∈ S2 X(p∞),we have (∥bp∥2p∞)⊤(Exi−Exj)≤1−3τ, with some τthat does not depend on R. Now, we compute the overlap with pt.For all Xand(i, j), we have pt⊤(Exi−Exj) = (∥bp∥2p∞)⊤(Exi−Exj) + (pt− ∥bp∥2p∞)⊤(Exi−Exj). We upper bound |(pt− ∥bp∥2p∞)⊤(Exi−Exj)| ≤ ∥bp∥2 pt ∥pt∥2−p∞ 2∥Ex1−Ex2∥2, and since ∥bp∥2,∥Ex1−Ex2∥2are finite, we have lim t→∞|(pt− ∥bp∥2p∞)⊤(Exi−Exj)|= 0. Thus, we can pick t1,such that for t≥t1,we have |(pt− ∥bp∥2p∞)⊤(Exi−Exj)| ≤τ, which implies that, for all X∈ Xnand for all (i, j)pairs such that i∈ SX(p∞), j∈ S2 X(p∞), we have pt⊤(Exi−Exj)≥τ; for all (i, j)pairs such that i∈ S2 X(p∞), j∈ S< X(p∞), we have pt⊤(Exi−Exj)≥τ; and for all X∈ X0, i∈ SX(p∞), j∈ S2 X(p∞),we have: pt⊤(Exi−Exj)≤1−τ, for some τthat does not depend on R. 25 Next, we show that ptis a locally optimal solution as per Definition C.4. By Lemma C.1, p∞selects all the completely positive/negative tokens. Thus, as pt≊p∞,ptalso selects such tokens, the rest being irrelevant by Assumption 1. 
Hence, for any pair (X, y)and for any j∈X\ SX(pt),we have: X i∈SX(pt)(γi(X, y)−γj(X, y))≥η0 4nT, by picking dlarge enough (as per the hypothesis of the lemma). By construction, ˆp≊pt,∥ˆp∥2does not depends on R and, moreover, for any X, ˆp⊤(Exi−Exj)≥1,for all i∈ SX(ˆp), j∈X\ SX(ˆp). By applying Lemma C.5 on both ˆpandpt,we have that for any ϵ1>0there exist t2s.t. for all t≥max{t1, t2},we have −ˆp⊤∇pL(E1, pt)≥(1−ϵ1)bE X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p t) , −pt⊤∇pL(E1, pt)≤(1 +ϵ1)bE X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t) , where ai(X),aj(X)are defined analogously to bai(X),baj(X)by replacing ˆpwith pt. Now, we further show that, for any ϵ2>0,there exist t3such that for all t≥t3, bE X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t)  ≤(1 +ϵ2)bEX∈X0 X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t) . To see this, we use the same idea as in the proof of Lemma C.5. We can write bE X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t)  =bEX∈X0 X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t)  (A0) +bEX′∈Xn\X0 X i∈SX′(p)X j∈S2 X′(p)(ai(X′)−aj(X′))hi,j(X′, y′, pt) , (A1) and it is sufficient to show that (A1)≤ϵ2(A0). To prove this, we compare term-by-term. Let X∈ X0, X′∈ Xn\ X0, j∈ S2
X(pt), j′∈ S2 X(pt),and recall that: X i∈SX(pt)(ai(X)−aj(X))hi,j(X, y, p t) =g(X, y)(ai0(X)−aj(X))qi0(X)qj(X)X i∈SX(pt)(γi(X, y)−γj(X, y)), X i∈SX′(pt)(ai(X′)−aj′(X′))hi,j′(X′, y′, pt) =g(X′, y′)(ai1(X′)−aj′(X′))qi1(X′)qj′(X′)X i∈SX′(pt)(γi(X′, y′)−γj′(X′, y′)), 26 for any i0∈ SX(pt), i1∈ SX′(pt). Note that g(X′, y′) g(X, y)≤max X,yg(X, y) minX,yg(X, y)≤max X,y(1 + exp( yf(X)))≤(1 + exp( η0)) := C5. By using the same argument as in (31) and (33), we have ai1(X′)−aj′(X′) ai0(X)−aj(X)≤C3, P i∈SX′(pt)(γi(X′, y′)−γj′(X′, y′)) P i∈SX(pt)(γi(X, y)−γj(X, y))≤C4. Finally, we need to upper bound: qi1(X′)qj′(X′) qi0(X)qj(X). We note that ai1(X′)−aj′(X′)≥R/∥ˆp∥2, ai0(X)−aj(X)≤(1−τ)R/∥ˆp∥2, where ai(X) =p⊤ tExi.Thus by Lemma A.4, we have: qi0(X)≥1 T, q j(X)≥1 Texp((1 −τ)R/∥ˆp∥2), q i1(X′)≤1, q j′(X′)≤1 exp(R/∥ˆp∥2), which implies that qi1(X′)qj′(X′) qi0(X)qj(X)≤T2exp(−τR/∥ˆp∥2). Thus, for each X∈ X0, X′∈ Xn\ X0, j∈ S2 X(pt), j′∈ S2 X(pt), we have X i∈SX′(p)(ai(X′)−aj′(X′))hi,j′(X′, y′, pt)≤C6exp(−τR/∥ˆp∥2)X i∈SX(p)(ai(X)−aj(X))hi,j(X, y, p t). Thus by picking large enough t3which gives large enough R,we have: (A1)≤ϵ2(A0). This allows us to conclude that −ˆp⊤∇pL(E1, pt)≥(1−ϵ1)bE X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p t)  ≥(1−ϵ1)bEX∈X0 X i∈SX(p)X j∈S2 X(p)(bai(X)−baj(X))hi,j(X, y, p t) , −pt⊤∇pL(E1, pt)≤(1 +ϵ1)(1 + ϵ2)bEX∈X0 X i∈SX(p)X j∈S2 X(p)(ai(X)−aj(X))hi,j(X, y, p t) . Note that, for each X∈ X0, bai(X)−baj(X)≥1, ai(X)−aj(X)≤1−τ, which gives that −ˆp⊤∇pL(E1, pt)≥ −1−ϵ1 (1 +ϵ1)(1 + ϵ2)(1−τ)pt⊤∇pL(E1, pt). Since ϵ1, ϵ2can be arbitrarily small, the proof is complete. 27 C.4 Proof of Lemma 4.5 Proof.Letˆp′be the max-margin solution of (12)with a different selection. By Theorem 4.3, we have that, for all X, sX ∗∈ SX(ˆp′). We denote by iX ∗the index of sX ∗. Assume by contradiction p∞=ˆp′ ∥ˆp′∥2. We will now show that this implies the following statement: for any ϵ >0, there is a t(ϵ)ensuring −ˆp⊤ ∥ˆp∥2∇pL(E, pt)≥ −(1−ϵ)p⊤ t ∥pt∥2∇pL(E, pt),for all t≥t(ϵ). 
(35) Then, by Lemma A.3, we have that p∞=bp ∥bp∥2,which gives a contradiction. As in the proof of Lemma C.6, we define pt=pt ∥pt∥2∥ˆp∥2. Thus, (35) is equivalent to −ˆp⊤∇pL(E, pt)≥ −(1−ϵ)pt⊤∇pL(E, pt). First of all, since ˆp,ˆp′are two max-margin solutions, by Lemma 4.2 we have that all the constraints are tight, that is: ˆp⊤(EsX∗−Es) = 1 ,∀s∈X\sX ∗,∀X∈ Xn, (ˆp′)⊤(Es−Es′) = 1 ,∀s∈ SX(ˆp′),∀s′∈X\ SX(ˆp′),∀X∈ Xn, which implies that ˆp′⊤∥ˆp∥2 ∥ˆp′∥2(Es−Es′) =∥ˆp∥2 ∥ˆp′∥2= 1−µ <1,∀s∈ SX(ˆp′),∀s′∈X\ SX(ˆp′),∀X∈ Xn. As∥pt∥2=∥ˆp∥2<∥ˆp′∥2,ptviolates the max-margin condition. Moreover, as limt→∞pt=ˆp′⊤∥ˆp∥2 ∥ˆp′∥2, for any ϵ1∈(0, µ), there exists a t1ensuring the following for all t≥t1: for all (i, j)pairs such that i∈ SX(p∞), j∈ S2 X(p∞), we have pt⊤(Exi−Exj)≤1−µ+ϵ1≤1. By applying Lemma C.5 to pt, we obtain that, for any ϵ2>0,there exists a t2ensuring that, for all t≥t2, −pt⊤∇pL(E1, pt)≤(1 +ϵ2)bE g(X, y)X i∈SX(pt)X j∈S2 X(pt)(ai(X)−aj(X)qi(X)qj(X)(γi(X)−γj(X)))  = (1 + ϵ2)bE g(X, y)X j∈S2 X(pt)(aiX∗(X)−aj(X)qiX∗(X)qj(X)(γiX∗(X)−γj(X)))  + (1 + ϵ2)bE g(X, y)X i∈SX(pt),i̸=iX∗X j∈S2 X(pt)(ai(X)−aj(X)qi(X)qj(X)(γi(X)−γj(X))) . We then compute by Lemma A.2 that −ˆp⊤∇pL(E1, pt) =bE g(X, y)X j∈X\SX(pt)(daiX∗(X)−baj(X))qiX∗(X)qj(X)(γiX∗(X)−γj(X)))  ≥bE g(X, y)X j∈S2 X(pt)(daiX∗(X)−baj(X))qiX∗(X)qj(X)(γiX∗(X)−γj(X)))  = (1−µ)bE g(X, y)X j∈S2 X(pt)(daiX∗(X)−baj(X))qiX∗(X)qj(X)(γiX∗(X)−γj(X)))  +µbE g(X, y)X j∈S2 X(pt)(daiX∗(X)−baj(X))qiX∗(X)qj(X)(γiX∗(X)−γj(X))) , 28 where in the first equality we use the fact that, for all j, j′̸=iX ∗,caj′(X)−baj(X) = 0 ,and in the second inequality we use the fact that all the terms in the summand are positive.
We note that:
\[
(1 + \epsilon_2)\, \widehat{\mathbb{E}}\left[ g(X, y) \sum_{j \in S^2_X(\bar{p}_t)} (a_{i^X_*}(X) - a_j(X))\, q_{i^X_*}(X)\, q_j(X)\, (\gamma_{i^X_*}(X) - \gamma_j(X)) \right] < (1 - \mu)\, \widehat{\mathbb{E}}\left[ g(X, y) \sum_{j \in S^2_X(\bar{p}_t)} (\widehat{a}_{i^X_*}(X) - \widehat{a}_j(X))\, q_{i^X_*}(X)\, q_j(X)\, (\gamma_{i^X_*}(X) - \gamma_j(X)) \right],
\]
as
\[
\widehat{a}_{i^X_*}(X) - \widehat{a}_j(X) = 1, \qquad a_{i^X_*}(X) - a_j(X) \le 1 - \mu - \epsilon_1 .
\]
It remains to show that
\[
\mu\, \widehat{\mathbb{E}}\left[ g(X, y) \sum_{j \in S^2_X(\bar{p}_t)} (\widehat{a}_{i^X_*}(X) - \widehat{a}_j(X))\, q_{i^X_*}(X)\, q_j(X)\, (\gamma_{i^X_*}(X) - \gamma_j(X)) \right] \ge (1 + \epsilon_2)\, \widehat{\mathbb{E}}\left[ g(X, y) \sum_{i \in S_X(\bar{p}_t) : i \neq i^X_*} \sum_{j \in S^2_X(\bar{p}_t)} (a_i(X) - a_j(X))\, q_i(X)\, q_j(X)\, (\gamma_i(X) - \gamma_j(X)) \right]. \tag{36}
\]
We have that, for each $i \in S_X(\bar{p}_t) : i \neq i^X_*$ and $j \in S^2_X(\bar{p}_t)$,
\[
|\gamma_i(X) - \gamma_j(X)| \le 2 d^{-1/4}, \qquad \gamma_{i^X_*}(X) - \gamma_j(X) \ge \frac{\eta_0}{4} .
\]
As $d \ge \left( \frac{8T(1 - \mu)}{\mu \eta_0} \right)^4$, (36) holds and the proof is complete.

D Details of numerical experiments

For all numerical simulations, we use the AdamW optimizer from torch.optim, and we reduce the learning rate in a multiplicative fashion by a factor $\gamma = 0.1$ at epochs 100 and 200, i.e., $\mathrm{LR}_{\text{new}} = \mathrm{LR}_{\text{old}} \cdot \gamma$. We use a batch size of 128 and fix the embedding dimension to 2048.

IMDB and Yelp datasets. The hyperparameters do not differ between the two-layer model and the one-layer model. We set the number of training epochs to 500, the learning rate to 0.01, and the weight decay to $10^{-8}$.

Synthetic data. We set the number of training epochs to 196, the learning rate to $10^{-4}$, and the weight decay to $10^{-4}$.
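The schedule described above (multiplicative decay by a factor $\gamma = 0.1$ at epochs 100 and 200, matching the semantics of torch.optim.lr_scheduler.MultiStepLR) can be sketched as a plain function; the name lr_at_epoch is ours:

```python
def lr_at_epoch(epoch, base_lr=0.01, gamma=0.1, milestones=(100, 200)):
    """Step decay: LR_new = LR_old * gamma at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# With base_lr = 0.01: epochs 0-99 use 0.01, epochs 100-199 use 0.001,
# and epochs 200 onward use 0.0001.
```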
SELF: Self-Extend the Context Length With Logistic Growth Function

Phat Thanh Dang*1, Saahil Thoppay*1, Wang Yang1, Qifan Wang2, Vipin Chaudhary1, Xiaotian Han1
1Case Western Reserve University  2Meta
{ptd18,svt21,wxy320,vxc204,xhan}@case.edu  wqfcr@meta.com

Abstract

Large language models suffer issues when operated on contexts longer than their training context length, due to the standard position encoding for tokens in the attention layer. Tokens a long distance apart will rarely have an effect on each other, and long prompts yield unexpected results. To solve this problem, we propose SELF (Self-Extend the Context Length With Logistic Growth Function): a solution that groups consecutive tokens at varying group sizes using a logistic capacity equation, combined with a constant group size at smaller relative distances. Our model had an increase in performance of up to 12% compared to the LongLM extension method on LEval (specifically on the Qwen model). On summarization-related tasks in LongBench, our model performed up to 6.4% better than LongLM (specifically on the Llama-2-7b model). On reading comprehension tasks from LEval, our model performed up to 5.4% better than LongLM. Our code is available at https://github.com/alexeipc/SELF-LLM .

1 Introduction

Large language models (LLMs) are typically pretrained on sequences with fixed maximum context lengths (e.g., 2k–4k tokens), limiting their ability to reason over or generate responses based on longer inputs. When the context length exceeds the pretraining context length, the output is severely degraded and can become unreadable and undecipherable (Xiao et al., 2024b; Peng et al., 2023a; Han et al., 2024a; Chen et al., 2023b; Xiong et al., 2023). The main reason why the output is unpredictable when dealing with long context is the out-of-distribution (O.O.D.) issue of the relative positions for LLMs using RoPE (Liu et al., 2023; Bai et al., 2021; Zhang et al., 2022a).
*Equal contributions.

Figure 1: Illustration of our method for extending context length. Given a sequence of length n larger than the training sequence length, the model groups consecutive tokens into groups whose sizes are determined by a function, with the help of the neighbor window. As a result, the greatest index is now m < n, and the sequence can now fit fully within the model's scope.

When encountering relative distances on which models were not trained, the model seems to generate unpredictable output vectors that cannot be decoded by the tokenizer. The most intuitive fix is to fine-tune the models to extend the context windows, which needs high-quality long-context data and comes with a trade-off in the performance on short-context tasks (Chen et al., 2023b; Zhu et al., 2024). Thus, there exist some training-free methods. For example, Jin et al. (2024a) introduced Self-Extend, which leverages the model's inherent ability to generalize to out-of-distribution (O.O.D.) contexts by remapping untrained relative distances to those observed during training. This is done by grouping
https://arxiv.org/abs/2505.17296v1
consecutive tokens into fixed-size chunks, combined with a neighbor window for nearby tokens.

While LongLM's method shows promising results on long-context tasks, we propose a more adaptive strategy grounded in the observation that, in natural language, the relevance of a token typically decreases with its distance from the current context. This suggests that distant tokens can be grouped into larger units without significantly harming comprehension. Based on this intuition, we introduce a dynamic grouping strategy where group sizes increase with distance from the query.

arXiv:2505.17296v1 [cs.CL] 22 May 2025

Figure 2: Illustration of the relation between the neighbor window and perplexity after applying Self-Extend (Jin et al., 2024a). The results are derived from testing Llama-2-7B and its Self-Extend variants on the first book in PG19 (Rae et al., 2019) with sequences of 2048 tokens. The perplexity of models applying Self-Extend slowly approaches the perplexity of the original model as the neighbor window size increases.

Unlike fixed-size chunking, our approach determines group boundaries through a distance-aware function, allowing for more efficient use of the context length while preserving semantic fidelity. Thus, we introduce SELF (Self-Extend the Context Length With Logistic Growth Function), a more adaptive token grouping strategy that dynamically adjusts group boundaries based on context structure rather than relying on fixed-size chunks. This allows for better capture of long-range dependencies and finer preservation of semantic boundaries at different distances. In essence, our method addresses the O.O.D. challenge through a shared principle but differs in the way token groups are constructed (see Figure 1 for illustration).¹
Through comprehensive experimental results, we witness an increase of up to 8% in accuracy (specifically on the Qwen2-7B model (Yang et al., 2024)) when applying our grouping method compared to Self-Extend (Jin et al., 2024a) when benchmarking on LEval. We also witnessed an accuracy increase of up to 5% when using SELF over LongLM on Llama-2-7B (Touvron et al., 2023b).

2 Motivations

LongLM (Jin et al., 2024a) proposes a solution for handling prompts that are longer than the models' pretraining sequence lengths: it groups tokens at far distances, because there the exact position is less important than the relative order of information, and it keeps the exact positions of closer tokens via a neighbor window. However, the output's perplexity will increase right after where the neighbor window ends. As a result, to make the model less "confused", the neighbor window has to be increased (see Figure 2), which decreases the total number of tokens that models can handle after applying Self-Extend.

¹This is an oversimplification of how the method works. More details will be explained in the Our Proposal section.

Although LongLM's method yielded significantly improved results in key retrieval, the LongBench benchmark (Bai et al., 2024a), and the LEval benchmark (An et al., 2023), we believe that we can increase the total context length by leveraging a property of natural language: the farther a word is from
the current token, the less important it tends to be to the current context. In LongLM (Jin et al., 2024a), the group size is the same for every group, which is not the most optimized choice. Intuitively, in natural languages, the further away a token is, the less relevant it is to the context, allowing us to group distant tokens into progressively larger groups. This means that we can improve the total context length by allowing larger groups without a significant trade-off in the model's comprehension ability. By this intuition, the group sizes have to be dynamic, which means that they have to be determined by a function.

Therefore, we need to choose a monotonically increasing function for group sizes (the further, the larger the group). However, if the group size is too large, it will affect performance, because every word is treated the same regardless of its position. Therefore, we must choose a function whose maximum value is limited and controllable. Based on those conditions, we choose the logistic growth function, a monotonically increasing function with a defined maximum value. Because the group sizes have to be integers, we take the floor of the logistic growth function:

f(x) = ⌊C e^{rx} / (C + e^{rx} − 1)⌋,

where C is the capacity, i.e., the maximum group size, and r is the growth rate of the group sizes.

3 Preliminaries

3.1 Position Encoding

Most models use one of two types of position encoding: relative and absolute. Relative position encoding utilizes the distance between one token and another, while absolute position encoding uses the token's position counted from 0 (Vaswani et al., 2023). Since the importance of words is usually based on how far away they are from the base word, relative position encoding is more common. Examples of absolute encoding include GPT-3 with learned positional encoding (Brown et al., 2020) and OPT (Zhang et al., 2022b).
Examples of relative encoding include T5 with learnable attention bias (Xue et al., 2021), Transformer-XL (Dai et al., 2019), and ALiBi with fixed linear attention (Press et al., 2022). This is especially important when it comes to long-context prompts, as our LLM might need to consider tokens far away as still being important. These position encodings are applied at the attention layer so that, when tokens are interconnected with each other, the positions are taken into account. Our goal is to design a mechanism where we can consider tokens far apart in our decision making while also holding closer tokens to a high importance. Considering the example of long-context key retrieval, we need to consider the close-by tokens (instructions) to a high degree but also ensure the key (at a far-away position) is considered.

3.2 RoPE

Consider tokens a_1 ... a_n and their embeddings x_1 ... x_n, where each embedding is a real matrix. RoPE (Su et al., 2023) integrates the position information into the query and key vectors. If done properly, q^⊤k will already contain the positional embeddings, preventing an extra step from being needed. To embed the position, RoPE uses the functions q_m = f_q(x_m, m) ∈ R^{|D|} and k_n = f_k(x_n, n) ∈ R^{|D|}, where |D| is the
hidden dimension of each head. Specifically,

f_q(x_m, m) = W_q x_m e^{imθ},  f_k(x_n, n) = W_k x_n e^{inθ},  θ_d = b^{−2d/|D|}.

The positional embedding system keeps the real part of q^⊤k, which is Re(q*k). The dot product of the query and key will always yield a result that depends on the relative distance between the two tokens, as follows:

⟨f_q(x_m, m), f_k(x_n, n)⟩_R = Re(⟨f_q(x_m, m), f_k(x_n, n)⟩_C) = Re(x*_m W*_q W*_k x_n e^{iθ(m−n)}) = g(x_m, x_n, m − n),   (1)

where g is an abstract mapping function.

4 Our Proposal

4.1 Self-Extend with constant group size

Self-Extend (Jin et al., 2024a) maps unseen relative positions to trained relative positions by using the FLOOR operation to group neighboring tokens into one single group that shares the same positional index. Their important finding is the importance of neighbor attention: by just purely grouping tokens together, the perplexity will be higher than in the original model. Grouping all tokens with a constant group size degrades the effect of closer tokens, which usually have more importance. To solve this problem, LongLM uses separate grouped attention, for tokens further away, and neighbor attention, for nearby tokens. Acknowledging this, our method will also apply neighbor attention.

4.2 SELF: Self-Extend with dynamic group size

Despite successfully tricking the LLM into believing that the tokens are closer than they really are, LongLM's approach abruptly increases the group size from 1 (within the neighbor window, the group size can be regarded as 1) to a much larger value (the chosen group size, e.g., 512). To avoid this sudden jump, we propose that group sizes should increase gradually rather than all at once. More specifically, the group size should grow according to a smooth function such as the logistic growth function, which starts small and increases steadily. Based on this idea, we propose a new method called SELF (Self-Extend the Context Length With Logistic Growth Function).

In SELF, we use a function f : N → N to determine the size of each group.
Given a group index (like the 0th or 1st group), this function returns the number of tokens assigned to that group.

Example 1: Given a function f with f(0) = 1, f(1) = 2, f(2) = 2, f(3) = 3, and f(4) = 3, the grouping will be:

F = [0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4]

Let us define:
• G^K : N → N, the group position index used in the encoding of the key-value pairs:

G^K_i = F_i   (2)

• G^Q : N → N, the group position index used in the encoding of the query.
• R : N × N → N, the relative distance between G^Q and G^K.

The relative position right after the neighbor window (R_{i,i−W}) has to be W, where W is the width
Figure 3: Illustration of the relation between G^K and G^Q, knowing that the relative position right after the neighbor window has to be W.

of the neighbor window, because the relative positions inside the neighbor window range from 0 to W − 1 (see the illustration in Figure 3). Therefore,

G^Q_i = W + G^K_{i−W} if i ≥ W, and G^Q_i = c otherwise.   (3)

No matter what constant c we choose, it will be completely covered by the neighbor window. If we used only group attention, the number of tokens could be fully extended to Σ_{i=1}^{L} f(i). However, because we have the neighbor window, R_{n,n−W} = W instead of R_{n,n−W} = max(F) − F_W; that is, it takes W + F_W − max(F) more indices than using only group attention. Therefore, the number of tokens can be extended to

L′ = Σ_{i=1}^{L + max(F) − W − F_W} f(i),

where L is the initial token limit. This formula gives the total number of tokens that can be processed using our SELF method, which blends regular and group-based attention in a way that grows group sizes smoothly and avoids any sudden jumps that could disrupt the model.

4.3 Efficient Implementation: grouping indices in parallel

The most naive approach to calculating F given f is to start with an empty F, then sequentially compute f(i) and append f(i) more elements to the end of F:

F ← F ∥ replicate(i, f(i))

However, since we have thousands of tokens, computing the new positional embeddings sequentially would take O(n) run time.

Algorithm 1 Construct group indices(n, W, C, r)
  p ← −1
  for k ← 1 to C − 1 do
    compute F[p+1 .. p+g(k)] in parallel using Equation (6)
    p ← p + g(k)
  end for
  compute G^K in parallel using Equation (2)
  compute G^Q in parallel using Equation (3)
  return group_key_id and group_query_id

To solve this high computing-time issue, we use the inverse of the grouping function, which divides each sequence into sections in which the group sizes are the same, so that we can easily calculate and assign group sizes in parallel.
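The naive sequential construction of F described above can be sketched together with the logistic growth group-size function from Section 2 (the values C = 4 and r = 0.5 are illustrative, not from the paper):

```python
import math

def f(x, C=4, r=0.5):
    """Logistic-growth group size, floor(C*e^{rx} / (C + e^{rx} - 1)),
    rewritten as C / (1 + (C-1)*e^{-rx}) for numerical stability."""
    return math.floor(C / (1 + (C - 1) * math.exp(-r * x)))

def build_group_ids(n, size_fn):
    """Naive O(n) construction of F: group index i is repeated size_fn(i) times."""
    F, i = [], 0
    while len(F) < n:
        F.extend([i] * size_fn(i))
        i += 1
    return F[:n]

F = build_group_ids(20, f)
# Group sizes start at f(0) = 1 and grow smoothly toward the capacity C,
# so F is non-decreasing and no group exceeds C tokens.
```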
We define the inverse function f^{−1} : N → N: given a group size, it returns the smallest group index that has the given size. Using Example 1, the inverse function gives f^{−1}(1) = 0, f^{−1}(2) = 1, and f^{−1}(3) = 3.

Let us define the function g : N → N: given a group size x, it returns the total number of elements that lie in groups of size x (the number of such groups, times x):

g(x) = x · [f^{−1}(x + 1) − f^{−1}(x)]   (4)

To find F_i for a given i, define k as the largest number such that

S = Σ_{j=1}^{k} g(j) < i   (5)

This means that F_i has to be in a group of size k + 1, and it is ⌈(i − S)/(k + 1)⌉ indices away from the last index among the groups of size k, which is f^{−1}(k + 1) − 1. Therefore,

F_i = [f^{−1}(k + 1) − 1] + ⌈(i − S)/(k + 1)⌉   (6)

Now, considering the logistic growth function, we have:

f(x) = ⌊C e^{rx} / (C + e^{rx} − 1)⌋,
f^{−1}(y) = ⌊(ln(Cy − y) − ln(C − y)) / r⌋.

For the logistic growth function, the maximum group size k in Equation (5) is C, which is very small compared to the sequence length. We can utilize GPU parallelism using the
pseudo-code.

Figure 4: Illustration of the algorithm grouping the indices using the function f : N → N, where f(0) = 1, f(1) = 2, f(2) = 2, f(3) = 3, and f(4) = 3. The sequence of length n = 11 was run on a model with a pretraining sequence length of L = 6. The numbers denote the relative position between the corresponding key and query token. There are two kinds of self-attention, similar to Self-Extend (Jin et al., 2024a): neighbor tokens inside the neighbor window (W = 3; blue cells in the figure) use regular self-attention; group tokens outside the neighbor window (orange cells in the figure) use group self-attention (group indices are denoted in the G row and column of the figure). A green G^Q means it can be anything, as it is completely covered by the neighbor window.

By applying Equation (4), g(k) can be computed in O(1). We can easily tell that the total work for computing F is O(n + C), and the total work for computing G^K and G^Q knowing F is O(n), since it takes O(1) at each index. Putting this together, the total work for the algorithm is O(n + C) and the parallel span is O(C). Assuming that there are P threads available, the runtime is bounded by:

T(P) = O(max{(n + C)/P, C})

This means that, given sufficient resources (P large enough), we can achieve a near-linear speedup, since the lower bound is C, which is usually a very small number.

5 Experiments

In this section, we first analyze the impact of group size on the SELF method based on perplexity results from the PG19 dataset, in order to identify an appropriate group size configuration. We then compare SELF with the standard SE method on real-world long-context benchmarks such as LongBench and LEval, demonstrating the effectiveness of SELF on practical long-context tasks. We ran experiments on Llama-2-7B, Llama-2-13B (Touvron et al., 2023a), Qwen-7b (Yang et al., 2024), and a distilled reasoning model from Deepseek-R1 (DeepSeek-AI et al., 2025).
We conducted our tests on LongBench v1 (Bai et al., 2024a) and LEval (An et al., 2023)².

²We skipped the GSM benchmark as we were unable to

5.1 Understanding the impact of group size on SELF via Perplexity

We begin by measuring the perplexity of LLaMA-2-7B-Chat with both SE and SELF under varying group sizes and context window lengths on the PG19 dataset. From Table 1, we can observe:

• SELF achieves lower perplexity scores when working with the same group size.
• When the group size is small, there is not much difference in perplexity between SE and SELF.
• The larger the group size, the longer the sequence lengths over which noticeable differences can be observed. For example, the difference is still noticeable when C = 32 and the sequence length is 6144, and when C = 64 and the sequence length is 12288. With a larger group size, it takes a longer sequence to reach the maximum group size, meaning that most of the groups have sizes below the maximum, decreasing the final perplexity score.
• When dealing with sequences that are significantly longer than the
original context length, the scores are basically the same for SE and SELF. When the sequence is significantly long, the number of intermediate groups is negligible compared to the number of groups that have reached the maximum group size. As a result, the model's behavior closely resembles that of SE, where all groups have the maximum group size.

²(cont.) replicate the results the paper provided on our own.

Table 1: Perplexity on the first book of the PG19 dataset (Rae et al., 2019) with Llama-2-7b-chat, comparing SE and SELF (growth rate r = 0.02) with the same group size (C = 16, C = 32, and C = 64) and neighbor window.

Context window size:   4096   6144   8192   10240  12288  14336  16384
Llama-2-7b-chat        7.231  >10^3  >10^3  >10^3  >10^3  >10^3  >10^3
C = 16
SE-Llama-2-7b-chat     7.103  7.086  7.126  7.174  7.229  7.248  7.270
SELF-Llama-2-7b-chat   7.085  7.085  7.122  7.168  7.203  7.234  7.270
C = 32
SE-Llama-2-7b-chat     7.141  7.184  7.199  7.314  7.346  7.410  7.496
SELF-Llama-2-7b-chat   7.119  7.133  7.196  7.275  7.345  7.408  7.484
C = 64
SE-Llama-2-7b-chat     7.186  7.316  7.303  7.458  7.530  7.625  8.041
SELF-Llama-2-7b-chat   7.135  7.180  7.267  7.364  7.467  7.619  8.068

From the above observations, there is a trade-off in perplexity when increasing the group size (Jin et al., 2024a). When using a larger group size, models are more uncertain in their predictions. However, when dealing with the same group size³, models with SELF ideally have lower perplexity than ones with SE because, instead of jumping rapidly from one to the maximum group size, models with SELF pass through smaller group sizes (intermediate groups) before reaching the maximum group size. Therefore, although the extended context lengths of both methods are approximately the same for the same group size, SELF performs better when the sequence length is not excessively larger than the original context window.
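To make the intermediate-group effect concrete, the grouping scheme can be sketched in a few lines of Python. This is a minimal sketch: the capacity C, growth rate r, and sequence length below are illustrative choices, and group assignment is done by direct iteration over group sizes rather than the paper's O(1) closed form.

```python
import math

def logistic_group_size(j, C, r):
    """f(j): size of group j under the logistic growth function."""
    e = math.exp(r * j)
    return math.floor(C * e / (C + e - 1))

def self_group_indices(n, C, r):
    """Assign relative positions 0..n-1 to groups whose sizes grow
    logistically from 1 toward the capacity C (the SELF scheme)."""
    groups, j = [], 0
    while len(groups) < n:
        groups.extend([j] * logistic_group_size(j, C, r))
        j += 1
    return groups[:n]

def se_group_indices(n, G):
    """Plain Self-Extend (SE): one fixed group size G for every position."""
    return [p // G for p in range(n)]

n = 12
se = se_group_indices(n, G=4)              # group sizes jump straight to 4
self_ = self_group_indices(n, C=4, r=0.5)  # group sizes ramp up: 1, 1, 1, 2, 2, 3, ...
```

With these toy parameters, SE assigns positions to three groups of four, while SELF passes through several intermediate groups of sizes 1 and 2 before approaching the capacity, which is exactly the mechanism the perplexity discussion above attributes the lower scores to.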
Based on this analysis, we decided to choose larger group sizes for most of our experiments compared to those in LongLM (Jin et al., 2024a), without experiencing a significant trade-off in performance.

5.2 Comparisons of SELF and SE in Real-World Long Context Tasks

5.2.1 LongBench

We conducted experiments on LongBench (Bai et al., 2024a) using Llama-2-7B and then compared our results with the original model and the model with Self-Extend applied. Here, instead of using small group sizes of 6 and 8 as in LongLM (Jin et al., 2024a), we used a much bigger group size (C = 32) and still observed better results on most tasks. The results are in Table 2. We see an improvement on summarization tasks because having to go through smaller group sizes gives the models a better understanding of the text. Moreover, our suspicion that SELF tends to perform better when the length is closer to the original context

³Group size in the context of SELF refers to the maximum group size, i.e., the capacity in the logistic growth function.

Figure 5: Comparison of the trade-off with different group sizes of SE and SELF on 2WikiMultihopQA. The
two grouping methods have the same neighbor window size W = 1024.

window is also confirmed, since SELF performs better on tasks where the average context length is not significantly long, such as MultiFieldQA, 2WikiMultihopQA, HotpotQA, and TREC.

We can also observe the accuracy trade-off trend on the 2WikiMultihopQA task (see Figure 5). Unlike SE, whose changes are more abrupt and unpredictable even though the general trend is still decreasing (this can also be observed when SE is applied to other tasks with varying group sizes (Jin et al., 2024a)), SELF's accuracy slowly decreases after reaching its peak. In SE, group indices can be very different for different group sizes. In contrast, SELF's group sizes increase gradually, resulting in overlapping early groups. As a result, the differences between varying group sizes are relatively subtle compared to SE.

5.2.2 LEval

We tested 4 different models using the same hyperparameters as in the LongBench v1 test (see the results in Table 3).

• Llama-2-7B: An improvement can be seen in every task except CodeU. However, the dif-

Table 2: Performance of different models on LongBench (Bai et al., 2024a). * means that the results are reported by Self-Extend (Jin et al., 2024a), * means that the results are run by us (single run). The suffix number (e.g., '25k') indicates the maximum context window of the model. The 'SE' prefix indicates that Self-Extend is applied to this model, and the 'SELF' prefix indicates that our Self-Extend with Logistic Growth Function is applied.
The best performance in each section is in bold.

Columns: Single-Document QA (NarrativeQA, Qasper, MultiField-en); Multi-Document QA (HotpotQA, 2WikiMQA, Musique); Summarization (GovReport, QMSum, MultiNews); Few-shot Learning (TREC, TriviaQA, SAMSum); Synthetic (PassageCount, PassageRetrieval); Code (Lcc, RepoBench-P).

SE vs SELF:
Llama-2-7B-chat-4k*         18.7  19.2  36.8  25.4  32.8  9.4   27.3  20.8  25.8  61.5  77.8  40.7  2.1   9.8   52.4  43.8
SE-Llama-2-7B-chat-16k*     21.69 25.02 35.21 34.34 30.24 14.13 27.32 21.35 25.78 69.50 81.99 40.96 5.66  5.83  60.60 54.33
SE-Llama-2-7B-chat-25k*     21.37 26.68 34.63 35.47 30.46 15.51 27.51 21.30 25.87 68.50 78.79 41.29 3.90  3.50  59.69 53.83
SELF-Llama-2-7B-chat-100k*  17.4  25.74 37.61 36.30 31.37 13.11 27.9  21.81 27.54 69.50 76.97 40.85 6.16  6.0   60.49 51.55

Other Methods:
LongChat1.5-7B-32k*         16.9  27.7  41.4  31.5  20.6  9.7   30.8  22.7  26.4  63.5  82.3  34.2  1.0   30.5  53.0  55.3
together/llama-2-7b-32k*    15.65 10.49 33.43 12.36 12.53 6.19  29.28 17.18 22.12 71.0  87.79 43.78 1.0   23.0  63.79 61.77
CLEX-7B-16k*                18.05 23.68 44.62 28.44 19.53 9.15  32.52 22.9  25.55 68    84.92 42.82 0     11.5  59.01 56.87
CodeLLaMA-7B-16k*           22.93 30.69 43.37 33.05 27.93 14.2  28.43 24.18 26.84 70    84.97 43.43 2     13.5  64.35 55.87
SE-Llama-2-7B-chat-16k*     21.69 25.02 35.21 34.34 30.24 14.13 27.32 21.35 25.78 69.50 81.99 40.96 5.66  5.83  60.60 54.33
SE-Llama-2-7B-chat-25k*     21.37 26.68 34.63 35.47 30.46 15.51 27.51 21.30 25.87 68.50 78.79 41.29 3.90  3.50  59.69 53.83
SELF-Llama-2-7B-chat-100k*  17.4  25.74 37.61 36.30 31.37 13.11 27.9  21.81 27.54 69.50 76.97 40.85 6.16  6.0   60.49 51.55

Fixed Models:
GPT-3.5-Turbo-16k*          23.6  43.3  52.3  51.6  37.7  26.9  29.5  23.4  26.7  68.0  91.4  41.7  4.5   71.0  54.7  53.6
XGen-7B-8k*                 18    18.1  37.7  29.7  21.1  10.3  27.3  20.5  26.2  65.5  77.8  25.3  2.1   8.5   38.6  38.6
InternLM-7B-8k*             12.1  16.7  23.4  28.7  22.8  9.0   9.7   15.9  22.8  52.0
77.8  21.2  3.0  6.0  44.1  28.8
ChatGLM2-6B-32k*            21.1  31.5  46.2  45.1  34.0  21.9  32.4  24.0  26.5  62.5  78.7  36.3  1.5   77.0  55.6  49.9
ChatGLM3-6B-32k*            26.0  43.3  51.7  54.4  44.9  40.4  36.8  23.9  27.9  79.0  87.1  38.2  2.0   99.0  57.66 54.76
Baichuan-13B-4k*            0.07  17.55 17.28 3.29  15    0.1   6.8   1.71  23.1  20.05 20.06 5.77  0.06  0.5   47.98 16.58
ALiBi-7B-4k*                0.04  8.13  17.87 2.73  8     1.33  5.31  1.64  25.55 9.25  8.83  4.67  0     1.27  46.69 18.54

ference in CodeU is not significant, as the best models could answer only 1 question correctly, compared to 0 for the model that applies SELF.

• Llama-2-13B: the SELF version seems to perform worse than the SE version on all tasks. We could not determine the reason for this difference, given that Llama-2-7B and Llama-2-13B have the same architecture.

• Qwen-7B: There is a significant improvement compared to SE. There is an improvement compared to the raw model, but it is not significant.

• DeepSeek-R1-Distill-Qwen-7B⁴: For this reasoning model, we forced the model to reason before giving an answer by adding the open tag <think>. A significant decrease in accuracy can be observed after applying Self-Extend (both SE and SELF). We suspect that the reinforcement learning process used to improve reasoning ability influences this effect, as reinforcement learning was applied on exact positions and the model's reasoning ability was not optimized for group attention. We also witnessed that models applying SE and SELF often got stuck in reasoning loops, which did not happen with the original model. However, this needs to be researched further before drawing conclusions.

⁴We modified the evaluation function for MCQ. After the reasoning process, models often start their conclusion with "Answer:", in which case the original code (An et al., 2023) would assume the answer to be "A" because "A" is the first capitalized letter.

6 Related Works

Long context models.
Many recent large language models support extended context lengths, such as GPT-4 (Achiam et al., 2023), Claude, Qwen (Yang et al., 2025a), LLaMA (Grattafiori et al., 2024), and Phi (Abdin et al., 2024). Notably, models like Qwen2.5-7B-Instruct-1M (Team, 2025; Yang et al., 2025b) and Llama-3.1-Nemotron-8B-UltraLong-1M-Instruct (Xu et al., 2025) are capable of handling context windows of up to 1 million tokens, enabling long-range reasoning across extremely lengthy inputs. Several long-context models are optimized for long queries. LongChat (Li et al., 2023a; Bai et al., 2024b) is a long-context LLM designed for long-context conversations; it is evaluated against its own long-context testing suite and is trained with a context size of 32K. CLEX (Chen et al., 2023a) is a long-context LLM that uses differential equations to scale positional embeddings to better support longer prompts. CodeLlama (Rozière et al., 2024) is an LLM based on Llama 2 optimized for long-context prompt performance; it is trained on a longer context length of 16K.

Long context extension methods. Most models increase the context length through fine-tuning, which still does
not solve the problem of attention having minimal effect at large relative distances. To solve this problem, other extension methods use a similar system in which the position encodings modify the relative positions.

Table 3: Performance comparison of Llama2-7B, Llama2-13B, Qwen-7B, and DeepSeek-R1-Distill-Qwen-7B before and after applying SE and SELF on LEval (An et al., 2023). The table also includes the performance of other fixed models on LEval. * means that the results are reported by LongLM (Jin et al., 2024a), * means that the results are run by us (single run). The best performance among the original, SE, and SELF is in bold.

Model                              Coursera  TOEFL  QuALITY  CodeU  SFiction  Avg.
Llama-2-7b-chat*                   29.21     51.67  37.62    1.11   60.15     35.95
SE-Llama-2-7b-chat*                35.76     55.39  41.09    1.11   57.81     38.23
SELF-Llama-2-7b-chat*              36.19     56.88  41.09    0.00   60.94     39.02
Llama-2-13b-chat*                  35.75     60.96  42.57    1.11   60.15     40.11
SE-Llama-2-13b-chat*               38.95     66.17  41.09    1.11   60.15     41.49
SELF-Llama-2-13b-chat*             37.93     64.31  39.11    0.00   57.03     39.68
Qwen-7B*                           52.18     79.18  65.35    0.00   63.28     52.00
SE-Qwen-7B*                        53.20     78.07  59.41    0.00   57.03     49.54
SELF-Qwen-7B*                      53.34     80.67  66.83    4.44   62.5      53.56
Reasoning Model
DeepSeek-R1-Distill-Qwen-7B*       58.43     66.54  48.01    2.22   60.16     47.07
SE-DeepSeek-R1-Distill-Qwen-7B*    54.21     66.17  40.59    6.66   62.4      45.81
SELF-DeepSeek-R1-Distill-Qwen-7B*  40.27     58.74  37.5     1.11   50.78     37.68
Fixed Models
Claude1.3-100k*                    60.03     83.64  73.76    17.77  72.65     65.97
GPT-4-32k                          75.58     84.38  82.17    25.55  74.99     73.11
Turbo-16k-0613*                    63.51     78.43  61.38    12.22  64.84     60.73
Chatglm2-6b-8k*                    43.75     53.90  40.59    2.22   54.68     34.69
XGen-7b-8k (2k-4k-8k)*             26.59     44.23  35.15    1.11   48.43     26.41
Chatglm2-6b-8k*                    42.15     54.64  44.05    2.22   54.68     35.95
Chatglm2-6b-32k*                   47.81     55.01  45.04    2.22   57.02     39.01
XGen-7b-8k*                        29.06     42.37  33.66    3.33   41.40     27.63
MPT-7b-65k*                        25.23     17.84  25.24    0.00   39.06     19.22

Models with different context extension methods and their performance are mentioned in the section above, alongside specific model performance.
These include RoPE-based techniques such as Position Interpolation (PI) (Chen et al., 2023c), NTK (Peng and Quesnelle, 2023), YaRN (Peng et al., 2023b), and Self-Extend (Jin et al., 2024b); attention-architecture-based methods such as StreamingLLM (Xiao et al., 2023), LM-Infinite (Han et al., 2024b), LongLoRA (Chen et al., 2023d), Inf-LLM (Xiao et al., 2024a), and Landmark (Mohtashami and Jaggi, 2023); as well as retrieval- and compression-based approaches such as Retrievers (Xu et al., 2023), LongLLMLingua (Jiang et al., 2023), and context compression (Li et al., 2023b).

7 Conclusion

We successfully implemented group attention with a custom function and applied it with a logistic growth function. From our analysis, we conclude that SELF works better than SE when dealing with the same capacity, and that SELF's behavior when increasing the group size is more predictable. Our logistic capacity model for grouping tokens yielded minor to major increases in most tests across LEval⁵. The method performed best on Llama-2-7B and Qwen-7B. On LongBench, our method performed better across most tests or saw only minor decreases in performance. By grouping tokens using a combined constant and logistic growth positional embedding layer, we allow the LLM to consider tokens at a far distance while keeping nearby tokens more relevant. SELF increases LLM prompt performance without sacrificing runtime performance nor
modifying the prompt.

⁵With the exception of CodeU, a test where all methods performed poorly.

Limitations

• Although SE and SELF theoretically have basically the same runtime complexity, SELF requires more complicated computations, such as ln instead of just FLOOR. As a result, running SELF takes longer than running SE.
• The grouping method was not tested on a wide variety of LLMs (only on Llama2-7B, Llama2-13B, Qwen-7B, and DeepSeek-R1-Distill-Qwen-7B) nor on a wide variety of benchmarks.
• On reasoning models, with SELF the model would sometimes get stuck in a loop while thinking, causing unpredictable answers. This led to degraded performance compared to the raw model and SE.
• SELF still struggled on the CodeU benchmark compared to other models and would sometimes produce nonsensical outputs.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, and 1 others. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Chenxin An, Shansan Gong, Ming Zhong, Xingjian Zhao, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. 2023. L-Eval: Instituting standardized evaluation for long context language models. Preprint, arXiv:2307.11088.

Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. 2021. Recent advances in adversarial training for adversarial robustness. Preprint, arXiv:2102.01356.

Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024a. LongBench: A bilingual, multitask benchmark for long context understanding.
Preprint, arXiv:2308.14508.

Yushi Bai, Shangqing Tu, Jiajie Zhang, Hao Peng, Xiaozhi Wang, Xin Lv, Shulin Cao, Jiazheng Xu, Lei Hou, Yuxiao Dong, and 1 others. 2024b. LongBench v2: Towards deeper understanding and reasoning on realistic long-context multitasks. arXiv preprint arXiv:2412.15204.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. Language models are few-shot learners. Preprint, arXiv:2005.14165.

Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, and Lidong Bing. 2023a. CLEX: Continuous length extrapolation for large language models. arXiv preprint arXiv:2310.16450.

Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023b. Extending context window of large language models via positional interpolation. Preprint, arXiv:2306.15595.

Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023c. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595.

Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023d. LongLoRA: Efficient fine-tuning of long-context large language models. arXiv preprint arXiv:2309.12307.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov.
2019. Transformer-XL: Attentive language models beyond a fixed-length context. Preprint, arXiv:1901.02860.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint, arXiv:2501.12948.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024a. LM-Infinite: Zero-shot extreme length generalization for large language models. Preprint, arXiv:2308.16137.

Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024b. LM-Infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008.

Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. LongLLMLingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839.

Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. 2024a. LLM maybe LongLM: Self-extend LLM context window without tuning. Preprint, arXiv:2401.01325.

Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. 2024b. LLM maybe LongLM: Self-extend LLM context window without tuning. Preprint, arXiv:2401.01325.

Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E.
Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. 2023a. How long can open-source LLMs truly promise on context length?

Yucheng Li, Bo Dong, Chenghua Lin, and Frank Guerin. 2023b. Compressing context to enhance inference efficiency of large language models. arXiv preprint arXiv:2310.06201.

Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, and Peng Cui. 2023. Towards out-of-distribution generalization: A survey. Preprint, arXiv:2108.13624.

Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. arXiv preprint arXiv:2305.16300.

Bowen Peng and Jeffrey Quesnelle. 2023. NTK-aware scaled RoPE allows Llama models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.

Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023a. YaRN: Efficient context window extension of large language models. Preprint, arXiv:2309.00071.

Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023b. YaRN: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071.

Ofir Press, Noah A. Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. Preprint, arXiv:2108.12409.

Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint.

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, and 7 others. 2024. Code Llama: Open foundation models for code. Preprint, arXiv:2308.12950.

Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2023. RoFormer: Enhanced transformer with rotary position embedding. Preprint, arXiv:2104.09864.

Qwen Team. 2025. Qwen2.5-1M: Deploy your own Qwen with context length up to 1M tokens.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023a. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and 1 others. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention is all you need. Preprint, arXiv:1706.03762.

Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2024a. InfLLM: Training-free long-context extrapolation for LLMs with an efficient context memory. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024b.
Efficient streaming language models with attention sinks. Preprint, arXiv:2309.17453.

Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, and 2 others. 2023. Effective long-context scaling of foundation models. Preprint, arXiv:2309.16039.

Chejian Xu, Wei Ping, Peng Xu, Zihan Liu, Boxin Wang, Mohammad Shoeybi, and Bryan Catanzaro. 2025. From 128k to 4M: Efficient training of ultra-long context large language models. arXiv preprint.

Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Retrieval meets long context large language models. arXiv preprint arXiv:2310.03025.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. Preprint, arXiv:2010.11934.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and 1 others. 2025a. Qwen3 technical report. arXiv preprint arXiv:2505.09388.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, and Jialong Tang... 2024. Qwen2 technical report. Preprint
, arXiv:2407.10671.

An Yang, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoyan Huang, Jiandong Jiang, Jianhong Tu, Jianwei Zhang, Jingren Zhou, Junyang Lin, Kai Dang, Kexin Yang, Le Yu, Mei Li, Minmin Sun, Qin Zhu, Rui Men, Tao He, and 9 others. 2025b. Qwen2.5-1M technical report. arXiv preprint arXiv:2501.15383.

Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, and Pingkun Yan. 2022a. When neural networks fail to generalize? A model sensitivity perspective. Preprint, arXiv:2212.00850.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022b. OPT: Open pre-trained transformer language models. Preprint, arXiv:2205.01068.

Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 2024. PoSE: Efficient context window extension of LLMs via positional skip-wise training. Preprint, arXiv:2309.10400.
arXiv:2505.17306v1 [cs.CL] 22 May 2025

Refusal Direction is Universal Across Safety-Aligned Languages

Xinpeng Wang∗1,2, Mingyang Wang∗1,2,3, Yihong Liu∗1,2, Hinrich Schütze1,2, Barbara Plank1,2
1LMU Munich  2Munich Center for Machine Learning  3Bosch BCAI
{xinpeng, mingyang, yihong, bplank}@cis.lmu.de

Abstract

Refusal mechanisms in large language models (LLMs) are essential for ensuring safety. Recent research has revealed that refusal behavior can be mediated by a single direction in activation space, enabling targeted interventions to bypass refusals. While this has primarily been demonstrated in an English-centric context, appropriate refusal behavior is important for any language, yet remains poorly understood. In this paper, we investigate refusal behavior in LLMs across 14 languages using PolyRefuse, a multilingual safety dataset created by translating malicious and benign English prompts into these languages. We uncover the surprising cross-lingual universality of the refusal direction: a vector extracted from English can bypass refusals in other languages with near-perfect effectiveness, without any additional fine-tuning. Even more remarkably, refusal directions derived from any safety-aligned language transfer seamlessly to others. We attribute this transferability to the parallelism of refusal vectors across languages in the embedding space and identify the underlying mechanism behind cross-lingual jailbreaks. These findings provide actionable insights for building more robust multilingual safety defenses and pave the way for a deeper mechanistic understanding of cross-lingual vulnerabilities in LLMs.¹

1 Introduction

LLMs are increasingly deployed across a wide range of real-world applications [Kaddour et al., 2023, Yang et al., 2024, Raza et al., 2025]. To ensure their safe use, LLMs are expected to exhibit a refusal mechanism: the ability to comply with non-harmful requests but refuse harmful, unethical, or policy-violating requests [Bai et al., 2022].
This capability is typically instilled via reinforcement learning from human feedback (RLHF) [Ouyang et al., 2022, Christiano et al., 2017, Dai et al., 2024] and other alignment strategies [Yuan et al., 2024, Wallace et al., 2024, Xu et al., 2024]. Despite these efforts, LLMs remain vulnerable to jailbreak attacks, including adversarial prompt engineering [Wei et al., 2023, Zou et al., 2023, Liu et al., 2024a, Tao et al., 2024], where carefully crafted inputs trigger unsafe outputs, and targeted fine-tuning [Yang et al., 2023, Lermen et al., 2024, Zhan et al., 2024], which undermines safety constraints through parameter updates. Notably, cross-lingual jailbreaks have emerged as a growing concern [Yong et al., 2024, Li et al., 2024, Deng et al., 2024], where prompts in non-English languages bypass refusal mechanisms that succeed in English, raising critical questions about the multilingual refusal mechanism in LLMs.

Recent work has revealed that refusal behavior in LLMs is encoded within the model's activation space [Arditi et al., 2024, Wang et al., 2025b]. Specifically, a low-dimensional subspace, often well-approximated by a single vector known as the refusal direction, captures the model's tendency to refuse certain prompts. This insight has enabled controlled bypassing or reinforcement of refusals through simple vector operations. However, these findings have largely been limited to English, leaving a critical question unanswered: How universal are refusal directions across languages?

¹We make our code publicly available at https://github.com/mainlp/Multilingual-Refusal.
∗Equal Contribution. Preprint. Under review.

Refusal is a
https://arxiv.org/abs/2505.17306v1
core pragmatic function present in all human languages, although its surface form may vary across linguistic and cultural contexts [Brown, 1987, Beebe et al., 1990]. Prior work suggests that LLMs often share internal representations across languages [Artetxe et al., 2020, Wei et al., 2021, Hua et al., 2024, Brinkmann et al., 2025] and often rely on English as an implicit pivot in their reasoning processes [Wendler et al., 2024, Wang et al., 2025a, Yong et al., 2025]. These findings motivate our hypothesis: that refusal, as a pragmatic feature, may also generalize across languages, either in an English-centric way (i.e., the refusal direction learned in English transfers to other languages) or, more strongly, universally (i.e., refusal directions derived from any language covered within the LM's abilities are approximately equivalent).

To evaluate this hypothesis, we perform a series of activation-based interventions across multiple languages. To enable this cross-linguistic analysis, we develop PolyRefuse, a dataset containing translated harmful prompts across 14 linguistically diverse languages. We first extract refusal directions from English prompts and assess their effectiveness in modulating refusal behavior in other languages. We then derive refusal directions from three typologically diverse safety-aligned languages and assess their cross-lingual transferability.² Our experiments support the hypothesis, demonstrating a certain universality of refusal directions across safety-aligned languages.

To better understand the underlying cause of this transferability and why cross-lingual jailbreaks still succeed, we analyze the geometric structure of refusal directions and harmfulness representations across languages in the models' embedding space. We find that refusal vectors are approximately parallel across languages, explaining the effectiveness of cross-lingual vector-based interventions.
However, models often fail to separate harmful and harmless prompts in non-English languages. This insufficient separation weakens refusal signals and leaves models vulnerable to jailbreaks. These findings contribute to a deeper mechanistic understanding of how LLMs encode and generalize refusal behavior across languages. By revealing the language-agnostic nature of refusal directions, we also provide actionable insights for developing stronger, more reliable multilingual safety defenses.

2 Related Work

LLM Safety and Refusal Mechanism. In AI safety research, various efforts have been made to prevent LLMs from responding to malicious queries. Notable approaches include supervised fine-tuning (SFT) [Bianchi et al., 2024] and reinforcement learning from human feedback (RLHF) [Bai et al., 2022]. To evaluate the effectiveness of these safety measures, researchers have developed comprehensive safety evaluation datasets. While these datasets initially focused on English [Zou et al., 2023, Mazeika et al., 2024, Xie et al., 2025], recent work has expanded to include multilingual evaluations, revealing concerning vulnerabilities in non-English contexts [Shen et al., 2024, Yong et al., 2024, Wang et al., 2024]. Furthermore, researchers have begun investigating the internal mechanisms that enable LLMs to recognize and refuse harmful requests. Studies examining model representations have identified specific "refusal directions" in the embedding space [Arditi et al., 2024, Marshall et al., 2025]. However, these mechanistic interpretability studies have predominantly focused on English, leaving cross-lingual aspects of refusal mechanisms largely unexplored. This paper addresses this gap by investigating how refusal mechanisms function across different languages.

Multilingual Alignment. A central goal in multilingual
https://arxiv.org/abs/2505.17306v1
natural language processing (NLP) is to develop language-agnostic representations that enable generalization across linguistic boundaries – commonly referred to as cross-lingual transfer [Libovický et al., 2020, Wei et al., 2021, Chang et al., 2022]. Early research primarily focuses on aligning static word embeddings using bilingual dictionaries or parallel corpora [Lample et al., 2018a,b]. With the rise of pretrained language models (PLMs) such as mBERT [Devlin et al., 2019] and XLM-R [Conneau et al., 2020], language-agnosticity has been shown to emerge implicitly from shared vocabulary and other linguistic features [Pires et al., 2019]. To further enhance cross-lingual alignment, techniques such as contrastive learning have been applied during or after pretraining [Chi et al., 2021, Wu et al., 2022, Liu et al., 2024b, Xhelili et al., 2024]. Despite these advancements, recent studies reveal that decoder-only LLMs – typically trained on English-dominated corpora – often rely on English as an implicit pivot during reasoning and decision-making [Wendler et al., 2024, Schut et al., 2025, Wang et al., 2025a, Yong et al., 2025]. However, it remains unclear whether language-agnosticity generalizes to more functional or pragmatic behaviors, such as refusal. Our work addresses this open question by investigating the universality of refusal mechanisms across languages. By analyzing both refusal directions and representational geometry, we provide new insights into how multilingual alignment, or its failure, affects safety-critical behaviors in LLMs.

² We refer to languages that exhibit stable and robust refusal responses – i.e., those resistant to jailbreak attempts and aligned with safety objectives – as safety-aligned languages (cf. §3.1).

3 Background

3.1 Refusal Direction Extraction

Following Zou et al. [2025], Arditi et al.
[2024] and Wang et al. [2025b], we utilize the difference-in-means method [Belrose, 2023] to identify refusal directions within model activations. The method computes the difference of mean activations between harmful prompts D_harmful and harmless prompts D_harmless at a specific layer l and token position i:

r_{i,l} = v^{\text{harmful}}_{i,l} - v^{\text{harmless}}_{i,l}   (1)

where the mean activations are calculated as:

v^{\text{harmful}}_{i,l} = \frac{1}{|D^{(\text{train})}_{\text{harmful}}|} \sum_{t \in D^{(\text{train})}_{\text{harmful}}} x_{i,l}(t), \qquad v^{\text{harmless}}_{i,l} = \frac{1}{|D^{(\text{train})}_{\text{harmless}}|} \sum_{t \in D^{(\text{train})}_{\text{harmless}}} x_{i,l}(t)   (2)

with x_{i,l}(t) denoting the residual-stream activation of the Transformer [Vaswani et al., 2017] at token position i and layer l when processing text t. Candidate refusal vectors are obtained by collecting the difference-in-means vectors across all layers at the final instruction token positions, such as the [/INST] token for Llama2 [Touvron et al., 2023]. The most effective refusal vector is then identified by ablating each candidate from the residual stream and selecting the one that yields the largest drop in refusal score, i.e., the largest reduction in refusal behavior [Arditi et al., 2024]. The refusal score is the difference in log-probability mass between refusal-associated tokens R (e.g., 'Sorry', 'I' for English) and non-refusal tokens V \ R, computed at the initial token position of the model's generation:

\text{Refusal Score} = \log\left(\sum_{t \in R} p_t\right) - \log\left(\sum_{t \in V \setminus R} p_t\right)   (3)

To identify refusal tokens R, we queried the model with both harmful and harmless prompts in each language, then selected the most frequent initial tokens that appeared distinctively in responses to
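The extraction pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration of Eqs. (1)–(3), not the authors' implementation: the `get_activations` helper, the layer/width/vocabulary sizes, and the random activations are hypothetical stand-ins for a real model's residual-stream hooks.

```python
# Sketch of difference-in-means refusal-direction extraction (Eqs. 1-2),
# directional ablation, and the refusal score (Eq. 3). All model-specific
# pieces (get_activations, sizes) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_layers, d_model, vocab_size = 4, 16, 100

def get_activations(prompts, layer):
    """Stand-in for residual-stream activations x_{i,l}(t) at the final
    instruction token position i, one row per prompt t."""
    return rng.normal(size=(len(prompts), d_model))

harmful = [f"harmful-{k}" for k in range(32)]    # D^(train)_harmful
harmless = [f"harmless-{k}" for k in range(32)]  # D^(train)_harmless

# Eqs. 1-2: one candidate refusal vector per layer,
# r_{i,l} = mean(harmful activations) - mean(harmless activations).
candidates = []
for layer in range(n_layers):
    v_harmful = get_activations(harmful, layer).mean(axis=0)
    v_harmless = get_activations(harmless, layer).mean(axis=0)
    candidates.append(v_harmful - v_harmless)

def ablate(x, r):
    """Remove the component of activation x along candidate direction r;
    the best candidate is the one whose ablation most reduces refusal."""
    r_hat = r / np.linalg.norm(r)
    return x - (x @ r_hat) * r_hat

def refusal_score(next_token_probs, refusal_token_ids):
    """Eq. 3: log of probability mass on refusal tokens R minus log of
    mass on the rest of the vocabulary V \\ R, at the first generated
    token position."""
    p = np.asarray(next_token_probs)
    in_r = np.zeros(len(p), dtype=bool)
    in_r[refusal_token_ids] = True
    return np.log(p[in_r].sum()) - np.log(p[~in_r].sum())
```

In a real setting, `get_activations` would be replaced by forward hooks on the model's residual stream, and `refusal_score` would be evaluated on the model's first-token distribution before and after ablating each layer's candidate vector.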