Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3835–3855, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.213. URL https://aclanthology.org/2023.acl-long.213/.
[27] W. K. Kirchner. Age differences in short-term retention of rapidly changing information. Journal of Experimental Psychology, 55(4):352, 1958.
[28] T. Kočiský, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018. doi: 10.1162/tacl_a_00023. URL https://aclanthology.org/Q18-1023/.
[29] K. Kogkalidis, J.-P. Bernardy, and V. Garg. Algebraic positional encodings. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=PfOeAKxx6i.
[30] K. Krohn and J. Rhodes. Algebraic theory of machines. I. Prime decomposition theorem for finite semigroups and machines. Transactions of the American Mathematical Society, 116(2):450–464, 1965.
[31] Y. Kuratov, A. Bulatov, P. Anokhin, I. Rodkin, D. I. Sorokin, A. Sorokin, and M. Burtsev. BABILong: Testing the limits of LLMs with long context reasoning-in-a-haystack. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. URL https://openreview.net/forum?id=u7m2CG84BQ.
[32] W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. E. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, 2023. URL https://api.semanticscholar.org/CorpusID:261697361.
[33] Y. Leviathan, M. Kalman, and Y. Matias. Selective attention improves transformer. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=v0FzmPCd1e.
[34] B. Z. Li, Z. C. Guo, and J. Andreas. (How) do language models track state?, 2025. URL https://arxiv.org/abs/2503.02854.
[35] S. Li, F. Xue, C. Baranwal, Y. Li, and Y. You. Sequence parallelism: Long sequence training from system perspective. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada, July 2023. Association for Computational Linguistics.
[36] Z. Lin, E. Nikishin, X. He, and A. Courville. Forgetting transformer: Softmax attention with a forget gate. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=q2Lnyegkr8.
[37] Z. Lin, J. Obando-Ceron, X. O. He, and A. Courville. Adaptive computation pruning for the forgetting transformer, 2025. URL https://arxiv.org/abs/2504.06949.
[38] B. Liu, J. T. Ash, S. Goel, A. Krishnamurthy, and C. Zhang. Exposing attention glitches with flip-flop language modeling. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=VzmpXQAn6E.
[39] B. Liu, J. T. Ash, S. Goel, A. Krishnamurthy, and C. Zhang. Transformers learn shortcuts to automata, 2023. URL https://arxiv.org/abs/2210.10749.
[40] H. Liu, M. Zaharia, and P. Abbeel. Ring attention with blockwise transformers for near-infinite context. arXiv, abs/2310.01889, 2023.
[41] X. Liu, H. Yan, S. Zhang, C. An, X. Qiu, and D. Lin. Scaling laws of RoPE-based extrapolation, 2024. URL https://arxiv.org/abs/2310.05209.
[42] A. Liutkus, O. Cífka, S.-L. Wu, U. Şimşekli, Y.-H. Yang, and G. Richard. Relative positional encoding for transformers with linear complexity. In International Conference on Machine Learning, pages 7067–7079. PMLR, 2021.
[43] I. Loshchilov and F. Hutter. Fixing weight decay regularization in Adam. In International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=rk6wfqLU-.
[44] A. Mathiasen, F. Hvilshøj, J. R. Jørgensen, A. Nasery, and D. Mottin. Faster orthogonal parameterization with Householder matrices. In ICML Workshop on Invertible Neural Networks and Normalizing Flows, 2020.
[45] A. Mathiasen, F. Hvilshøj, J. R. Jørgensen, A. Nasery, and D. Mottin. What if neural networks had SVDs?, Sept. 2020. URL http://arxiv.org/abs/2009.13977.
[46] W. Merrill and A. Sabharwal. The expressive power of transformers with chain of thought. arXiv preprint arXiv:2310.07923, 2023.
[47] W. Merrill and A. Sabharwal. A logic for expressing log-precision transformers, 2023. URL https://arxiv.org/abs/2210.02671.
[48] W. Merrill, J. Petty, and A. Sabharwal. The illusion of state in state-space models, Apr. 2024. URL http://arxiv.org/abs/2404.08819.
[49] M. Milakov and N. Gimelshein. Online normalizer calculation for softmax, 2018. URL https://arxiv.org/abs/1805.02867.
[50] D. Paperno, G. Kruszewski, A. Lazaridou, Q. N. Pham, R. Bernardi, S. Pezzelle, M. Baroni, G. Boleda, and R. Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context, June 2016. URL http://arxiv.org/abs/1606.06031.
[51] G. Penedo, H. Kydlíček, L. B. Allal, A. Lozhkov, M. Mitchell, C. Raffel, L. V. Werra, and T. Wolf. The FineWeb datasets: Decanting the web for the finest text data at scale. Nov. 2024. URL https://openreview.net/forum?id=n6SCkn2QaG#discussion.
[52] B. Peng, J. Quesnelle, H. Fan, and E. Shippole. YaRN: Efficient context window extension of large language models, 2023. URL https://arxiv.org/abs/2309.00071.
[53] B. Peng, R. Zhang, D. Goldstein, E. Alcaide, H. Hou, J. Lu, W. Merrill, G. Song, K. Tan, S. Utpala, N. Wilce, J. S. Wind, T. Wu, D. Wuttke, and C. Zhou-Zheng. RWKV-7 "Goose" with expressive dynamic state evolution, 2025. URL https://arxiv.org/abs/2503.14456.
[54] N. Prakash, T. R. Shaham, T. Haklay, Y. Belinkov, and D. Bau. Fine-tuning enhances existing mechanisms: A case study on entity tracking. Oct. 2023. URL https://openreview.net/forum?id=8sKcAWOf2D.
[55] O. Press, N. A. Smith, and M. Lewis. Train short, test long: Attention with linear biases enables input length extrapolation, 2022. URL https://arxiv.org/abs/2108.12409.
[56] Z. Qin, W. Sun, K. Lu, H. Deng, D. Li, X. Han, Y. Dai, L. Kong, and Y. Zhong. Linearized relative positional encoding. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=xoLyps2qWc.
[57] M. N. Rabe and C. Staats. Self-attention does not need O(n^2) memory, Oct. 2022. URL http://arxiv.org/abs/2112.05682.
[58] J. W. Rae, A. Potapenko, S. M. Jayakumar, C. Hillier, and T. P. Lillicrap. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SylKikSYDH.
[59] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
[60] K. Sakaguchi, R. L. Bras, C. Bhagavatula, and Y. Choi. WinoGrande: An adversarial Winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020), New York, NY, USA, February 7–12, 2020, pages 8732–8740. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/6399.
[61] I. Schlag, K. Irie, and J. Schmidhuber. Linear transformers are secretly fast weight programmers, 2021. URL https://arxiv.org/abs/2102.11174.
[62] I. Schlag, K. Irie, and J. Schmidhuber. Linear transformers are secretly fast weight programmers. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning (ICML 2021), volume 139 of Proceedings of Machine Learning Research, pages 9355–9366. PMLR, 2021.
[63] R. Schreiber and C. Van Loan. A storage-efficient WY representation for products of Householder transformations. SIAM Journal on Scientific and Statistical Computing, 10(1):53–57, 1989.
[64] J. Shah, G. Bikshandi, Y. Zhang, V. Thakkar, P. Ramani, and T. Dao. FlashAttention-3: Fast and accurate attention with asynchrony and low-precision. Nov. 2024. URL https://openreview.net/forum?id=tVConYid20.
[65] P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations, 2018. URL https://arxiv.org/abs/1803.02155.
[66] Y. Shen, Z. Zhang, T. Cao, S. Tan, Z. Chen, and C. Gan. ModuleFormer: Modularity emerges from mixture-of-experts, 2023. URL https://arxiv.org/abs/2306.04640.
[67] J. Siems, T. Carstensen, A. Zela, F. Hutter, M. Pontil, and R. Grazzi. DeltaProduct: Increasing the expressivity of DeltaNet through products of Householders, 2025. URL https://arxiv.org/abs/2502.10297.
[68] B. F. Spector, S. Arora, A. Singhal, A. Parthasarathy, D. Y. Fu, and C. Re. ThunderKittens: Simple, fast, and adorable kernels. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=0fJfVOSUra.
[69] J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. RoFormer: Enhanced transformer with rotary position embedding, 2023. URL https://arxiv.org/abs/2104.09864.
[70] S. Tan, S. Yang, A. Courville, R. Panda, and Y. Shen. Scaling stick-breaking attention: An efficient implementation and in-depth study. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=r8J3DSD5kF.
[71] A. Terzić, M. Hersche, G. Camposampiero, T. Hofmann, A. Sebastian, and A. Rahimi. On the expressiveness and length generalization of selective state-space models on regular languages. In Proceedings of the AAAI Conference on Artificial Intelligence, 2025.
[72] P. Tillet, H.-T. Kung, and D. D. Cox. Triton: An intermediate language and compiler for tiled neural network computations. In T. Mattson, A. Muzahid, and A. Solar-Lezama, editors, Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages (MAPL@PLDI 2019), pages 10–19. ACM, 2019. doi: 10.1145/3315508.3329973.
[73] A. E. Tomás Dominguez and E. S. Quintana Orti. Fast blocking of Householder reflectors on graphics processors. In 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), pages 385–393, Mar. 2018. doi: 10.1109/PDP2018.2018.00068. URL https://ieeexplore.ieee.org/document/8374491.
[74] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2023. URL https://arxiv.org/abs/1706.03762.
[75] A. Veisi and A. Mansourian. Context-aware biases for length extrapolation, 2025. URL https://arxiv.org/abs/2503.08067.
[76] S. Yang. DeltaNet explained (part II), 2024. URL https://sustcsonglin.github.io/blog/2024/deltanet-2/. Accessed: 2025-03-26.
[77] S. Yang and Y. Zhang. FLA: A Triton-based library for hardware-efficient implementations of linear attention mechanisms, Jan. 2024. URL https://github.com/sustcsonglin/flash-linear-attention.
[78] S. Yang, B. Wang, Y. Shen, R. Panda, and Y. Kim. Gated linear attention transformers with hardware-efficient training. CoRR, abs/2312.06635, 2023. doi: 10.48550/arXiv.2312.06635.
[79] S. Yang, B. Wang, Y. Zhang, Y. Shen, and Y. Kim. Parallelizing linear transformers with the delta rule over sequence length. In Proceedings of NeurIPS, 2024.
[80] S. Yang, J. Kautz, and A. Hatamizadeh. Gated delta networks: Improving Mamba2 with delta rule. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=r8H7xhYPwz.
[81] H. P. Zeiger. Cascade synthesis of finite-state machines. In SWCT, 1965. URL https://api.semanticscholar.org/CorpusID:30796093.
[82] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In A. Korhonen, D. Traum, and L. Màrquez, editors, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy, 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472.
[83] Y. Zhang, W. Du, D. Jin, J. Fu, and Z. Jin. Finite state automata inside transformers with chain-of-thought: A mechanistic study on state tracking, 2025. URL https://arxiv.org/abs/2502.20129.
[84] C. Zheng, Y. Gao, H. Shi, M. Huang, J. Li, J. Xiong, X. Ren, M. Ng, X. Jiang, Z. Li, and Y. Li. DAPE: Data-adaptive positional encoding for length extrapolation. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=rnUEUbRxVu.
[85] C. Zheng, Y. Gao, H. Shi, J. Xiong, J. Sun, J. Li, M. Huang, X. Ren, M. Ng, X. Jiang, Z. Li, and Y. Li. DAPE v2: Process attention score as feature map for length extrapolation, 2024. URL https://arxiv.org/abs/2410.04798.
[86] Y. Zhou, U. Alon, X. Chen, X. Wang, R. Agarwal, and D. Zhou. Transformers can achieve length generalization but not robustly, 2024. URL https://arxiv.org/abs/2402.09371.

A Representation Power of Transformers with PaTH Attention

We state two theorems that illustrate the representation power of transformers equipped with PaTH attention. The first shows that a PaTH attention layer can solve the problem of tracking iterative swaps on 5 elements, which is $NC^1$-complete under $AC^0$ reductions. This theorem and its proof are an adaptation of Theorem 2 of Peng et al. [53].

Theorem 2.1. A one-layer PaTH transformer with two attention heads and $\log n$ precision can solve an $NC^1$-complete problem under $AC^0$ reductions.

Proof. As in Lemma 2 of Peng et al. [53], consider the task of deciding whether $n$ iterative swaps of 5 elements compose to the identity permutation. This task consists of an input sequence
$c = c_0 c_1 \ldots c_n$ of length $n + 1$,
$$\#\ [a_1 \leftrightarrow b_1]\ [a_2 \leftrightarrow b_2] \ldots [a_n \leftrightarrow b_n],$$
where $c_0 = \#$ is the start token and $c_1 = [a_1 \leftrightarrow b_1], \ldots, c_n = [a_n \leftrightarrow b_n]$ are "tokens" indicating that position $a_n$ is swapped with position $b_n$ at time $n$. (Hence there are 20 possible swap tokens of the form $[x \leftrightarrow y]$ over all pairs $x, y \in \{1, \ldots, 5\}$ with $x \neq y$.) Given this sequence, we show that there is a one-layer PaTH transformer with two attention heads that outputs $1$ if the sequence encodes the identity permutation and $-1$ otherwise. As noted in previous works [48, 53], this suffices because there is an $AC^0$ reduction from a well-known $NC^1$-complete problem (iterated multiplication in $S_5$) to this task.

We first embed $\#$ and all 20 $[x \leftrightarrow y]$ tokens as distinct one-hot vectors. Given a token $u \in \Sigma$ with associated one-hot vector $\mathbf{u}$, we choose the key/query/value/PaTH projection matrices $W_k, W_q, W_v, W_w \in \mathbb{R}^{6 \times 21}$ for the first attention head such that
$$W_k \mathbf{u} = k[u] = \mathbb{1}\{u = \#\}\,(e_1 + 2e_2 + 3e_3 + 4e_4 + 5e_5 - e_6),$$
$$W_q \mathbf{u} = q[u] = n\,(e_1 + 2e_2 + 3e_3 + 4e_4 + 5e_5 + 54.5\,e_6),$$
$$W_w \mathbf{u} = w[u] = (e_x - e_y)/\sqrt{2} \ \text{ for } u = [x \leftrightarrow y], \text{ and } 0 \text{ if } u = \#,$$
$$W_v \mathbf{u} = v[u] = \mathbb{1}\{u = \#\}\,e_1, \qquad \beta = 2.$$
(Hence the query vectors and $\beta$ are input-independent.) In this case, as in Lemma 1 of [53], the one-step PaTH transformation is a true Householder transformation,
$$H[u] = I - 2\,w[u]\,w[u]^\top \in \mathbb{R}^{6 \times 6},$$
and effectively swaps $x$ with $y$. Now suppose the initial list is $[1, 2, 3, 4, 5]$, and let $\pi(i)$ be the $i$-th element of the final permuted list after the $n$ swaps. We then have
$$k[c_0]^\top \prod_{s=1}^{n} H_s = \Big(\sum_{i=1}^{5} i\, e_{\pi(i)} - e_6\Big)^{\!\top},$$
and the attention logit from position $n$ to position $0$ is
$$s_0 = k[c_0]^\top \Big(\prod_{s=1}^{n} H_s\Big)\, q[c_n] = n\Big(\sum_{i=1}^{5} i\,\pi(i) - 54.5\Big).$$
By the rearrangement inequality,
$$\sum_{i=1}^{5} i\,\pi(i) \le \sum_{i=1}^{5} i^2 = 55,$$
with equality holding if and only if $\pi(i) = i$ for all $i$. Therefore $s_0 \ge 0.5n$ if the final list equals the initial list (i.e., the identity permutation) and $s_0 \le -0.5n$ otherwise. Because $k[u] = 0$ for all $u \neq \#$, the attention logits $s_l$ for all $l > 0$ are $0$. The attention weight on the first position is then $a_0 = \frac{\exp(s_0)}{\exp(s_0) + n}$, which is greater than $\frac{1}{n+1}$ if $s_0 > 0$ (i.e., the permutation is the identity) and less than $\frac{1}{n+1}$ otherwise. Since the value vector is $e_1$ for $c_0$ and $0$ otherwise, the output of this attention head is
$$\sum_{l=0}^{n} a_l\, v[c_l] = \frac{\exp(s_0)}{\exp(s_0) + n}\, e_1.$$
The second attention head is data-independent, with $W_k = W_q = W_w = 0$ and the same value matrix $W_v$ as above, so its output is always $\frac{1}{n+1} e_1$ regardless of the input. Concatenating the outputs of the two heads gives the 12-dimensional vector
$$\Big(\tfrac{\exp(s_0)}{\exp(s_0)+n},\ 0, 0, 0, 0, 0,\ \tfrac{1}{n+1},\ 0, 0, 0, 0, 0\Big),$$
i.e., the first dimension is $\frac{\exp(s_0)}{\exp(s_0)+n}$ and the 7th dimension is $\frac{1}{n+1}$. We can now use an output projection matrix $W_o$ that subtracts the 7th dimension from the 1st (i.e., whose first row is $[1, 0, 0, 0, 0, 0, -1, 0, 0, 0, 0, 0]$). The first dimension of the result is positive if the permutation is the identity and negative otherwise. We then use the FFN layer with a $\mathrm{sign}(\cdot)$ nonlinearity (or a steep $\tanh$) to clamp this output to $\{-1, +1\}$.

We do not explicitly need the $\log n$ precision assumption here, but the construction can be represented in $\log n$ precision while preserving the same functionality. We include this assumption to ensure that we use the same or a weaker precision assumption than previous works on the circuit complexity of transformers (Merrill and Sabharwal [47], Chen et al. [7], and references therein). The proof would be simpler if we incorporated an $O(\log n)$ precision assumption directly, since in that case the output of the softmax is $1$ when the final list matches the original list and $0$ otherwise (i.e., there is no need for the second attention head).
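As a sanity check on this construction, the following short NumPy sketch (ours, not from the paper) builds the Householder matrices $H[u]$ and evaluates the logit $s_0$; the constants (the coefficients $1, \ldots, 5$, the offset $54.5$, and $\beta = 2$) are taken from the proof above, and `logit_s0` is a hypothetical helper name.

```python
import numpy as np

def householder(x, y, dim=6):
    # H = I - 2 w w^T with w = (e_x - e_y)/sqrt(2); swaps coordinates x and y.
    w = np.zeros(dim)
    w[x - 1], w[y - 1] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    return np.eye(dim) - 2 * np.outer(w, w)

def logit_s0(swaps):
    # k[c0] = (1, 2, 3, 4, 5, -1); q = n * (1, 2, 3, 4, 5, 54.5)
    n = len(swaps)
    k = np.array([1, 2, 3, 4, 5, -1.0])
    q = n * np.array([1, 2, 3, 4, 5, 54.5])
    H = np.eye(6)
    for (x, y) in swaps:
        H = H @ householder(x, y)
    return k @ H @ q

# A swap sequence that composes to the identity -> s0 = 0.5 * n.
print(logit_s0([(1, 2), (1, 2), (3, 4), (3, 4)]))  # 2.0 (= 0.5 * 4)
# A non-identity composition -> s0 <= -0.5 * n.
print(logit_s0([(1, 2), (3, 4)]))                   # -3.0 (negative)
```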
Theorem A.1. For any $n$, there is a two-layer PaTH transformer with $O(\log n)$ precision that solves the flip-flop language modeling (FFLM) task with accuracy greater than $1 - 1/n^{100}$ for all inputs up to length $n$.

Proof. Recall that in FFLM there are five types of input tokens: w, i, r, 0, 1. We now present a construction of the two-layer transformer with PaTH attention. The token embeddings are
$$\mathrm{emb}(\texttt{w}) = e_1 + e_6, \quad \mathrm{emb}(\texttt{r}) = e_2 + e_6, \quad \mathrm{emb}(\texttt{i}) = e_3 + e_6, \quad \mathrm{emb}(\texttt{0}) = e_4 + e_6, \quad \mathrm{emb}(\texttt{1}) = e_5 + e_6,$$
where $e_i$ is the $i$-th one-hot basis vector.

The first attention layer implements one-hot attention from the bit tokens 0 and 1 to their corresponding instruction tokens. To achieve this, we choose $W_k, W_q, W_w, W_v$ such that
$$W_k h = (h_1 + h_2 + h_3)\, e_1, \quad W_q h = n h_6\, e_1, \quad W_w h = (h_1 + h_2 + h_3)\, e_1 + (h_4 + h_5)\, e_2,$$
$$W_v h = h_1 e_7 + h_2 e_8 + h_3 e_9, \quad \beta = 1.$$
The transition matrix is then
$$H = \begin{cases} I - e_1 e_1^\top, & \text{if the input is in } \{\texttt{w}, \texttt{r}, \texttt{i}\}, \\ I - e_2 e_2^\top, & \text{if the input is in } \{\texttt{0}, \texttt{1}\}, \end{cases}$$
i.e., the transition matrix projects the first dimension to $0$ for the instruction tokens $\{\texttt{w}, \texttt{r}, \texttt{i}\}$ and the second dimension to $0$ for the bit tokens $\{\texttt{0}, \texttt{1}\}$. Similarly, the key vector $k_i$ is $e_1$ if the $i$-th token is an instruction token, and $0$ otherwise. Therefore, when the $i$-th token is 0 or 1,
$$k_j^\top \prod_{s=j+1}^{i} H_s\, q_i \neq 0$$
if and only if $j = i - 1$, in which case it equals $n$. Because we consider an $O(\log n)$-precision transformer, the attention scores after the softmax become one-hot for every bit token. After this attention layer, the 7th to 9th dimensions of the bit tokens encode the instruction type of the previous token.

The first FFN layer maps dimensions 1 to 9 of the 0 and 1 tokens to a one-hot embedding for each value and corresponding instruction type:
$$\mathrm{FFN}(h)_i = 0 \text{ for } i \notin \{10, 11, 12\}, \quad \mathrm{FFN}(h)_{10} = \mathbb{1}\{h_4 = 1,\, h_7 = 1\}, \quad \mathrm{FFN}(h)_{11} = \mathbb{1}\{h_5 = 1,\, h_7 = 1\}, \quad \mathrm{FFN}(h)_{12} = 1 \text{ otherwise},$$
with $\mathbb{1}\{\cdot\}$ the indicator function. Specifically, the 10th dimension is $1$ for every 0 following a w, and the 11th dimension is $1$ for every 1 following a w.

The second attention layer operates on the 10th and 11th dimensions of the input embedding and implements
$$W_k h = (h_{10} + h_{11})\, e_1, \quad W_q h = n h_6\, e_1, \quad W_w h = (h_{10} + h_{11})\, e_1 + h_{12}\, e_2,$$
$$W_v h = h_8 e_{13} + h_9 e_{14}, \quad \beta(h) = \mathbb{1}\{h_8 + h_9 > 0\}.$$
Here we assume $\beta$ can be a step function (alternatively, a steep-enough logistic function behaves as a step function at the precision considered). Thus, for every token that is not a 0 or 1 following w, the transition matrix is the identity; for a 0 or 1 following w, the transition matrix projects the first dimension to $0$. Then, for any $i \ge 2$,
$$k_j^\top \prod_{s=j+1}^{i} H_s\, q_i \neq 0$$
if and only if $j$ is the largest position with $j \le i$ whose token is a 0 or 1 following w. Such a $j$ is guaranteed to exist because in FFLM the first token is always w. In this case the term equals $n$. By the same argument as for the first layer, the attention becomes one-hot, and the attention output encodes the value of the last 0 or 1 token following a w. By the definition of the flip-flop automaton, this is the current state.

The second FFN layer operates on the 13th and 14th dimensions of the input:
$$\mathrm{FFN}(h)_i = 0 \text{ for } i \notin \{15, 16\}, \quad \mathrm{FFN}(h)_{15} = \mathbb{1}\{h_2 = 1,\, h_{13} = 1\}, \quad \mathrm{FFN}(h)_{16} = \mathbb{1}\{h_2 = 1,\, h_{14} = 1\}.$$
Specifically, the 15th and 16th dimensions of the output encode the state value for each r token. After this layer, dimensions 1, 3, 4, 5, 15, and 16 of the embedding are one-hot, each corresponding to a different output distribution in FFLM. Finally, the LM head maps dimensions 1, 3, 4, 5, 15, and 16 to the corresponding next-token logits before the softmax. Concretely,
$$W_{\mathrm{LM}} h = (T e_4 + T e_5)(h_1 + h_3) + (T e_1 + T e_2 + T e_3)(h_4 + h_5) + n e_4 h_{15} + n e_5 h_{16}.$$
Here $T \approx \log n$ is an appropriate number such that the softmax over $T e_4 + T e_5$, and over $T e_1 + T e_2 + T e_3$, yields a uniform distribution over the corresponding support with error smaller than $1/n^{101}$.
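For reference, the flip-flop semantics that this construction must reproduce can be stated in a few lines; the sketch below is our illustration, not part of the proof, and simulates the automaton while collecting the bit that each read instruction must emit.

```python
def fflm_trace(tokens):
    """Simulate the flip-flop automaton underlying FFLM.

    Tokens come from {'w', 'r', 'i', '0', '1'}; each instruction (w/r/i)
    is followed by a bit.  'w b' writes b into the single-bit state,
    'i b' ignores b, and the bit after each 'r' must equal the state.
    """
    state, reads, prev = None, [], None
    for tok in tokens:
        if tok in ('0', '1'):
            if prev == 'w':
                state = tok          # write: overwrite the state
            elif prev == 'r':
                reads.append(state)  # read: the only correct bit here
            # prev == 'i': the bit is ignored entirely
        prev = tok
    return state, reads

# w 1  i 0  r ?: after writing 1 and ignoring 0, a read must emit '1'.
print(fflm_trace(['w', '1', 'i', '0', 'r', '1']))  # ('1', ['1'])
```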
| Task | Example | Evaluation Focus |
|---|---|---|
| Task 1: Single Supporting Fact | Mary went to the bathroom. John moved to the hallway. Mary travelled to the office. Q: Where is Mary? A: office | Identify a single explicit fact from context. |
| Task 2: Two Supporting Facts | John is in the playground. John picked up the football. Bob went to the kitchen. Q: Where is the football? A: playground | Combine two clues to infer an object's location. |
| Task 3: Three Supporting Facts | John picked up the apple. John went to the office. John went to the kitchen. John dropped the apple. Q: Where was the apple before the kitchen? A: office | Track object movement and temporal order. |
| Task 4: Two Argument Relations | Office is north of bedroom. Bedroom is north of bathroom. Kitchen is west of garden. Q1: What is north of bedroom? A: office. Q2: What is bedroom north of? A: bathroom | Reason over spatial relationships. |
| Task 5: Three Argument Relations | Mary gave the cake to Fred. Fred gave the cake to Bill. Jeff was given the milk by Bill. Q1: Who gave the cake to Fred? A: Mary. Q2: Who did Fred give the cake to? A: Bill | Transitive reasoning over possession chains. |

Table 4: Descriptions and examples of the first five bAbI tasks. Each task highlights a specific reasoning skill required for successful question answering.

B Experiment continued

[Figure 5: BABILong performance breakdowns. QA1: single supporting fact. QA2: two supporting facts. QA3: three supporting facts. QA4: two argument relations. QA5: three argument relations.]

Hyperparameter settings. All models are trained with AdamW [43], using a cosine learning-rate schedule with a 1B-token warmup. The peak learning rate is 1e-3, with both the initial and final rates set to 3e-5. We apply a weight decay of 0.01 and gradient clipping at 1.0. The batch size is 2M tokens. Parameters are initialized with a standard deviation of 0.02. Each 760M model is trained on 8 H100 GPUs for 2 to 3 days. For synthetic tasks, we use A100 GPUs, completing training within several hours.
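The schedule above translates into roughly the following PyTorch setup. This is a hedged sketch rather than the authors' released code: `model` is a stand-in module, `total_steps` is a placeholder, and the 500-step warmup is our inference from the 1B-token warmup at a 2M-token batch size.

```python
import math
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the actual 760M model
warmup_steps, total_steps = 500, 10_000  # 1e9 warmup tokens / 2e6 per batch = 500
peak_lr, floor_lr = 1e-3, 3e-5

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=0.01)

def lr_at(step):
    # Linear warmup from floor_lr to peak_lr, then cosine decay back to floor_lr.
    if step < warmup_steps:
        return floor_lr + (peak_lr - floor_lr) * step / warmup_steps
    t = (step - warmup_steps) / (total_steps - warmup_steps)
    return floor_lr + 0.5 * (peak_lr - floor_lr) * (1 + math.cos(math.pi * t))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: lr_at(s) / peak_lr)
# Gradient clipping at 1.0 would be applied each step before optimizer.step():
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```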
BABILong. Figure 5 presents the performance breakdown across sub-tasks and sequence lengths. Task descriptions are provided in Table 4.

LongBench-E. Detailed results are presented in Table 5.

| Category | Dataset | FoX (0–4k) | FoX-PaTH (0–4k) | PaTH (0–4k) | RoPE (0–4k) | FoX (4–8k) | FoX-PaTH (4–8k) | PaTH (4–8k) | RoPE (4–8k) | FoX (8–16k) | FoX-PaTH (8–16k) | PaTH (8–16k) | RoPE (8–16k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| QA | 2wikimqa | 21.0 | 23.7 | **28.7** | 23.9 | 15.3 | **22.5** | 20.8 | 0.9 | **9.4** | 8.4 | 7.3 | 0.1 |
| QA | hotpotqa | 20.3 | 16.2 | 19.0 | **25.2** | 9.3 | 16.1 | **22.8** | 0.8 | 5.6 | 7.7 | **8.8** | 0.4 |
| QA | multifieldqa_en | 39.1 | **39.6** | 38.6 | 18.0 | 24.9 | **31.4** | 27.2 | 5.1 | 16.0 | **19.5** | 19.2 | 1.9 |
| QA | qasper | 22.4 | 24.6 | **25.9** | 15.1 | 14.9 | **19.8** | 16.8 | 1.8 | 7.0 | 10.1 | **10.6** | 1.9 |
| Summarization | multi_news | 9.1 | 6.9 | **12.1** | 10.2 | 7.3 | **9.8** | 9.6 | 3.1 | 6.1 | **8.3** | **8.3** | 1.7 |
| Summarization | gov_report | 14.4 | 10.2 | **22.3** | 12.4 | 14.5 | 13.6 | **17.9** | 4.9 | 5.9 | **11.9** | 11.6 | 2.5 |
| Few-shot | trec | 35.0 | 36.7 | **40.0** | 23.3 | 27.5 | 26.3 | **35.0** | 1.2 | 20.6 | **26.3** | 20.0 | 0.0 |
| Few-shot | triviaqa | 33.2 | 28.9 | **36.0** | 21.8 | 18.2 | 27.6 | **32.0** | 2.8 | 13.7 | **31.6** | 18.4 | 0.4 |
| Few-shot | samsum | 21.4 | **27.1** | 26.8 | 19.3 | 16.9 | **27.6** | 23.6 | 3.2 | 9.1 | **15.7** | 15.6 | 0.7 |
| Code | lcc | 19.2 | 21.4 | **22.3** | 22.1 | 18.8 | **23.3** | 18.6 | 7.9 | 18.2 | 18.9 | **19.0** | 4.8 |
| Code | repobench-p | 21.8 | 22.7 | **27.3** | 14.6 | 18.4 | 22.5 | **22.7** | 9.2 | 17.5 | **19.3** | 19.2 | 7.6 |
| | Average | 23.4 | 23.5 | **27.2** | 18.7 | 16.9 | 21.9 | **22.5** | 3.7 | 11.7 | **16.1** | 14.4 | 2.0 |

Table 5: Performance comparison grouped by task category and context-length bucket (0–4k, 4–8k, 8–16k). Each bolded value indicates the best model score for the respective dataset and length bucket.
arXiv:2505.16385v1 [cs.CL] 22 May 2025

Semantic Pivots Enable Cross-Lingual Transfer in Large Language Models

Kaiyu He(1,3), Tong Zhou(1), Yubo Chen(1,2), Delai Qiu(4), Shengping Liu(4), Kang Liu(1,2,5), Jun Zhao(1,2)
(1) The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
(2) School of Artificial Intelligence, University of Chinese Academy of Sciences
(3) University of Science and Technology Beijing
(4) Unisound AI Technology Co., Ltd. (5) Shanghai Artificial Intelligence Laboratory
kaiyu_he398@163.com, tong.zhou@ia.ac.cn, {yubo.chen, kliu, jzhao}@nlpr.ia.ac.cn, {qiudelai, liushengping}@unisound.com

Abstract

Large language models (LLMs) demonstrate remarkable ability in cross-lingual tasks. Understanding how LLMs acquire this ability is crucial for their interpretability. To quantify the cross-lingual ability of LLMs accurately, we propose a Word-Level Cross-Lingual Translation Task. To find how LLMs learn cross-lingual ability, we trace the outputs of LLMs' intermediate layers in the word translation task. We identify and distinguish two distinct behaviors in the forward pass of LLMs: co-occurrence behavior and semantic pivot behavior. We attribute LLMs' two distinct behaviors to the co-occurrence frequency of words and find the semantic pivots in the pre-training dataset. Finally, to apply our findings to improving the cross-lingual ability of LLMs, we reconstruct a semantic pivot-aware pre-training dataset using documents with a high proportion of semantic pivots. Our experiments validate the effectiveness of our approach in enhancing cross-lingual ability. Our research contributes insights into the interpretability of LLMs and offers a method for improving their cross-lingual ability.

1 Introduction

Multilingual large language models (MLLMs) demonstrate excellent cross-lingual ability (Le Scao et al., 2023; AI@Meta, 2024; Dang et al., 2024) on various cross-lingual tasks, even in some low-resource languages (Lai et al., 2023b). However, several critical questions remain unresolved: how to quantify the cross-lingual abilities of LLMs, how those abilities are established, and how to enhance them.

Quantifying the cross-lingual ability of LLMs is crucial to understanding how they acquire it. Prior evaluations of LLMs have primarily focused on multilingual capabilities (Lewis et al., 2019). Existing studies evaluating cross-lingual abilities mainly focus on machine translation tasks (Goyal et al., 2021). The discontinuity of cross-lingual ability scores measured in sentence-level translation tasks hinders the analysis of a model's cross-lingual ability, especially for early checkpoints with limited or poor linguistic capabilities. To address this shortcoming, our work introduces a word-level translation task and proposes a continuous metric computed from LLMs' output logits to accurately quantify their cross-lingual ability.

The interpretability of LLMs is crucial for explaining how they learn cross-lingual ability. Chen et al. (2024) describe the entire process of LLMs acquiring new language capabilities. Wendler et al. (2024) discover that Llama-2 models internally use English as a pivot language. Zhong et al. (2024) further demonstrate that an LLM's pivot language is correlated with the pre-training corpus. However, these works do not attribute LLMs' cross-lingual ability to the details of the pre-training corpus.
We aim to explore the intrinsic relationship between LLMs' cross-lingual ability and the content of the pre-training corpus.

It remains a challenge to propose a method that can
effectively improve cross-lingual ability. Previous studies utilize extensive parallel corpora to improve LLMs' cross-lingual ability (Yang et al., 2023). However, using excessive parallel data can have adverse effects, potentially due to catastrophic forgetting (Xu et al., 2023). Based on our findings, we aim to effectively improve cross-lingual ability by reconstructing the pre-training dataset.

In this paper, to quantify the cross-lingual ability of LLMs accurately, we propose a Cross-Lingual Word Translation Dataset (CLWTD) to evaluate LLMs' cross-lingual ability, covering four languages, together with a corresponding metric. By focusing on word-level translation tasks, we simplify both the analysis of LLMs' internal states and the search over the pre-training corpus. We assess the cross-lingual ability of LLMs of different sizes and series and obtain the corresponding cross-lingual ability score matrices. To explain how models acquire cross-lingual ability, we analyze the outputs of LLMs' intermediate layers using the logit lens and discover two distinct behaviors: co-occurrence behavior and semantic pivot behavior. Unlike Wendler et al. (2024), we focus on the relation between the intermediate-layer outputs, the input, and the final output. With the tools WIMBD (Elazar et al., 2023) and Infini-gram (Liu et al., 2024a), we obtain the co-occurrence frequency of strings in the pre-training dataset and the corresponding documents, and we attribute LLMs' two behaviors to this co-occurrence frequency. To find the semantic pivots in the pre-training dataset when an LLM exhibits semantic pivot behavior, we provide a method that selects the tokens with a high relative co-occurrence proportion with both the source word and the target word. Finally, to apply our findings to enhancing LLMs' cross-lingual ability, we construct a dataset with a high proportion of semantic pivots: we build an adjacency matrix to obtain a set of semantic pivots and select pre-training data by ranking documents based on the proportion of tokens that belong to this set. We train the OLMo-1B model on our dataset and compare it to two baseline models, demonstrating the effectiveness of our approach in improving cross-lingual ability.

In summary, our key contributions are as follows:

• We propose a Cross-Lingual Word Translation Dataset (CLWTD) to evaluate LLMs' cross-lingual ability. Furthermore, we compare our metric with chrF++, and the results demonstrate that our method precisely reflects the cross-lingual ability of models.

• We identify and distinguish LLMs' two behaviors. We attribute the two distinct behaviors to the co-occurrence frequency of words, quantified by the AUC score. We find the semantic pivots in the pre-training dataset and discover that their probability across layers first increases and then decreases.

• We construct a semantic pivot-aware pre-training dataset and effectively improve the model's cross-lingual ability. The model trained on our dataset achieves a 0.013 improvement over the original published checkpoint model and a 0.005 improvement over the multilingual dataset baseline.

2 Related Work

2.1 Multilingual Evaluation

Multilingual large language models (MLLMs) are trained on datasets comprising multiple languages, including mBERT (Devlin, 2018), mBART (Liu, 2020), XLM-R (Conneau et al.,
2019), mT5 (Xue et al., 2021), XGLM (Lin et al., 2022), Aya Expanse (Dang et al., 2024), and BLOOM (Le Scao et al., 2023). These MLLMs are evaluated on various tasks, including classification (Conneau et al., 2018; Edoardo M. Ponti, 2020; Lin et al., 2022), question answering (Artetxe et al., 2020), and natural language generation (Hasan et al., 2021). MEGA is the first multilingual benchmark for evaluating generative AI (Ahuja et al., 2023). Lai et al. (2023a) provide a holistic multilingual evaluation across 7 diverse NLP tasks. Qi et al. (2023) measure the cross-lingual consistency of models. For generative tasks, existing works primarily rely on string comparison as the evaluation metric, such as BLEU (Papineni et al., 2002) and chrF (Popović, 2015). To improve MLLMs' performance on these tasks, Wei et al. (2023) and Sanh et al. (2021) fine-tune MLLMs using samples from various NLP tasks, enhancing performance on unseen tasks. Shi et al. (2024) and Mu et al. (2024) perform continual pre-training to enhance MLLMs' ability on specific tasks. Huang et al. (2024) show that transforming the input language of a model effectively improves MLLMs' performance. Zhang et al. (2023) and Upadhayay and Behzadan (2023) train MLLMs to process non-English instructions by first interpreting them in English before generating a response in the target language.

2.2 Interpretability of Multilingual Large Language Models

Understanding how MLLMs handle multilingual capabilities and process inputs from different languages plays a crucial role in their interpretability (Zhao et al., 2024). Blevins et al. (2022) and Chen et al. (2024) explain patterns in the dynamic training process of MLLMs. Some works explore models through neuron-level interpretation: Zhang et al. (2024) find a core region of multilingual capability, and Tang et al. (2024) and Liu et al. (2024b) identify language-agnostic and language-specific neurons. Bhattacharya and Bojar (2023), investigating component-based interpretation, discovered distinct patterns of multilingual processing in the sub-layers of the model's feed-forward network. Parametric probing is widely used in the interpretability of multilingual models, including probing classifiers (Starace et al., 2023), cross-lingual fine-tuning (Vulić et al., 2023), and linear probes (de Varda and Marelli, 2024). Several studies analyze the language behavior of MLLMs during the intermediate stages of static reasoning (Bhattacharya and Bojar, 2023). Wendler et al. (2024) discover that the language transformation occurs in the intermediate layers when Llama-2 performs non-English tasks. They apply the logit lens (Nostalgebraist), a tool used to decode the model's intermediate-layer outputs: the output of an intermediate layer is passed directly through the model's LM head (normally applied at the final layer) to obtain logits, and a softmax over these logits gives the probability distribution at that layer. However, prior work does not focus on the role of pre-training datasets in establishing LLMs' cross-lingual capabilities.
3 Cross-Lingual Ability

Existing approaches to measuring cross-lingual ability include machine translation (Goyal et al., 2021; Zhang et al., 2020; Edoardo M. Ponti, 2020), question answering (Lewis et al., 2019), commonsense reasoning (Lin et al., 2021), and so on. However, these sentence-level evaluation methods rely on discrete decoding and string matching, making it difficult to discern subtle differences in cross-lingual ability between models. In contrast, more atomic and continuous word-level logits can not only faithfully reflect models' cross-lingual abilities but also discern tiny differences between them. Thus, we propose the Cross-Lingual Word Translation Dataset (CLWTD) and a corresponding metric to evaluate LLMs' cross-lingual ability.

3.1 Word-Level Cross-Lingual Translation Task

To evaluate the word-level cross-lingual capabilities of LLMs, we propose a Cross-Lingual Word Translation Dataset (CLWTD), which includes 2,000 parallel words in four languages. It offers strong scalability and enables more precise measurement of cross-lingual ability by mapping it onto a continuous scale, which allows accurate measurement even of weakly performing LLMs. We evaluate LLMs' cross-lingual ability on a word translation task using CLWTD; details are given in Appendix A. Given a source-language word, the task requires the MLLM to generate its translation in the target language. We compute the loss when the LLM generates the target-language word $y$ with $N$ tokens and convert it to the probability $P(y)$:
$$\mathrm{loss}(y) = -\frac{1}{N} \sum_{i=1}^{N} \log P(y_i), \qquad P(y) = e^{-\mathrm{loss}(y)},$$
where $P(y_i)$ denotes the probability of the $i$-th token $y_i$ of the target-language word. We observe a continuous decrease in loss as the model generates a sequence of tokens. To mitigate the potential bias introduced by long target-word token sequences, we introduce a continuous output-probability calibration method. We design nine distractors $w' \in D'$ based on the source-language word $w$ to evaluate and adjust the bias. Following the method for calculating the probability under the source-language word, $P(w)$, we calculate the probability of each distractor being translated into the target-language word, $P(w')$, by replacing only the source-language word with the distractor; this quantifies the bias introduced by sequentially generated tokens. The cross-lingual ability score is the probability under the source-language word minus the average probability under the distractors:
$$\mathrm{Score}(w) = P(w) - \frac{1}{|D'|} \sum_{w' \in D'} P(w').$$
Cross-lingual ability is defined as the mean of these scores over the dataset.

In the zero-shot setting, LLMs struggle to consistently generate outputs in the desired format and exhibit relatively limited performance. By adopting a few-shot strategy, LLMs can imitate the format of the shots, leading to more consistent output generation. To let the model adapt to the word translation task, each time we calculate the loss we use a five-shot prompt template (shown in Appendix B) and randomly select five words with their correct translations from the dataset as examples. To reduce the fluctuation introduced by the examples, we measure each word five times and calculate the average score.
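Given per-token log-probabilities of the target word, conditioned on the true source word and on each distractor, the metric reduces to a few lines. The sketch below is ours, with illustrative numbers, and assumes the log-probs have already been extracted from the model's logits.

```python
import math

def word_prob(token_logprobs):
    """P(y) = exp(-loss(y)) with loss(y) = -(1/N) * sum_i log P(y_i)."""
    loss = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(-loss)

def cross_lingual_score(source_logprobs, distractor_logprobs):
    """Score(w) = P(w) - mean over distractors w' of P(w')."""
    p_w = word_prob(source_logprobs)
    p_d = [word_prob(lp) for lp in distractor_logprobs]
    return p_w - sum(p_d) / len(p_d)

# Toy numbers: log-probs of the target word's tokens under the true source
# word and under nine distractors (values are purely illustrative).
src = [-0.2, -0.4]
dis = [[-3.1, -2.8]] * 9
print(cross_lingual_score(src, dis))  # ~0.69
```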
We present a scalable approach for constructing our dataset. The dataset encompasses four languages: English, Chinese, French, and Japanese. English and Chinese are chosen due to their dominance in LLM pre-training corpora; we additionally include French and Japanese due to their structural similarities to Chinese and English, respectively. First, we select English words from the Oxford Dictionary and translate them into the other three languages using Microsoft's translation tool(1). Then, we utilize DeepSeek(2) to generate distractors that exhibit a degree of similarity to the source word. Distractors that happen to include translations of the target word can make the word translation task non-unique; to minimize such ambiguity, we formulate multiple-choice questions and evaluate DeepSeek's generated options against the correct translations. Finally, we retain only the data for which the model's responses match the target-language word as our final dataset.

(1) https://www.bing.com/translator (2) https://www.deepseek.com/

3.2 Analysis

We evaluate the cross-lingual ability of several large language models: the BLOOM series with 560M, 1.7B, 3B, and 7.1B parameters (Le Scao et al., 2023); the Qwen2.5 series with 0.5B, 1.5B, 3B, and 7B parameters (Team, 2024); the Llama-3 model with 8B parameters (AI@Meta, 2024); the OLMo series with 7B parameters at step 400k (Groeneveld et al., 2024); and the Aya Expanse model with 8B parameters (Dang et al., 2024). The results of the cross-lingual evaluation are presented in Figure 1; the cross-lingual ability score matrices of the remaining models are presented in Appendix A.

As model size increases, cross-lingual ability continuously increases. The OLMo-7B model, trained on the Dolma dataset (a pre-training dataset predominantly in English with a minor share of documents in other languages), shows better cross-lingual ability on English-French. Meanwhile, the cross-lingual score matrix exhibits good symmetry, so cross-lingual ability scores can be compared both horizontally and vertically. Aya Expanse 8B, a model known for cross-lingual ability, also demonstrates the strongest capabilities on this dataset, further validating the effectiveness of our word-level cross-lingual translation task. To demonstrate the superiority of our method in evaluating cross-lingual ability, we compute scores for our metric and for chrF++ on FLORES across OLMo-7B checkpoints and present results for language pairs representing low, medium, and high levels of cross-lingual ability. Our metric exhibits a more gradual increase and a more visible change, particularly for language pairs with lower cross-lingual scores (e.g., Japanese-French), which implies that it can be used to compare models with limited cross-lingual abilities.

4 Semantic Pivot Behavior

Previous works (Wendler et al., 2024; Zhong et al., 2024) analyzing intermediate-layer outputs focus only on the language of the outputs, neglecting the crucial relationship between these outputs and the model's input and final output. To investigate the source of cross-lingual ability and the inference mechanisms of LLMs, we identify and distinguish two distinct behaviors that LLMs exhibit when performing word translation tasks.

4.1 Exploring Potential Modes of Inference

To facilitate the interpretation of the model's internal mechanisms, we apply the logit lens, which decodes the model's hidden states into token probability distributions at intermediate layers, and summarize the observations into two behaviors. To mitigate the concern that a token is a fluke of the model's internal representation, we mainly focus on the token with the highest output probability across all layers as the semantic pivot.
• Co-occurrence behavior: the LLM directly infers the output from the input (input -> output).

• Semantic pivot behavior: the LLM first obtains the semantic pivot in the intermediate layers from the input, and then infers the output from the semantic pivot (input -> semantic pivot -> output).

We present the logit lens of the OLMo model performing word translation tasks (e.g., English to Chinese), decoding the tokens into words, in Figure 2. When the OLMo model translates "biology" from English to Chinese, as shown in (a), the source word directly forward-passes to the target word between layers, demonstrating that the model employs co-occurrence behavior during translation. When the LLM translates "bronchitis" from English to Chinese, as shown in (b), the token corresponding to the word "lung" has the highest probability across all layers; the LLM uses this token as a transition at the fourth layer from the bottom, demonstrating that the LLM reasons by leveraging a semantic pivot.

[Figure 1: Different series of models' distributions of cross-lingual ability scores (left); the cross-lingual ability score matrix of the OLMo-7B-0424 model (middle); the evolution of our metric and the FLORES chrF++ score across OLMo-7B training checkpoints (right). Different shapes distinguish the two metrics and different colors distinguish language pairs; "avg" denotes the cross-lingual ability averaged across all language pairs.]

[Figure 2: Examples of the model's logit lens at the first output token across 31 layers. The left side illustrates the forward pass of the co-occurrence behavior; the right side depicts the semantic pivot behavior.]

4.2 Distinguishing the Two Behaviors

To determine which behavior a model adopts during inference, we establish a criterion based on the token with the highest probability across all layers: if this token is present in either the input or the output, we classify the model's inference as co-occurrence behavior; otherwise, we categorize it as semantic pivot behavior.
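Mechanically, the criterion reduces to one comparison once the logit-lens trace is available. The following sketch is our illustration with hypothetical inputs, not the paper's code.

```python
def classify_behavior(top_tokens, top_probs, input_tokens, output_tokens):
    """Classify one forward pass as co-occurrence vs. semantic pivot behavior.

    top_tokens[l] is the logit-lens top-1 token at layer l, and top_probs[l]
    its probability.  The criterion uses the single token with the highest
    probability across all layers.
    """
    best = max(range(len(top_probs)), key=lambda l: top_probs[l])
    candidate = top_tokens[best]
    if candidate in input_tokens or candidate in output_tokens:
        return "co-occurrence", candidate
    return "semantic pivot", candidate

# Hypothetical logit-lens trace for "bronchitis" -> Chinese: "lung" peaks in
# an intermediate layer but appears in neither the input nor the output.
print(classify_behavior(["bron", "lung", "炎"], [0.10, 0.30, 0.25],
                        {"bron", "chitis"}, {"支", "气", "管", "炎"}))
# -> ('semantic pivot', 'lung')
```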
To understand the underlying learning mechanisms that drive these two behaviors, we hypothesize that high-frequency co-occurring word pairs tend to elicit co-occurrence behavior, while for low-frequency co-occurring word pairs the model relies on semantic pivots. We therefore conduct a co-occurrence frequency analysis. We use WIMBD (Elazar et al., 2023), a set of tools for analyzing and revealing the content of large-scale datasets, to obtain the number of documents containing both the source word and the target word in the Dolma dataset. Since identical source and target strings preclude accurate co-occurrence counts, we exclude instances where the source and target words are the same string. We treat data satisfying co-occurrence behavior as positive examples and calculate the AUC score, a standard metric for binary classification tasks. Let
$$P = \{x \mid x \text{ exhibits semantic pivot behavior}\}, \qquad D = \{x \mid x \text{ exhibits co-occurrence behavior}\};$$
then
$$\mathrm{AUC} = \frac{\sum_{x \in D} \mathrm{rank}(x) - \frac{|D|(|D|+1)}{2}}{|D|\,|P|},$$
where $|P|$ and $|D|$ are the numbers of examples inferred via semantic pivot behavior and co-occurrence behavior, respectively, and $\mathrm{rank}(x)$ is the rank of example $x$ among all examples when sorted by the number of documents containing both its source and target words.

The results are shown in Table 1. The majority of AUC scores exceed 0.5, meaning that co-occurrence behavior is more prevalent for translation pairs whose source and target words frequently co-occur, while semantic pivot behavior is more characteristic of translation pairs with low-frequency co-occurrence. Co-occurrence frequency does not exhibit strong discriminative capability among the three non-English low-resource languages.

| Source \ Target | en | fr | zh | ja |
|---|---|---|---|---|
| en | – | 0.64 | 0.65 | 0.62 |
| fr | 0.63 | – | 0.58 | 0.54 |
| zh | 0.62 | 0.54 | – | 0.49 |
| ja | 0.60 | 0.52 | 0.51 | – |

Table 1: AUC score for each language pair using source-word/target-word co-occurrence frequency. Source languages are listed in the first column and target languages in the first row.
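This is the rank-sum (Mann–Whitney) form of the AUC. A minimal sketch follows, using document co-occurrence counts as the ranking score and average ranks for ties, a detail the formula above leaves implicit.

```python
def auc_from_counts(direct_counts, pivot_counts):
    """AUC for separating co-occurrence-behavior pairs (positives, D)
    from semantic-pivot-behavior pairs (P) by co-occurrence count.

    rank(c) is the 1-based rank of count c among all pairs (ascending).
    """
    all_counts = sorted(direct_counts + pivot_counts)

    def rank(c):
        lo = sum(1 for v in all_counts if v < c)
        eq = sum(1 for v in all_counts if v == c)
        return lo + (eq + 1) / 2  # average rank keeps ties well-defined

    d, p = len(direct_counts), len(pivot_counts)
    rank_sum = sum(rank(c) for c in direct_counts)
    return (rank_sum - d * (d + 1) / 2) / (d * p)

# Toy example: co-occurrence-behavior pairs tend to have higher counts.
print(auc_from_counts([120, 95, 300], [4, 18, 60]))  # 1.0 (perfect separation)
```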
4.3 Finding Semantic Pivots in the Dataset

To explore how models learn cross-lingual ability through semantic pivots, we conduct a token co-occurrence proportion analysis. We hypothesize that a semantic pivot has a high relative co-occurrence proportion with both the source word and the target word. To accurately quantify this proportion, we select the translation-task data where the source and target words have no overlapping tokens and which satisfy semantic pivot behavior. Next, for each word translation task, we independently sample 2,000 documents containing the source-language word, denoted $doc_s$, and 2,000 documents containing the target-language word, denoted $doc_t$, using the Infini-gram API (Liu et al., 2024a). We then count the frequency of tokens in these documents to estimate the co-occurrence proportion of every token with the source word and with the target word over the entire pre-training process. For some high-frequency tokens, widespread occurrence across many documents means that raw co-occurrence frequency fails to capture any distinctive association. To eliminate the impact of such background frequency on the search for semantic pivots, we subtract the token's background frequency $\mathrm{Fre}_x$ from its co-occurrence frequency with the source word, $\mathrm{Fre}_x^{s}$, and with the target word, $\mathrm{Fre}_x^{t}$. We take the smaller of the two adjusted relative frequencies as the ranking score:
$$F(x) = \min\big(\mathrm{Fre}_x^{s} - \mathrm{Fre}_x,\ \mathrm{Fre}_x^{t} - \mathrm{Fre}_x\big).$$
We select the top 50 tokens as candidate semantic pivots and prompt DeepSeek to filter out meaningless tokens, retaining the rest as the set of potential semantic pivots:
$$S_{\mathrm{pivot}} = \mathrm{Filter}_{\mathrm{LLM}}\big(\mathrm{Top50}_x\, F(x)\big).$$

To verify the plausibility of the semantic pivots we find, we further calculate the total probability of all semantic pivots in our set at each layer, as shown in Figure 3 (e.g., English to Chinese). The probability of the semantic pivot set first increases, peaks at the 28th layer, and subsequently decreases. This suggests that during translation, the model's intermediate-layer states first move, to some extent, toward the semantic pivots we identified, and then transform into the final outputs. Although the probability at each layer does not exceed 0.2, it significantly surpasses the average probability of a 50-token set, approximately $50/50253$, indicating that the set of semantic pivots we find carries a much higher proportion of probability mass than other tokens.

[Figure 3: The total probability of the semantic pivot set over the last eight layers. The x-axis is the OLMo-7B layer index; the y-axis is the total probability of all tokens in the semantic pivot set and of the target word's tokens.]

5 Application

Even though we have gained insight into how the model learns semantic pivots from the training data and applies them in word translation tasks, how to apply this to improving the cross-lingual ability score remains a challenge. Based on our findings on semantic pivot behavior, we propose a method for constructing a semantic pivot-aware pre-training dataset to enhance the model's cross-lingual ability.

5.1 How to Find Semantic Pivots in the Pre-training Dataset

Based on the findings in Section 4, semantic pivots tend to exhibit a high relative co-occurrence proportion with both the source and target words compared to other tokens. We therefore introduce a semantic pivot document purification method that filters out the two kinds of tokens with low relative co-occurrence frequency.

We construct an adjacency matrix over the pre-training data to count co-occurrences between tokens. We set a frequency threshold at 10% of the average probability of token occurrence within a single document to decide whether two tokens co-occur at high frequency. Then, for each token, we count the number of tokens that co-occur with it at high frequency. Any token with no high-frequency co-occurring counterpart is categorized as a low-frequency token with low relative co-occurrence frequency and is filtered out. We sort the remaining tokens by their number of high-frequency co-occurring tokens, regard the top 10% as high-frequency tokens with low relative co-occurrence frequency, and filter them out as well. The remaining tokens form our semantic pivot set. Finally, we compute the number of tokens belonging to the semantic pivot set within each document and select the top-ranked documents, those with a high semantic pivot proportion, to construct our pre-training dataset. We illustrate this process in Figure 4.
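A compact sketch of this purification pipeline, ours rather than the authors' implementation, with the co-occurrence threshold and tokenization left as stand-ins:

```python
from collections import Counter
from itertools import combinations

def build_pivot_set(docs, avg_token_prob, co_frac=0.10, high_freq_frac=0.10):
    """docs: list of token lists.  Returns the semantic pivot token set."""
    # Sparse adjacency matrix: count document-level token co-occurrences.
    co = Counter()
    for doc in docs:
        for a, b in combinations(set(doc), 2):
            co[frozenset((a, b))] += 1
    # An edge is "high-frequency" above a threshold; this stand-in scales
    # 10% of the average per-document token probability by the corpus size.
    thresh = co_frac * avg_token_prob * len(docs)
    degree = Counter()
    for pair, n in co.items():
        if n >= thresh:
            for tok in pair:
                degree[tok] += 1
    # Tokens with no high-frequency neighbor never enter `degree` (filtered);
    # then drop the top 10% of tokens by degree as overly generic.
    ranked = [t for t, _ in degree.most_common()]
    cutoff = int(len(ranked) * high_freq_frac)
    return set(ranked[cutoff:])

def rank_documents(docs, pivots):
    # Score each document by the number of its tokens in the pivot set.
    scores = [sum(1 for tok in doc if tok in pivots) for doc in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])
```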
5.2 Experiments

To validate the effectiveness of our proposed method for enhancing cross-lingual ability, we construct two baseline datasets: an Original dataset and a Multilingual dataset. We train and evaluate the model on the two baseline datasets and on our dataset, constructed by the method in Section 5.1.

| Setting | fr | zh | ja | docs |
|---|---|---|---|---|
| Original | 0.1% | 0.3% | 0.2% | 34k |
| Multilingual | 1.7% | 4.3% | 2.5% | 502k |
| Ours | 1.0% | 6.0% | 1.5% | 450k |

Table 2: The language distribution of the three datasets and the number of documents containing one of the three non-English languages.

For the Original dataset, we duplicate the pre-training data from the corresponding checkpoints of the original model. For the Multilingual dataset, we extract training documents in the three languages, French, Chinese, and Japanese, from the pre-training data of the 15 checkpoints following the corresponding checkpoint; the remaining documents are padded with English documents from the original checkpoint. For our dataset, we extract training documents in French, Chinese, and Japanese from the pre-training data of 25 checkpoints as multilingual candidate documents. We construct a cross-lingual adjacency matrix on a subset of the multilingual documents and employ our method to rank the candidates, selecting the top-ranked roughly 450,000 documents so that the aggregate proportion of low-resource languages matches that of the Multilingual dataset. For more accurate language identification and filtering, we divide each document into sliding chunks with a chunk size of 256 and a stride of 128, use fastText (Joulin et al., 2016) to identify the language of each chunk, and take the result with the highest probability as the language of that chunk (sketched below). The language distributions of the three datasets are presented in Table 2.

Currently, there is a scarcity of publicly available multilingual large language models with open-source checkpoint parameters and pre-training data. We choose the OLMo model at the 1B size, trained on the open-source Dolma dataset, with the training data of each step published. To better replicate the original pre-training process and ensure a fair comparison, we use the training code and parameters provided by AllenAI (Groeneveld et al., 2024). We separately use the three datasets to continue training for 1,000 steps from the 605th checkpoint. This checkpoint lies late in the training process, where the model's performance remains relatively stable.
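The chunked language-identification step can be sketched as follows; `lid.176.bin` is fastText's public language-ID model, the window and stride match the paper, and treating the chunk size as a character count is our assumption.

```python
import fasttext  # pip install fasttext

lid = fasttext.load_model("lid.176.bin")  # fastText language-ID model

def chunk_language(text, window=256, stride=128):
    # Slide fixed-size chunks over the document and language-ID each one;
    # the label with the highest confidence across chunks wins.
    best_label, best_prob = None, 0.0
    for start in range(0, max(1, len(text) - window + 1), stride):
        chunk = text[start:start + window].replace("\n", " ")
        labels, probs = lid.predict(chunk)
        if probs[0] > best_prob:
            best_label, best_prob = labels[0], probs[0]
    return best_label.replace("__label__", ""), best_prob
```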
[Figure 4: The process of constructing a semantic pivot-aware pre-training dataset used to improve the model's cross-lingual ability.]

| Setting | Source \ Target | en | fr | zh | ja |
|---|---|---|---|---|---|
| Original dataset (avg: 0.1420) | en | – | 0.31 | 0.13 | 0.18 |
| | fr | 0.34 | – | 0.08 | 0.11 |
| | zh | 0.09 | 0.05 | – | 0.13 |
| | ja | 0.11 | 0.06 | 0.12 | – |
| Multilingual dataset (avg: 0.1506) | en | – | 0.33 | 0.13 | 0.19 |
| | fr | 0.33 | – | 0.09 | 0.11 |
| | zh | 0.09 | 0.06 | – | 0.15 |
| | ja | 0.12 | 0.08 | 0.14 | – |
| Our dataset (avg: 0.1552) | en | – | 0.32 | 0.13 | 0.20 |
| | fr | 0.36 | – | 0.09 | 0.12 |
| | zh | 0.10 | 0.06 | – | 0.15 |
| | ja | 0.14 | 0.08 | 0.13 | – |

Table 3: The cross-lingual ability score matrices of the model after continued training on the three datasets described in Section 5.2.

5.3 Analysis

We evaluate the models' cross-lingual ability after training on the three datasets; the resulting cross-lingual ability matrices are shown in Table 3. The Multilingual dataset contains more multilingual documents than the original training data, and the model trained on it exhibits stronger cross-lingual ability across almost all language pairs than the original model, meaning that more multilingual documents effectively enhance cross-lingual ability. The model trained on our dataset achieves a 0.013 improvement over the Original-dataset baseline and a 0.005 improvement over the Multilingual-dataset baseline, even though the multilingual proportion of our dataset is aligned with that of the Multilingual dataset. These results indicate that our method of filtering documents with a high proportion of semantic pivots and using them to construct pre-training data effectively and comprehensively improves cross-lingual ability.

6 Conclusions

In this paper, to quantify the cross-lingual ability of LLMs accurately, we propose a Word-Level Cross-Lingual Translation Task and evaluate LLMs on it. To find how LLMs learn cross-lingual ability, we identify and distinguish LLMs' two behaviors, attribute them to the co-occurrence frequency of strings, and quantify this with the AUC score. We use a token co-occurrence proportion analysis to identify semantic pivots in the pre-training dataset. Finally, to improve the model's cross-lingual ability based on our findings, we apply semantic pivot document purification to reconstruct the pre-training dataset, rather than incorporating domain-specific data. Our experiments show an effective improvement in the model's cross-lingual ability over the two baselines.

Limitations

Due to the limited availability of open-source large language models with public pre-training data checkpoints, training parameters, and pre-training code, we validate our method only on the OLMo-1B model. We were unable to conduct our experiments on larger language models with stronger cross-lingual ability.

References

Kabir Ahuja, Harshita Diddee, Rishav Hada, Millicent Ochieng, Krithika Ramesh, Prachi Jain, Akshay Nambi, Tanuja Ganu, Sameer Segal, Mohamed Ahmed, Kalika Bali, and Sunayana Sitaram. 2023. MEGA: Multilingual evaluation of generative AI. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4232–4267, Singapore. Association for Computational Linguistics.

AI@Meta. 2024. Llama 3 model card.

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association
6 Conclusions

In this paper, to quantify the cross-lingual ability of LLMs accurately, we propose a Word-Level Cross-lingual Ability Task and evaluate LLMs on it. To find how LLMs learn cross-lingual ability, we identify and distinguish two behaviors of LLMs. We attribute the two behaviors to the co-occurrence frequency of strings and calculate the AUC score. We use token co-occurrence proportion analysis to identify semantic pivots from the pre-training dataset. Finally, to improve the model's cross-lingual ability based on our findings, we perform semantic pivot document purification to reconstruct the pre-training dataset rather than incorporating domain-specific data. Our experiments effectively improve the model's cross-lingual ability compared to the two baselines.

Limitations

Due to the limited availability of open-source large language models with public checkpoints of pre-training data, training parameters, and pre-training code, we only validate our method on the OLMo-1B model. We are unable to conduct our experiments on larger language models with stronger cross-lingual ability.

References

Kabir Ahuja, Harshita Diddee, Rishav Hada, Millicent Ochieng, Krithika Ramesh, Prachi Jain, Akshay Nambi, Tanuja Ganu, Sameer Segal, Mohamed Ahmed, Kalika Bali, and Sunayana Sitaram. 2023. MEGA: Multilingual evaluation of generative AI. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4232–4267, Singapore. Association for Computational Linguistics.

AI@Meta. 2024. Llama 3 model card.

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.

Sunit Bhattacharya and Ondřej Bojar. 2023. Unveiling multilinguality in transformer models: Exploring language specificity in feed-forward networks. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 120–126, Singapore. Association for Computational Linguistics.

Terra Blevins, Hila Gonen, and Luke Zettlemoyer. 2022. Analyzing the mono- and cross-lingual pretraining dynamics of multilingual language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3575–3590, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jiawei Chen, Wentao Chen, Jing Su, Jingjing Xu, Hongyu Lin, Mengjie Ren, Yaojie Lu, Xianpei Han, and Le Sun. 2024. The rise and down of babel tower: Investigating the evolution process of multilingual code large language model. arXiv preprint arXiv:2412.07298.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.

John Dang, Shivalika Singh, Daniel D'souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, Sandra Kublik, Meor Amer, Viraat Aryabumi, Jon Ander Campos, Yi-Chern Tan, Tom Kocmi, Florian Strub, Nathan Grinsztajn, Yannis Flet-Berliac, Acyr Locatelli, Hangyu Lin, Dwarak Talupuru, Bharat Venkitesh, David Cairuz, Bowen Yang, Tim Chung, Wei-Yin Ko, Sylvie Shang Shi, Amir Shukayev, Sammie Bae, Aleksandra Piktus, Roman Castagné, Felipe Cruz-Salinas, Eddie Kim, Lucas Crawhall-Stein, Adrien Morisot, Sudip Roy, Phil Blunsom, Ivan Zhang, Aidan Gomez, Nick Frosst, Marzieh Fadaee, Beyza Ermis, Ahmet Üstün, and Sara Hooker. 2024. Aya expanse: Combining research breakthroughs for a new multilingual frontier. Preprint, arXiv:2412.04261.

Andrea Gregor de Varda and Marco Marelli. 2024. The emergence of semantic units in massively multilingual models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 15910–15921, Torino, Italia. ELRA and ICCL.

Jacob Devlin. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Edoardo M. Ponti, Goran Glavaš, and Olga Majewska. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning. arXiv preprint.

Yanai Elazar, Akshita Bhagia, Ian Magnusson, Abhilasha Ravichander, Dustin Schwenk, Alane Suhr, Pete Walsh, Dirk Groeneveld, Luca Soldaini, Sameer Singh, et al. 2023. What's in my big data? arXiv preprint arXiv:2310.20707.

Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2021. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation.

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, A. Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Daniel Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, and Hanna Hajishirzi. 2024. OLMo: Accelerating the science of language models. arXiv preprint.

Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XL-Sum: Large-scale multilingual abstractive summarization for 44 languages. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics.

Yue Huang, Chenrui Fan, Yuan Li, Siyuan Wu, Tianyi Zhou, Xiangliang Zhang, and Lichao Sun. 2024. 1+1>2: Can large language models serve as cross-lingual knowledge aggregators? arXiv preprint arXiv:2406.14721.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.

Viet Dac Lai, Nghia Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023a. ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13171–13189, Singapore. Association for Computational Linguistics.

Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023b. ChatGPT beyond English: Towards a comprehensive evaluation of large language models in multilingual learning. arXiv preprint arXiv:2304.05613.

Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2023. BLOOM: A 176B-parameter open-access multilingual language model.

Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475.

Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9019–9052, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Veselin Stoyanov, and Xian Li. 2021. Few-shot learning with multilingual language models. CoRR, abs/2112.10668.

Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, and Hannaneh Hajishirzi. 2024a. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. arXiv preprint arXiv:2401.17377.

Weize Liu, Yinlong Xu, Hongxia Xu, Jintai Chen, Xuming Hu, and Jian Wu. 2024b. Unraveling babel: Exploring multilingual activation patterns of LLMs and their applications. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11855–11881.

Y. Liu. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.

Yongyu Mu, Peinan Feng, Zhiquan Cao, Yuzhang Wu, Bei Li, Chenglong Wang, Tong Xiao, Kai Song, Tongran Liu, Chunliang Zhang, et al. 2024. Revealing the parallel multilingual learning within large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6976–6997.

Nostalgebraist. Interpreting GPT: the logit lens.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Maja Popović. 2015. chrF: Character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395.

Jirui Qi, Raquel Fernández, and Arianna Bisazza. 2023. Cross-lingual consistency of factual knowledge in multilingual language models. arXiv preprint arXiv:2310.10378.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.

Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, and Hao Wang. 2024. Continual learning of large language models: A comprehensive survey. arXiv preprint arXiv:2404.16789.

Giulio Starace, Konstantinos Papakostas, Rochelle Choenni, Apostolos Panagiotopoulos, Matteo Rosati, Alina Leidinger, and Ekaterina Shutova. 2023. Probing LLMs for joint encoding of linguistic categories. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7158–7179, Singapore. Association for Computational Linguistics.

Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, and Ji-Rong Wen. 2024. Language-specific neurons: The key to multilingual capabilities in large language models. arXiv preprint arXiv:2402.16438.

Qwen Team. 2024. Qwen2.5: A party of foundation models.

Bibek Upadhayay and Vahid Behzadan. 2023. TaCo: Enhancing cross-lingual transfer for low-resource languages in LLMs through translation-assisted chain-of-thought processes. arXiv preprint arXiv:2311.10797.

Ivan Vulić, Goran Glavaš, Fangyu Liu, Nigel Collier, Edoardo Maria Ponti, and Anna Korhonen. 2023. Probing cross-lingual lexical knowledge from multilingual sentence encoders. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2089–2105, Dubrovnik, Croatia. Association for Computational Linguistics.

Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, et al. 2023. PolyLM: An open source polyglot large language model. arXiv preprint arXiv:2307.06018.

Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. 2024. Do llamas work in English? On the latent language of multilingual transformers. arXiv preprint arXiv:2402.10588.

Haoran Xu, Young Jin Kim, Amr Sharaf, and Hany Hassan Awadalla. 2023. A paradigm shift in machine translation: Boosting translation performance of large language models. arXiv preprint arXiv:2309.11674.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics.

Wen Yang, Chong Li, Jiajun Zhang, and Chengqing Zong. 2023. BigTranslate: Augmenting large language models with multilingual translation capability over 100 languages. arXiv preprint arXiv:2305.18098.

Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628–1639, Online. Association for Computational Linguistics.

Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, and Francesco Barbieri. 2023. PLUG: Leveraging pivot language in cross-lingual instruction tuning. arXiv preprint arXiv:2311.08711.

Zhihao Zhang, Jun Zhao, Qi Zhang, Tao Gui, and Xuanjing Huang. 2024. Unveiling linguistic regions in large language models. arXiv preprint arXiv:2402.14700.

Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, and Lidong Bing. 2024. How do large language models handle multilingualism? arXiv preprint arXiv:2402.18815.

Chengzhi Zhong, Fei Cheng, Qianying Liu, Junfeng Jiang, Zhen Wan, Chenhui Chu, Yugo Murawaki, and Sadao Kurohashi. 2024. Beyond English-centric LLMs: What language do multilingual language models think in? arXiv preprint arXiv:2408.10811.

A Dataset Composition

In our work, we evaluate LLMs' cross-lingual ability on a word translation task using the CLWTD. We analyze the part-of-speech distribution of all words in the CLWTD, shown in Table 4.

Table 4: The part-of-speech distribution of words in the CLWTD used in our research.

Word type    n      v     adj    total
CLWTD      1423    173    404    2000

B Models' Cross-Lingual Ability

We provide details of the models' cross-lingual score matrices that we evaluate in Figure 1, shown in Figure 5, including the BLOOM series with 560M, 1.7B, 3B, and 7.1B parameters, the Qwen2.5 series with 0.5B, 1.5B, 3B, and 7B parameters, the Llama-3 model with 8B parameters, the OLMo series with 7B parameters at step 400k, and the Aya Expanse model with 8B parameters.

To compare our method with a sentence-level machine translation task, we compute scores for our proposed method and chrF++ on the FLORES dataset across the models' checkpoints. We present results for language pairs representing low, medium, and high levels of cross-lingual transfer ability in Figure 6.
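The paper does not name its chrF++ implementation; one common choice is sacreBLEU's CHRF metric with word bigrams, sketched here under that assumption (the example sentences are placeholders, not FLORES data):

```python
from sacrebleu.metrics import CHRF

# chrF++ corresponds to chrF with word n-grams up to order 2
chrf_pp = CHRF(word_order=2)

hypotheses = ["le chat est assis sur le tapis"]    # model translations
references = [["le chat est assis sur le tapis"]]  # one reference stream
print(chrf_pp.corpus_score(hypotheses, references).score)
```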
[Figure 5: The specific distribution of the models' cross-lingual ability score matrices, with one heatmap per model (bloom-7b1, bloom-3b, bloom-1b7, llama3, aya-expanse-8b, OLMo-7B-ckpt400, Qwen2.5-7B, Qwen2.5-3B, Qwen2.5-1.5B, Qwen2.5-0.5B). The title describes the model we evaluate. The vertical axis represents the source language; the horizontal axis represents the target language.]

C Prompt Design

In Section 3.1, we prompt DeepSeek to generate distractors. We show the details of the prompt in Figure 7.

We design a five-shot prompt template to calculate the model's cross-lingual ability. We show an example of a prompt requiring the model to translate the word "beef" from English to Chinese in Figure 8.

To explore how models learn cross-lingual abilities through semantic pivots and discover the potential semantic pivots, we prompt DeepSeek to filter out meaningless tokens. The prompt is shown in Figure 9.

D Result of Finding Semantic Pivots

In Section 4.3, we conduct a token co-occurrence proportion analysis to explore how models learn cross-lingual abilities through semantic pivots. We show the total probability of the semantic pivot set in the last eight layers for a subset of the language pairs in Figure 10.

[Figure 6 (panels: "Evolution of Cross-Lingual Performance Across OLMo-7B's Checkpoints" and "Evolution of Cross-Lingual Performance Across BLOOM-7b1's Checkpoints"): The changes in our metric and FLORES scores during the training process. The results for OLMo-7B are shown on the left and those for bloom-7b1 on the right. Different shapes are used to distinguish the two tasks, and different colors to distinguish the language pairs. "avg" represents the cross-lingual ability averaged across all language pairs.]

### Instruction:
You are a question designer for word translation tasks. Based on the provided word, design nine distractors, ensuring the final translation question has a unique correct answer.
Requirements for distractors:
1. The distractors must not be valid translations of the given word in any context.
2. The distractors must not repeat.
3. Ensure the distractors are linguistically correct.
4. Each distractor must be a single word.
### Steps to follow:
1. Under "answer", list all possible translations of the word.
2. Under "options", list the nine distractors. Make sure that distractors never match any translation of the given word.
### Input:
{language[lang1]}: {word}
{language[lang2]} answer: {answer}
{language[lang2]} options:
### Response:

Figure 7: The prompt we use to generate the distractors.

Please translate words from English to Chinese.
English: awareness Chinese: 意识
English: algorithm Chinese: 算法
English: businesslike Chinese: 务实
English: chicory Chinese: 菊苣
English: impending Chinese: 即将
English: beef Chinese:

Figure 8: An example of the five-shot prompt used to calculate the model's cross-lingual ability.

### Positioning:
You are a data analyst. Please determine which elements in the list are related to the target word and provide a detailed analysis process.
### Requirements:
1. Analyze all elements in the given list and determine whether they have a strong correlation with the target word and are not meaningless characters.
2. The final answer should be a list containing all elements related to the target word.
3. The output format of the final answer should be: Answer: [...]
### Example:
Target word: apple - 苹果
List: [" fruit", "(", "banana", "appl", "red"]
Response:
Analysis:
1. " fruit" – contains "apple".
2. "(" – unrelated to "apple".
3. "banana" – a type of fruit, like "apple".
4. "appl" – a prefix of "apple".
5. "red" – describes the color of an apple.
Answer: [" fruit", "banana", "appl", "red"]
### Target Word: {word1} - {word2}
### List: {list}
### Response:

Figure 9: The prompt we design to filter out words irrelevant to the source word and the target word.

[Figure 10 (panels: Pivot Probability for English to French, English to Japanese, English to Chinese, French to English, French to Chinese, French to Japanese, Chinese to English, Chinese to French, and Japanese to English; legend: semantic pivot set, target word): The total probability of the semantic pivot set in the last eight layers. The x-axis represents the OLMo-7B layer index (24–31), and the y-axis indicates the total probability of all tokens in the semantic pivot set. Each title shows the source language and the target language.]
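For reference, the five-shot template of Figure 8 is straightforward to assemble programmatically; a minimal sketch with illustrative names, not the authors' code:

```python
def build_five_shot_prompt(examples, query_word,
                           src_lang="English", tgt_lang="Chinese"):
    """Assemble a word-translation prompt in the style of Figure 8.

    `examples` is a list of five (source_word, translation) pairs.
    """
    lines = [f"Please translate words from {src_lang} to {tgt_lang}."]
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src} {tgt_lang}: {tgt}")
    # Leave the final translation slot empty for the model to fill in.
    lines.append(f"{src_lang}: {query_word} {tgt_lang}:")
    return "\n".join(lines)
```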
arXiv:2505.16392v1 [cs.CL] 22 May 2025

Resource for Error Analysis in Text Simplification: New Taxonomy and Test Collection

Benjamin Vendeville (Université de Bretagne Occidentale, Lab-STICC (UMR CNRS 6285), Brest, France; benjamin.vendeville@univ-brest.fr)
Liana Ermakova (Université de Bretagne Occidentale, HCTI, Brest, France; liana.ermakova@univ-brest.fr)
Pierre De Loor (ENIB, Lab-STICC (UMR CNRS 6285), Brest, France; deloor@enib.fr)

Abstract

The general public often encounters complex texts but does not have the time or expertise to fully understand them, leading to the spread of misinformation. Automatic Text Simplification (ATS) helps make information more accessible, but its evaluation methods have not kept up with advances in text generation, especially with Large Language Models (LLMs). In particular, recent studies have shown that current ATS metrics do not correlate with the presence of errors. Manual inspections have further revealed a variety of errors, underscoring the need for a more nuanced evaluation framework, which is currently lacking. This resource paper addresses this gap by introducing a test collection for detecting and classifying errors in simplified texts. First, we propose a taxonomy of errors, with a formal focus on information distortion. Next, we introduce a parallel dataset of automatically simplified scientific texts. This dataset has been human-annotated with labels based on our proposed taxonomy. Finally, we analyze the quality of the dataset, and we study the performance of existing models to detect and classify errors from that taxonomy. These contributions give researchers the tools to better evaluate errors in ATS, develop more reliable models, and ultimately improve the quality of automatically simplified texts.

CCS Concepts: • Information systems → Specialized information retrieval; Information extraction; Clustering and classification; Test collections.

Keywords: Automatic Text Simplification; Error Classification; Hallucinations; Test Collection; Natural Language Processing; Large Language Models

ACM Reference Format: Benjamin Vendeville, Liana Ermakova, and Pierre De Loor. 2025. Resource for Error Analysis in Text Simplification: New Taxonomy and Test Collection. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '25), July 13–18, 2025, Padua, Italy. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3726302.3730304

This work is licensed under a Creative Commons Attribution 4.0 International License. SIGIR '25, Padua, Italy. © 2025 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-1592-1/2025/07. https://doi.org/10.1145/3726302.3730304

1 Introduction

The internet has made it easier for everyone to share and access information. This has had a big impact both in very general areas and in very technical fields such as science. While this democratization has increased availability, it has also highlighted a critical problem: complex language and the lack of background knowledge are challenging obstacles for the general public to understand scientific documents. To solve this, the field of Automatic Text Simplification (ATS) aims to develop and investigate methods to facilitate comprehension of complex documents. Though not new, the field has grown rapidly with advances in natural language processing and large language models (LLMs) [16]. Despite the impressive capabilities of LLMs in text generation, they often introduce various errors [19].
These errors can include fluency issues (e.g., incorrect syntax, punctuation, or grammar), hallucinations (e.g., generating unsupported or false information) [8], and simplification-specific mistakes such as misrepresenting claims or overgeneralizing concepts [6, 7].

Current efforts in evaluating errors in open-ended generation primarily focus on truthfulness with respect to the source document or world knowledge [8, 11]. While effective for identifying certain issues, this approach fails to capture the full range of possible errors [6, 7]. Error detection requires a task-specific approach [10], which has yet to be systematically explored in ATS research. The only related work [9] proposes an edit-level evaluation, where some edits (e.g., repetitions, grammar errors, contradictions) are labeled as errors. However, this typology omits certain types of erroneous generations, loosely defines others, and does not align with commonly used terms such as faithfulness or factual hallucination. Besides, the evaluation relies on aligning spans between source and simplified texts, a time-consuming process that becomes even more complex when moving from sentence-level to document-level simplification. These limitations point to the need for a comprehensive taxonomy and annotation framework specifically designed for ATS, which, to our knowledge, has not yet been established.

Problem. The lack of a comprehensive framework for defining, detecting, and evaluating error detection methods in ATS limits progress; new resources are needed.

Contributions. In this resource paper, we make the following contributions to support researchers in building better error detection methods for ATS:

Taxonomy: A new taxonomy of errors in ATS with a focus on information distortion.
Test Collection: A test collection based on our taxonomy, accompanied by a detailed annotation scheme.
Showcase: We showcase error detection methods for identifying and classifying errors using our test collection.

[Figure 1: Structure of our typology of errors in text simplification.
Error
├─ A. Fluency: A1. Random generation; A2. Answer syntax incorrect; A3. Answer contradicts itself; A4. Simple punctuation / grammar errors; A5. Redundancy
├─ B. Alignment: B1. Format misalignment; B2. Prompt misalignment
├─ C. Information: C1. Factuality hallucination; C2. Faithfulness hallucination; C3. Topic shift
└─ D. Simplification: D1. Oversimplification of concepts (D1.1 Overgeneralization of Concepts; D1.2 Overspecification of Concepts); D2. Content relevance (D2.1. Loss of Informative Content; D2.2. Out-of-Scope Generation)]

The current state of the art in ATS lacks a formal definition of errors and evaluation tools. To the best of our knowledge, this work provides the first comprehensive taxonomy of ATS errors. In particular, we put an emphasis on formally defining errors related to this task. These resources address key challenges in the field, making it now possible for researchers to more systematically categorize ATS errors and develop more effective tools to detect and avoid them. We will use this test collection to introduce a task on error detection at the SimpleText shared task at CLEF 2025 on automatic text simplification [3].

The full taxonomy with examples and proper definitions is provided in the Appendix. This annotation scheme is designed to be used in a variety of simplification strategies by users with or without expertise in text simplification.
The annotated dataset is derived from sentences automatically simplified using various models, extracted from system runs submitted to the SimpleText track at CLEF 2024 [5, 7]. There is a range of evaluation measures for text simplification against reference simplifications, yet none exhibits very high agreement with human labels of simplification quality [1]. Moreover, earlier analysis revealed a range of information distortion issues in these runs, which were ignored by standard ATS evaluation measures [4, 5]. Other papers have observed similar issues in the output of generative text simplification models [12].

Our resources aim to set a new standard for detecting and evaluating errors in ATS. As IR advances and its applications expand, the demand for tools to manage complex information, especially scientific data, will only increase. We expect ATS research to grow, with our work serving as a foundation for future developments.

In Section 2, we introduce our taxonomy and the methodology used to develop it, using a formal, fact-based approach to identifying errors in ATS. Section 3 describes our data annotation process based on this taxonomy and presents the results. In Section 4, we examine how existing error detection measures perform on our annotated data. We conclude with the implications of these results and a discussion of potential directions for future research.

2 Developing the Error Taxonomy

In text simplification, the goal is to make text more understandable to a target audience, but this goal may mean different things depending on the audience [17], including different characteristics (syntactic, lexical, ...) or different levels (5th grade, 3rd grade, ...). We define here errors that appear regardless of the specific simplification goal, as well as errors that result from misunderstanding the simplification goal.

2.1 Structure of the Taxonomy

The taxonomy is shown in Figure 1. It is a tree composed of the four greater error categories, each with its specific errors. We define these four greater types of errors based on where the error comes from and what made the model make the error:

• A. Fluency. Is the answer provided in a correct form that a fluent speaker would use?
• B. Alignment. Is the format of the answer correct?
• C. Information. Is the information provided accurate and relevant to the input?
• D. Simplification. Does the response focus on simplification?

In the following subsections, we discuss each main error category and its subtypes in detail.

2.2 Fluency Errors

Errors in this category relate to the capacity of the model to generate human-looking text. The evaluation of the fluency of text is a well-researched area [15]. In particular, we identify five types of errors that appear in ATS:

Random generation: At least part of the answer is just a random string of words or numbers.
Syntax error: The syntax is incorrect and doesn't make sense.
Contradiction: The answer contradicts itself.
Simple punctuation / grammar errors: The answer has punctuation errors that don't hinder comprehension.
Redundancy: Repeated sentences, parts of sentences, or groups of sentences that do not need to be repeated.

More details about these errors, including examples, can be found in Appendix A.1 Fluency.

2.3 Alignment Errors

Alignment errors occur when the model fails to follow structured prompts or generate responses in the expected format. LLMs learn to interpret tags (e.g., <query>, <answer>), handle one-shot prompts, and maintain proper formatting (e.g., brackets, quotes). Misalignment can cause parsing issues or lead to cascading errors. These errors typically fall into two categories:

Format Misalignment: Missing tags or symbols used for formatting, such as JSON symbols (", }) or starting tags like <answer>.
Prompt Misalignment: The model generates unnecessary additional content such as another question, a different source text, or an unrelated answer.

More details about these errors, including examples, can be found in Appendix A.2 Alignment.

2.4 Information Errors

In text simplification, some errors can arise from the model's treatment of the information in the source text. In particular, we are interested in two questions:

Topicality: Is the answer on topic?
Truthfulness: Is the answer true?

To answer these questions systematically, we adopt an approach based on sets of facts (defined as tuples of {subject, relation, target}) as the unit of information. Let $F_{src}$ be the set of facts in the source text and $F_{gen}$ those in the generated text. We have:

$F_{src} = \{ f_s \mid f_s = (subj_s, rel_s, obj_s) \}$
$F_{gen} = \{ f_g \mid f_g = (subj_g, rel_g, obj_g) \}$

where each triplet $(subj, rel, obj)$ consists of a subject, a relation, and an object identified in the source $S$ or generated text $G$. Based on this definition, we represent the sets of facts in Figure 2. Figure 2 illustrates the relationship between these sets (source text and simplification) and the two key questions we explore: topicality and truthfulness. In the following subsections, we define and investigate the related errors.

[Figure 2: Diagram illustrating the sets and intersections involved in defining information errors: true facts, false facts, facts contrary to the source text, and topical information, overlaid on the source text and simplification fact sets.]

2.4.1 Topicality. Simplification models have a strong tendency to generate text that contains off-topic facts [6]. We define on-topic facts as follows. Let $R_S$ denote the set of all subjects, relations, and objects related to the source text topic:

$R_S = \{ x \mid x \text{ is a subject, relation, or object related to the topic} \}$

We define $F_{topic}$ as the set of all facts $(subj, rel, obj)$ such that $subj, rel, obj \in R_S$:

$F_{topic} = \{ (subj, rel, obj) \mid subj, rel, obj \in R_S \}$

From this, we define the following error:

Topic Shift: Generated facts that are off-topic. Formally, we define the set of topic shift errors as:

$Error_{topic} = F_{gen} \setminus F_{topic}$

2.4.2 Truthfulness. To ensure that the simplified document remains accurate, we examine its truthfulness in relation to two distinct sets of facts:

$F_{true}$: the set of true facts
$F_{false}$: the set of false facts

We assume the source document is entirely truthful. A false fact in the simplified text can then arise in two ways:

Contradictory to the source: These facts directly contradict information in $F_{src}$. Formally, we define the set of contradictory facts as:

$F_{cont} = \{ f_{cont} \mid \exists f_{src} \in F_{src},\ f_{src} \text{ contradicts } f_{cont} \}$

Contradictory to general knowledge: These are hallucinated facts that conflict with widely accepted truths. For simplicity, off-topic facts $f \notin F_{topic}$ are treated as topicality errors rather than truthfulness errors, even if they are false.

We then define two errors:
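Read literally, the definitions of Section 2.4 (including the two hallucination errors defined just below) are plain set algebra over fact triples. A minimal Python sketch, assuming the fact sets have already been extracted (e.g., by an OpenIE-style system, which the paper does not prescribe); all names are illustrative:

```python
# Facts are hashable (subject, relation, object) triples, per Section 2.4.
def information_errors(f_gen: set, f_topic: set, f_cont: set, f_false: set):
    """Compute the three information-error sets of Section 2.4.

    f_gen:   facts in the generated simplification
    f_topic: on-topic facts
    f_cont:  facts contradicting the source text
    f_false: false facts
    """
    topic_shift = f_gen - f_topic                      # Error_topic
    faithfulness = f_gen & f_topic & f_cont            # Error_faithfulness
    factuality = (f_gen & f_topic & f_false) - f_cont  # Error_factuality
    return topic_shift, faithfulness, factuality
```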
Faithfulness hallucinations: Generated facts that contradict the source document. Formally, we define the set of faithfulness hallucination errors as:

$Error_{faithfulness} = F_{gen} \cap F_{topic} \cap F_{cont}$

Factuality hallucinations: Generated facts that contradict general knowledge. Formally, we define the set of factuality hallucination errors as:

$Error_{factuality} = (F_{gen} \cap F_{topic} \cap F_{false}) \setminus F_{cont}$

More details about these errors, including examples, can be found in Appendix A.3 Information.

2.5 Simplification Errors

In text simplification, the goal is to remove unimportant information and add important details (e.g., adding definitions or context) while reformulating the content for simplicity. However, errors can arise when these operations are performed incorrectly. In particular, we are interested in two questions:

• Does the answer contain important information?
• Are the performed reformulations correct?

As with information errors, we take a formal approach to studying these questions.

[Figure 3: Diagram illustrating the sets and intersections involved in defining simplification errors: loss of informative content, summarization, clarification, potential clarification, and out-of-scope generation, relative to the source text, the simplification, and the important information.]

2.5.1 Importance. Important facts are those that are essential for understanding the central point or purpose of the source document. What we classify as important, however, depends on the goal of the simplification. For instance:

• If the goal is to simplify a scientific document for language learners, certain technical details might not be important and could be omitted.
• If the goal is to help a non-expert understand the document's content, these same technical details may become crucial.

This highlights that the importance of information varies based on the target audience and purpose. As with information errors, we define $F_{src}$ as the set of facts in the source text and $F_{gen}$ as the set of facts in the generated text. We then consider the set $F_{imp}$ of facts defined as important by the simplification goal, and make the following assumptions:

• Every important piece of information is true: $F_{imp} \subset F_{true}$
• Every important piece of information is on topic: $F_{imp} \subset F_{topic}$

Figure 3 illustrates the relationship between the source text and simplification sets and importance. We call a source text of facts $F_{src}$ maximally simple if:

$F_{src} = F_{imp}$

We then observe that for every source text that is not maximally simplified, one or more of the following holds:

• Not every fact included in the source text is necessarily important: $\exists f_{src} \in F_{src},\ f_{src} \notin F_{imp}$
• There may be important information (e.g., a definition) that is not included in the source text: $\exists f_{imp} \in F_{imp},\ f_{imp} \notin F_{src}$

Therefore, we can identify five different sets (a code sketch of these set operations follows the list below):

• Out-of-scope generation: Facts in the generation that are new and not important.
  $Error_{scope} = F_{gen} \setminus F_{imp}$
• Loss of informative content: Source facts that are important and absent from the generation.
  $Error_{loss} = (F_{src} \cap F_{imp}) \setminus F_{gen}$
• Summarization: Source facts that are not important and removed from the generation.
  $Summ = (F_{src} \setminus F_{imp}) \setminus F_{gen}$
• Clarification: Facts in the generation that are important and new.
  $Clarif = (F_{gen} \cap F_{imp}) \setminus F_{src}$
• Potential clarification: Facts that are important and not included in the source text or the generation.
  $Clarif_{potential} = (F_{imp} \setminus F_{src}) \setminus F_{gen}$

Here, we identify two errors (out-of-scope generation and loss of informative content) and three other transformations that do not qualify as errors.
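The importance-based sets reduce to set algebra in the same way; a sketch under the same assumption that the fact sets are given (names illustrative):

```python
def simplification_sets(f_src: set, f_gen: set, f_imp: set):
    """Errors and non-error transformations from Section 2.5.1.

    f_src: facts in the source text
    f_gen: facts in the generated simplification
    f_imp: facts deemed important by the simplification goal
    """
    out_of_scope = f_gen - f_imp                 # Error_scope
    loss = (f_src & f_imp) - f_gen               # Error_loss
    summarization = (f_src - f_imp) - f_gen      # Summ (not an error)
    clarification = (f_gen & f_imp) - f_src      # Clarif (not an error)
    potential = (f_imp - f_src) - f_gen          # Clarif_potential
    return out_of_scope, loss, summarization, clarification, potential
```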
The two errors we define here arise from the problem of evaluating what constitutes important information in the context of the specific simplification goal. We therefore group them under the Content relevance subcategory. In the end, these errors will need to be defined more precisely based on a proper definition of the needs of each target audience. The three other transformations (summarization, clarification, and potential clarification) can be studied as a way to measure the quality of the simplification.

2.5.2 Reformulation. Some errors arise not from the addition or subtraction of facts but from reformulation. In that context, as shown in Figure 4, elements of facts may be replaced by a more general or more specific concept:

Generalization: Replacing a concept with a more general one.
Specification: Replacing a concept with a more specific one.

[Figure 4: Diagram illustrating the sets and intersections involved in defining concept oversimplification errors (overgeneralization and overspecification) relative to the source text and the simplification.]

More formally, let $E$ represent the set of elements in a fact: the subject, relation, or object. We define the source fact $f_{src} = (e_1, e_2, e_{src}) \in F_{src}$ and the generated fact $f_{gen} = (e_1, e_2, e_{gen}) \in F_{gen}$ such that only one element differs between them. We then have the following substitution errors:

Overgeneralization: Replacing a concept with a more general one, which causes the simplified fact to contradict the source. Formally:
$e_{src} \subset e_{gen},\quad f_{src} \not\Rightarrow f_{gen}$

Overspecification: Replacing a concept with a more specific one, which causes the simplified fact to contradict the source. Formally:
$e_{src} \supset e_{gen},\quad f_{src} \not\Rightarrow f_{gen}$

A certain level of generalization or specification is often necessary when simplifying text for specific audiences. However, excessive simplification can result in errors, such as overgeneralization or overspecification, where important nuances are lost or altered. To address this, we classify these errors under the broader category of Oversimplification. More details about these errors, including examples, can be found in Appendix A.4 Simplification.

Table 1: Error type distribution: true vs. false counts and percentage of true cases.

Error Label                         #Total   #True   #False   %True
No error                             2,659     820    1,839   30.84
A. Fluency
  A1. Random generation              2,659     142    2,517    5.34
  A2. Syntax error                   2,659     191    2,468    7.18
  A3. Contradiction                  2,659      23    2,636    0.86
  A4. Punctuation/grammar error      2,659     241    2,418    9.06
  A5. Redundancy                     2,659     112    2,547    4.21
B. Alignment
  B1. Format misalignment            2,659      47    2,612    1.77
  B2. Prompt misalignment            2,659      96    2,563    3.61
C. Information
  C1. Factuality hallucination       2,659      23    2,636    0.86
  C2. Faithfulness hallucination     2,659     360    2,299   13.54
  C3. Topic shift                    2,659     152    2,507    5.72
D. Simplification
  D1.1. Overgeneralization           2,659     306    2,353   11.51
  D1.2. Overspecification            2,659     136    2,523    5.11
  D2.1. Loss of Info. Content        2,659     520    2,139   19.56
  D2.2. Out-of-Scope Gen.            2,659     418    2,241   15.72

3 Test Collection

We use this taxonomy to build a test collection of errors in ATS. The collection is composed of a dataset of sentences extracted from scientific abstracts across diverse fields, including medicine, telecoms, and artificial intelligence. The simplifications were gathered during the SimpleText Track at CLEF 2024, where participants used a variety of methods to generate simplifications, all of them based on LLMs.
We have collected a total of 2,659 individual annotations from ten annotators, as detailed in Table 1. The annotations reveal a high error rate of 69.16% in the existing simplifications, with high variability in the occurrence of specific error types. Notably, Contradiction appears infrequently (0.86% occurrence), while Loss of informative content is highly prevalent, accounting for 19.56% of the sentences. These findings show that while text generation models have improved, challenges in fluency remain, and simplification-related errors continue to occur at a high rate, suggesting that further improvements are needed in this area.

To evaluate inter-annotator agreement, we gathered a separate set of 104 instances, each annotated by five of the annotators. The instances were selected from the subset of data annotated by the lead author to ensure a balanced representation of each error class within the test dataset. This dataset also included eight duplicated instances to measure annotator self-consistency. As shown in Table 2, the consistency rates varied across annotators, with only two out of five achieving perfect consistency. While annotator C achieved low consistency (0.56), we decided to keep their annotations in the dataset to increase the data size. Users who prefer higher quality can choose to exclude their annotations.

Table 2: Annotator consistency rate.

Annotator           A      B      C      D      E
Consistency Rate   0.78   1.00   0.56   1.00   0.78

Table 3 presents Cohen's Kappa scores for error classes across annotator pairs. Undefined values were set to 1 in cases of complete agreement. Cohen's κ indicates the level of inter-annotator agreement, with scores above 0.60 generally considered substantial [14]. Fluency errors show slight to moderate agreement (κ = 0.19–0.44) across all pairs. Alignment errors exhibit high variability, with agreement ranging from 0.25 to 1. Information errors show only slight to fair agreement across the board (κ = 0.06–0.40). Simplification error scores range from slight to moderate (κ = 0.14–0.49).

Table 3: Cohen's Kappa scores for error classes across all annotator pairs, along with Fleiss' Kappa scores and percentage of unanimous annotations.

Error Class        Fleiss' κ   Unanim. %    AB     AC     AD     AE     BC     BD     BE     CD     CE     DE
No error             0.34        38.9      0.49   0.30   0.16   0.48   0.66   0.28   0.73   0.23   0.57   0.16
A. Fluency           0.38        67.3      0.44   0.23   0.22   0.44   0.40   0.37   0.75   0.19   0.40   0.37
B. Alignment         0.45        76.8      0.37   0.58   1.00   0.38   0.25   0.37   0.79   0.58   0.27   0.38
C. Information       0.02        47.3      0.06   0.22   0.14   0.07   0.17   0.40   0.38   0.04   0.27   0.28
D. Simplification    0.26        25.2      0.26   0.24   0.14   0.40   0.49   0.19   0.45   0.20   0.45   0.12

Table 3 also displays Fleiss' Kappa and unanimous agreement. Agreement is highest for Alignment (κ = 0.45) and aggregated Fluency errors (κ = 0.38), suggesting clearer definitions or easier detection. In contrast, Information and Simplification errors show lower reliability (κ = 0.02–0.26), indicating more subjective or ambiguous judgments. These results are very diverse and highlight both consistent and challenging areas in error classification.
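The agreement statistics reported above can be reproduced with standard tooling; a minimal sketch assuming binary per-instance labels for a single error class from five annotators (scikit-learn and statsmodels; the toy data is not from the paper):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# labels[i][j] = 1 if annotator j marked the error class on instance i
labels = np.array([[1, 1, 0, 1, 1],
                   [0, 0, 0, 0, 1],
                   [1, 1, 1, 1, 1]])  # toy data for illustration only

pairwise_ab = cohen_kappa_score(labels[:, 0], labels[:, 1])  # annotators A, B
counts, _ = aggregate_raters(labels)   # per-instance category counts
overall = fleiss_kappa(counts)         # Fleiss' kappa across all five
unanimous = (labels.min(axis=1) == labels.max(axis=1)).mean()  # % unanimous
```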
This test collection provides a valuable resource for analyzing the challenges of LLM-based text simplification. The high prevalence of errors, particularly in information loss and fluency, underscores the need for further improvements in automatic simplification models. The inter-annotator agreement analysis highlights the importance of annotator training, as expertise significantly impacts the identification of certain error types. By making this test collection available, we aim to facilitate future research in evaluating and enhancing simplification systems, contributing to more reliable and comprehensible scientific text generation.

4 Showcase

Evaluating ATS requires accurate error detection. The question remains, then, whether current error detection models can reliably capture errors as defined by our taxonomy. This showcase will highlight the strengths and limitations of current approaches and guide improvements in ATS evaluation.

4.1 Methodology

Our showcase of error detection models focuses on three key questions:

• Can they accurately detect the presence of errors in ATS?
• Can they effectively identify greater types of errors in ATS?
• Can they effectively identify specific types of errors in ATS?

Given that hallucination has been one of the most extensively studied errors in recent years [8, 11], sometimes with imprecise definitions of hallucinations, we use hallucination detection models to evaluate the performance of existing tools. These models are well-suited for identifying factual or faithfulness inconsistencies, making them an ideal starting point for testing the broader framework of error detection in ATS. In particular, we use the following pre-trained models:

• Transformer-based models trained on synthetic data
  – FactCC: Based on BERT, trained on synthetic summarizations [13].
  – LENS: Based on RoBERTa-large, trained on manually annotated simplifications [9].
• Question-Generation & Question-Answering (QGQA) models
  – FEQA: Uses two T5 models to assess faithfulness via question answering [2].
  – QAGS: Another QGQA-based model leveraging two T5 models [18].
• Fact-based models
  – Factacc: Uses Named Entity Recognition and a Relation Classifier to verify factual consistency [8].

In addition, we also study the results of BERTScore [20], as it is often used as a benchmark for ATS performance.

To evaluate error detection, since about half of our dataset is error-free, we use AUROC to measure how well models distinguish between correct and erroneous simplifications. For rarer, specific error types, we rely on AUPRC. This will help us understand the strengths and limitations of hallucination detection in ATS.
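Both ranking measures are available in scikit-learn, with average precision as the usual AUPRC estimator; a minimal sketch of the protocol (the scores and labels are placeholders, not the paper's data):

```python
from sklearn.metrics import average_precision_score, roc_auc_score

# y_true[i] = 1 if instance i was annotated with the error under study;
# scores[i] = detector output, oriented so higher = more likely erroneous.
y_true = [1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.4, 0.7, 0.1]

auroc = roc_auc_score(y_true, scores)            # binary "any error" setting
auprc = average_precision_score(y_true, scores)  # rarer specific error types
print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}")
```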
4.2 Results & Analysis

4.2.1 Binary Error Detection AUROC. Table 4 summarizes the binary error detection performance as measured by AUROC for the evaluated models. FactCC achieves the highest AUROC of 0.68, indicating a relatively strong ability to distinguish between error and non-error instances. QAGS and FEQA both record moderate performance with AUROC values of 0.51 and 0.52, while FactAcc trails at 0.44. Notably, BERTScore shows a very low AUROC of 0.23, suggesting that its capability for ranking errors in a binary setting is limited.

4.2.2 Greater Error Type Detection. Table 4 presents the AUPRC scores for detecting broader error types (fluency, alignment, information, and simplification errors), along with their prevalence. FactCC consistently achieves the highest scores across all categories, with AUPRC of 0.24 for fluency, 0.16 for alignment, 0.29 for information, and 0.51 for simplification. Despite these relative improvements, the overall performance remains low. In particular, detecting alignment errors is challenging for every model. Surprisingly, even on fluency and alignment errors, which might be easier to detect, models exhibit low detection performance.

Table 4: Greater error detection AUPRC and binary detection "any error" AUROC for evaluated models.

Metric       A. Fluency   B. Alignment   C. Information   D. Simplification   No Error
%True           21.96          5.15           19.10             43.89           30.84
#True             584           137             508              1167             822
BERTScore        0.20          0.02            0.13              0.36            0.23
QAGS             0.22          0.05            0.19              0.45            0.51
FEQA             0.23          0.05            0.20              0.44            0.52
FactCC           0.24          0.16            0.29              0.51            0.68
FactAcc          0.24          0.05            0.19              0.42            0.44
LENS             0.13          0.03            0.19              0.53            0.37

4.2.3 Detection for Individual Error Types. Table 5 reports the AUPRC scores for each model across individual error types, grouped into four main categories: Fluency, Alignment, Information, and Simplification. The table also lists the percentage of true instances and the absolute number of true cases for each error type. For Fluency errors, FactCC generally achieves the highest AUPRC scores, though the absolute values remain very low. A similar trend is observed for Alignment and Information errors, where FactCC outperforms the other models. For Simplification errors, the highest scores are obtained by LENS.

Overall, although FactCC consistently shows relatively higher performance, the results always indicate very poor performance, highlighting the need for improved error detection methods. Moreover, the very low instance counts for certain error types, such as Contradiction, prompt misalignment, and factuality hallucination, limit the robustness of the measure.

Table 5: Error types and AUPRC scores across multiple measures. The table reports the AUPRC for BERTScore, QAGS, FEQA, FactCC, FactAcc, and LENS, along with the number of true instances. Errors are grouped into greater categories.

Error Type                           #Total   #True   %True   BERTScore↑   QAGS↑    FEQA↑    FactCC↑   FactAcc↑   LENS↑
A. Fluency
  A1. Random generation               2,659     142    5.34     0.0275     0.0520   0.0584   0.1113    0.0514    0.0288
  A2. Syntax error                    2,659     191    7.18     0.0544     0.0776   0.0831   0.0846    0.0716    0.0431
  A3. Contradiction                   2,659      23    0.86     0.0066     0.0118   0.0130   0.0208    0.0086    0.0094
  A4. Punctuation/grammar errors      2,659     241    9.06     0.1371     0.0883   0.0952   0.0710    0.1506    0.0542
  A5. Redundancy                      2,659     112    4.21     0.0279     0.0419   0.0453   0.0626    0.0404    0.0278
B. Alignment
  B1. Format misalignment             2,659      47    1.77     0.0061     0.0178   0.0176   0.0633    0.0170    0.0184
  B2. Prompt misalignment             2,659      96    3.61     0.0213     0.0349   0.0400   0.1200    0.0352    0.0209
C. Information
  C1. Factuality hallucination        2,659      23    0.86     0.0059     0.0099   0.0175   0.0105    0.0084    0.0093
  C2. Faithfulness hallucination      2,659     360   13.54     0.1080     0.1421   0.1397   0.1704    0.1327    0.1577
  C3. Topic shift                     2,659     152    5.72     0.0306     0.0565   0.0617   0.1498    0.0572    0.0485
D. Simplification
  D1.1. Overgeneralization            2,659     306   11.51     0.1011     0.1154   0.1157   0.1210    0.1102    0.1871
  D1.2. Overspecification             2,659     136    5.11     0.0404     0.0542   0.0464   0.0563    0.0517    0.0540
  D2.1. Loss of Informative Content   2,659     520   19.56     0.1712     0.1970   0.1848   0.2066    0.1899    0.2252
  D2.2. Out-of-Scope Generation       2,659     418   15.72     0.1056     0.1657   0.1803   0.2488    0.1517    0.2010

5 Conclusion

We introduced the first taxonomy of errors in Automatic Text Simplification and built a test collection through the annotation of real-world ATS examples. Our findings show that errors are still
prevalent in ATS and that existing methods fail to detect them reliably. While our taxonomy provides a structured approach to error classification, effective annotation requires careful selection and training of annotators.

The test collection will be used for the SimpleText shared task at CLEF 2025 and will be published freely after the CLEF 2025 evaluation cycle [3] in .csv format on GitHub.¹ The repository also provides the annotation scheme and the code for analyzing the annotated dataset introduced in this paper. By releasing our test collection, taxonomy annotation scheme, and analysis code, we aim to support further research in this area. Future work could explore automatic evaluation aligned with our taxonomy, improved annotator training, and the use of synthetic data to augment test collections and facilitate the development of ATS models that better mitigate these errors.

¹https://github.com/bVendeville/Salted

Acknowledgments

We thank the master's students in translation and technical writing from the University of Brest for their participation in data annotation. This research was funded by the French National Research Agency (ANR) under the projects ANR-22-CE23-0019-01 and ANR-19-GURE-0001 (program Investissements d'avenir integrated into France 2030).

A Taxonomy

A.1 A. Fluency

Category focus: is the answer provided in a correct language that a fluent speaker would speak, regardless of the correctness or relevance of the answer?

A1. Random generation
• Definition: At least part of the answer is just a random string of words/numbers.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Current academic and industrial research is interested in autonomous vehicles .1.2.3.4.5.6.7

A2. Syntax error
• Definition: The syntax is incorrect and doesn't make sense.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: In time now of robot and auto, cars that drive self are study much by school and work people.

A3. Contradiction
• Definition: The answer contradicts itself.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: In today's age of automation and robotics, autonomous vehicles are both widely researched and completely ignored by academics and industry.

A4. Simple punctuation or grammar errors
• Definition: The answer has punctuation errors that don't hinder comprehension.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Current academic and industrial research are interested in autonomous vehicles .....................

A5. Redundancy
• Definition: Repeated sentences, parts of sentences, or groups of sentences that do not need to be repeated. This is an error regardless of the quality of the sentence.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Current academic and industrial research is interested in autonomous vehicles. Current academic and industrial research is interested in autonomous vehicles.

A.2 B. Alignment

Category focus: does the answer suggest that the model correctly interpreted the prompt, including tags and format?

B1. Format misalignment
• Definition: Some tags or symbols used for formatting are missing. They can include symbols used for JSON parsing (like here with "" and }) or any "prompt tag", typically <query>, <answer>, etc.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: {"Current academic and industrial research is interested in autonomous vehicles. }"

B2. Prompt misalignment
• Definition: The model generated one or more of the following:
  – unneeded prompt tags (like <query>, <answer>, etc.) that lead to another question, source, etc.
  – another question (different or not)
  – another source text (related or not)
  – another answer (related or not)
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: {"Current academic and industrial research is interested in autonomous vehicles. <Query> simplify this: <example>...

A.3 C. Information

Category focus: does the answer suggest that the model knows and understands everything needed to simplify the input?

C1. Factuality hallucination
• Definition: The simplification contains facts that are contrary to (i.e., can be proven wrong from) "general knowledge" but not directly contrary to the input text.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Current academic and industrial research is interested in autonomous vehicles, which are vehicles that can fly.

C2. Faithfulness hallucination
• Definition: The simplification contains facts that are contrary to (i.e., can be proven wrong from) the input text.
• Example:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Current academic and industrial research is not at all interested in autonomous vehicles.

C3. Topic shift
• Definition: The generation contains at least some information related to the task (simplification) or the prompt (one-shot encoding) but not to the source document. It can be a text about simplification, or, in the case of one-shot inference, something related to the example given but not to the document that should be simplified.
• Example 1:
  – Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research.
  – Simplification: Simplification, in the context of language and communication, refers to the process of making text or information easier to understand.
• Example 2:
  – Source: Simplify the following document: <source> In an attempt to achieve the above-mentioned tasks, we propose an imitation learning based, data-driven solution to UAV autonomy for navigating through city streets by learning to fly by imitating an expert pilot. <answer> Researchers propose data-driven solutions allowing drones to autonomously navigate city streets, learning to fly by imitating an expert pilot. <source> In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research. <answer>
  – Simplification: We propose a data-driven imitation learning method for UAVs to navigate city streets by mimicking an expert pilot.

A.4 D. Simplification
Category focus: Does the answer suggest that the model understands the task of simplification?

D1.1. Overgeneralization of Concepts
• Definition: The simplification removes some precision and generalizes concepts that shouldn't be generalized, making them ambiguous and false. This may include:
  – Replacing entities with the greater category of entities
  – Using vague or ambiguous pronouns in place of clear subjects
  – Removing the target of a sentence, implying that a fact applies generally when it only applies in a specific case, to a specific entity
  – Omitting critical context, such as targets, qualifiers, or conditions
  – Generalizing numerical or conditional statements into absolutes
• Example 1: Replacing Entities with a Broader Category
  – Source: Insects like bees and butterflies are vital for pollination, which is essential for producing many fruits and vegetables.
  – Simplification: Insects are vital for pollination.
• Example 2: Replacing a Specific Entity with a Pronoun
  – Source: The study found that aspirin reduced the risk of heart attack in patients over 50 but had no effect on younger individuals.
  – Simplification: It reduces the risk of heart attack.
• Example 3: Removing a Target, Leading to Unwarranted Generality
  – Source: This vaccine has been shown to be effective in preventing measles in children.
  – Simplification: This vaccine prevents diseases.

D1.2. Overspecification of Concepts
• Definition: This error occurs when a broad entity or category in the source text is replaced with a specific example or subcategory during simplification. The source text may intentionally use a general term to avoid unnecessary detail or to maintain flexibility in interpretation. By introducing specificity, the simplified text risks reducing the meaning to an incorrect or unintended entity, misrepresenting the original intent.
• Example:
  – Source: The study examined the impact of climate change on wildlife.
  – Simplification: The study examined the impact of climate change on polar bears.

D2.1. Loss of Informative Content
• Definition: Simplifications can omit critical information, making the content uninformative rather than misleading. This omission limits the reader's understanding of the broader context or key points, leaving them unaware of significant elements like parts of a research question, conclusions, or applications. This includes:
  – A completely empty simplification
  – A simplification so general it loses the source's novelty or explanatory value
  – Simplifying only one argument when the source has two independent ones
Note: Deciding what qualifies as "important information" depends on the context.
Defining "important information" should consider the audience’s needs (e.g., non-native speak- ers, non-experts, or those with disabilities) and the desired | https://arxiv.org/abs/2505.16392v1 |
simplicity level (e.g., 3rd grade, 10th grade). •Example 1 Replacing Entities with a Broader Category : –Source: Insects like bees and butterflies are vital for polli- nation, which is essential for producing many fruits and vegetables. –Simplification: Insects are vital for pollination. •Example 2 Replacing a Specific Entity with a Pronoun: –Source: The study found that aspirin reduced the risk of heart attack in patients over 50 but had no effect on younger individuals. –Simplification: It reduces the risk of heart attack. •Example 3 Removing a Target, Leading to Unwarranted Generality : –Source: This vaccine has been shown to be effective in preventing measles in children. –Simplification: This vaccine prevents diseases. •Example 4 Generalizing Findings Beyond Their Scope : –Source: In the controlled study, the intervention improved test scores among high school students in urban areas. –Simplification: The intervention improves test scores. D2.2. Out-of-Scope Generation •Definition The generation contains information that is un- related to the task of simplification. The generation may have something to do with the source document to be simplified, but is not about simplifying it. The generation might be: –An opinion about the source document. –A completion of the source document (more information). –Questions about the source document. –A translation of the source document. •Example 1: –Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research. –Simplification: Current academic and industrial research is interested in autonomous vehicles. In the show KITT with David Hasselhoff the car is an autonomous ve- hicle and on episode... •Example 2: –Source: In the modern era of automation and robotics, autonomous vehicles are currently the focus of academic and industrial research. –Simplification: Current academic and industrial research is interested in autonomous vehicles. In the show KITT with David Hasselhoff the car is an autonomous vehicle and on episode. . . References [1]Dennis Davari, Liana Ermakova, and Ralf Krestel. 2024. Comparative Analysis of Evaluation Measures for Scientific Text Simplification. In Linking Theory and Practice of Digital Libraries - 28th International Conference on Theory and Practice of Digital Libraries, TPDL 2024, Ljubljana, Slovenia, September 24-27, SIGIR ’25, July 13–18,2025, Padua, Italy Benjamin Vendeville, Liana Ermakova, and Pierre De Loor 2024, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 15177) , Apostolos Antonacopoulos, Annika Hinze, Benjamin Piwowarski, Mickaël Coustaty, Giorgio Maria Di Nunzio, Francesco Gelati, and Nicholas Vanderschantz (Eds.). Springer, 76–91. https://doi.org/10.1007/978-3-031-72437-4_5 [2]Esin Durmus, He He, and Mona Diab. 2020. FEQA: A Question Answer- ing Evaluation Framework for Faithfulness Assessment in Abstractive Sum- marization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (Eds.). Association for Computational Linguistics, Online, 5055–5070. https://doi.org/10.18653/v1/2020.acl-main.454 [3]Liana Ermakova, Hosein Azarbonyad, Jan Bakker, Benjamin Vendeville, and Jaap Kamps. 2025. CLEF 2025 SimpleText Track - Simplify Scientific Text (and Nothing More). 
In Advances in Information Retrieval - 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part V (Lecture Notes in Computer Science, Vol. 15576), Claudia Hauff, Craig Macdonald, Dietmar Jannach, Gabriella Kazai, Franco Maria Nardini, Fabio Pinelli, Fabrizio Silvestri, and Nicola Tonellotto (Eds.). Springer, 425–433. https://doi.org/10.1007/978-3-031-88720-8_63
[4] Liana Ermakova, Sarah Bertin, Helen McCombie, and Jaap Kamps. 2023. Overview of the CLEF 2023 SimpleText Task 3: Simplification of Scientific Texts. In Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023 (CEUR Workshop Proceedings, Vol. 3497), Mohammad Aliannejadi, Guglielmo Faggioli, Nicola Ferro, and Michalis Vlachos (Eds.). CEUR-WS.org, 2855–2875. https://ceur-ws.org/Vol-3497/paper-240.pdf
[5] Liana Ermakova, Valentin Laimé, Helen McCombie, and Jaap Kamps. 2024. Overview of the CLEF 2024 SimpleText Task 3: Simplify Scientific Text. In Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024), Grenoble, France, 9-12 September, 2024 (CEUR Workshop Proceedings, Vol. 3740), Guglielmo Faggioli, Nicola Ferro, Petra Galuscáková, and Alba García Seco de Herrera (Eds.). CEUR-WS.org, 3147–3162. https://ceur-ws.org/Vol-3740/paper-307.pdf
[6] Liana Ermakova, Eric SanJuan, Stéphane Huet, Hosein Azarbonyad, Olivier Augereau, and Jaap Kamps. 2023. Overview of the CLEF 2023 SimpleText Lab: Automatic Simplification of Scientific Texts. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, Avi Arampatzis, Evangelos Kanoulas, Theodora Tsikrika, Stefanos Vrochidis, Anastasia Giachanou, Dan Li, Mohammad Aliannejadi, Michalis Vlachos, Guglielmo Faggioli, and Nicola Ferro (Eds.). Springer Nature Switzerland, Cham, 482–506. https://doi.org/10.1007/978-3-031-42448-9_30
[7] Liana Ermakova, Eric SanJuan, Stéphane Huet, Hosein Azarbonyad, Giorgio Maria Di Nunzio, Federica Vezzani, Jennifer D'Souza, and Jaap Kamps. 2024. Overview of the CLEF 2024 SimpleText Track: Improving Access to Scientific Texts for Everyone. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 15th International Conference of the CLEF Association, CLEF 2024, Grenoble, France, September 9–12, 2024, Proceedings, Part II. Springer-Verlag, Berlin, Heidelberg, 283–307. https://doi.org/10.1007/978-3-031-71908-0_13
[8] Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing The Factual Accuracy of Generated Text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19). Association for Computing Machinery, New York, NY, USA, 166–175. https://doi.org/10.1145/3292500.3330955
[9] David Heineman, Yao Dou, Mounica Maddela, and Wei Xu. 2023. Dancing Between Success and Failure: Edit-level Simplification Evaluation Using SALSA. https://doi.org/10.48550/arXiv.2305.14458 arXiv:2305.14458 [cs]
[10] Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating Factual Consistency Evaluation. https://doi.org/10.48550/arXiv.2204.04991 arXiv:2204.04991 [cs]
[11] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. https://doi.org/10.48550/arXiv.2311.05232 arXiv:2311.05232 [cs]
[12] Sebastian Joseph, Kathryn Kazanas, Keziah Reina, Vishnesh Ramanathan, Wei Xu, Byron Wallace, and Junyi Jessy Li. 2023. Multilingual Simplification of Medical Texts.
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 16662–16692. https://doi.org/10.18653/v1/2023.emnlp-main.1037
[13] Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the Factual Consistency of Abstractive Text Summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (Eds.). Association for Computational Linguistics, Online, 9332–9346. https://doi.org/10.18653/v1/2020.emnlp-main.750
[14] J. R. Landis and G. G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics 33, 1 (March 1977), 159–174.
[15] Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation. https://doi.org/10.48550/arXiv.1808.07048 arXiv:1808.07048 [cs]
[16] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, et al. 2024. GPT-4 Technical Report. https://doi.org/10.48550/arXiv.2303.08774 arXiv:2303.08774 [cs]
[17] Advaith Siddharthan. 2014. A Survey of Research on Text Simplification. ITL - International Journal of Applied Linguistics 165, 2 (Dec. 2014), 259–298. https://doi.org/10.1075/itl.165.2.06sid
[18] Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (Eds.). Association for Computational Linguistics, Online, 5008–5020. https://doi.org/10.18653/v1/2020.acl-main.450
[19] Xuanxin Wu and Yuki Arase. 2024. An In-depth Evaluation of GPT-4 in Sentence Simplification with Error-based Human Assessment. arXiv:2403.04963 [cs]
[20] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. arXiv:1904.09675 [cs]
2025-5-23

AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning

Yang Chen∗‡, Zhuolin Yang∗‡, Zihan Liu, Chankyu Lee, Peng Xu, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping†‡

Abstract
Despite recent progress in large-scale reinforcement learning (RL) for reasoning, the training recipe for building high-performing reasoning models remains elusive. Key implementation details of frontier models, such as DeepSeek-R1, including data curation strategies and the RL training recipe, are often omitted. Moreover, recent research indicates distillation remains more effective than RL for smaller models. In this work, we demonstrate that large-scale RL can significantly enhance the reasoning capabilities of strong, small- and mid-sized models, achieving results that surpass those of state-of-the-art distillation-based models. We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first training on math-only prompts, then on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks (e.g., +14.6% / +17.2% on AIME 2025 for the 7B / 14B models), but also code reasoning tasks (e.g., +6.8% / +5.8% on LiveCodeBench for the 7B / 14B models). In addition, extended code-only RL iterations further improve code benchmark performance with minimal or no degradation in math results. We develop a robust data curation pipeline to collect challenging prompts with high-quality, verifiable answers and test cases to enable verification-based RL across both domains. Finally, we identify key experimental insights, including curriculum learning with progressively increasing response lengths and the stabilizing effect of on-policy parameter updates. We find that RL not only elicits the foundational reasoning capabilities acquired during pretraining and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable. We release the model at: https://huggingface.co/nvidia/AceReason-Nemotron-14B.

[Figure 1: scatter plot of AIME 2025 pass@1 accuracy versus LiveCodeBench v5 pass@1 accuracy for AceReason-Nemotron-7B/14B and baselines (QwQ-32B, DeepSeek-R1-Distill-Qwen-7B/14B/32B, DeepSeek-R1-671B, Llama-Nemotron variants, AReal-boba-RL-7B, Skywork-OR1-7B, Light-R1-7B/14B, DeepCoder-14B, and the specialized OpenCodeReasoning-Nemotron and OpenMath-Nemotron models, marked as non-general).]
Figure 1: Benchmark accuracy of AceReason-Nemotron-7B/14B on AIME25 (avg@64) and LiveCodeBench v5 (2024.08–2025.02, avg@8) using 32,768 output length.

∗Equal contribution. †Leads the effort. ‡Correspondence to: Yang Chen <yachen@nvidia.com>, Zhuolin Yang <zhuoliny@nvidia.com>, Wei Ping <wping@nvidia.com>.
©2025 NVIDIA. All rights reserved. arXiv:2505.16400v1 [cs.LG] 22 May 2025

Contents
1 Introduction
2 Related Work
3 Method
  3.1 Framework
  3.2 Math-only RL
    3.2.1 Data Curation
    3.2.2 Training Process
  3.3 Code-only RL
    3.3.1 Data Curation
    3.3.2 Training Process
4 Evaluation
  4.1 Experimental Setup
    4.1.1 Math Evaluation
    4.1.2 Code Evaluation
  4.2 Main Results
  4.3 Analysis
    4.3.1 Response Length Grows alongside Performance
    4.3.2 Hard Prompts Drive the Largest Gains
    4.3.3 Starting from 8K Improves Later Performance
    4.3.4 Interplay of Math-only RL and Code-only RL
    4.3.5 Topic-wise Accuracy Analysis on LiveCodeBench v5
    4.3.6 False Positive and False Negative Rewards in Code RL Training
    4.3.7 Does RL Improve pass@k or pass@1 over the SFT Model?
    4.3.8 Where does RL Improve over the SFT Model?
5 Conclusion
6 Acknowledgement
A Appendix
  A.1 Instruction for evaluation
  A.2 Example of AceReason-Nemotron-14B response to simple query
  A.3 Additional Math-RL Analysis
  A.4 Code-RL Dataset Curation Details

1. Introduction
Reasoning capabilities are a fundamental component of AI. Since the introduction of OpenAI o1 (OpenAI, 2024), building reasoning models using large-scale reinforcement learning (RL) has attracted significant attention. Remarkable progress has followed the open-sourcing of DeepSeek-R1 (Guo et al., 2025), empowering the open LLM and research communities to develop state-of-the-art reasoning models through RL or distillation. However, key technical details necessary for reproduction, such as data curation strategies and the specific RL training recipe, were omitted from the original DeepSeek-R1 report (Guo et al., 2025), leaving the community scrambling to replicate its success.
Subsequent efforts by different teams explored different model sizes (e.g., 1.5B (Luo et al., 2025), 7B (Wen et al., 2025), 14B (Luo et al., 2025), and 32B-only (Yu et al., 2025)), different initial checkpoints (e.g., base models (Yu et al., 2025) and distilled reasoning models (He et al., 2025)), and different target domains (e.g., math (Luo et al., 2025), code (Luo et al., 2025), and physical AI (Azzolini et al., 2025)). Each study demonstrates a potential path to success in specific settings but lacks a conclusive or consistent training recipe. Moreover, both DeepSeek-R1 (Guo et al., 2025) and Llama-Nemotron (Bercovich et al., 2025) report that distillation outperforms RL for small and mid-sized models, recommending RL only for the largest models, such
as the DeepSeek-V3-671B (Liu et al., 2024) or Llama-3.1-Nemotron-Ultra-253B. The most recent release of Qwen3 adopts a similar strategy (Qwen, 2025).
In this work, we demonstrate that large-scale reinforcement learning (RL) can significantly enhance the reasoning capabilities of strong small- and mid-sized SFT models (DeepSeek-R1-Distill-Qwen-7B/14B), achieving performance competitive with state-of-the-art distillation-based results at 7B, and surpassing them at 14B (Ahmad et al., 2025; Moshkov et al., 2025). Specifically, we make the following contributions:
1. We propose conducting math-only and code-only RL separately: the distilled SFT model is first trained on math-only prompts, followed by training on code-only prompts. This approach was initially motivated by training efficiency considerations, as the average verification time for code is significantly longer than that for math. Subsequently, we made two exciting observations: i) math-only RL significantly boosts the performance of strong distilled models not only on math benchmarks (e.g., +14.6% / +17.2% on AIME 2025 for the 7B / 14B models), but also on code reasoning tasks (e.g., +6.8% / +5.8% on LiveCodeBench v5 for the 7B / 14B models); see Table 1 for details. ii) Extended iterations of code-only RL lead to minimal or no degradation on math reasoning tasks (e.g., +1.0% / -0.8% on AIME 2024 / 2025 for the 7B model); see Table 4 for details. These observations contrast with domain-specific supervised fine-tuning (SFT), which can lead to catastrophic forgetting and degraded performance on other domains.
2. We develop and share a systematic data curation recipe to collect high-quality math problems with verifiable answers, as well as coding problem descriptions with test cases, ensuring that all data is reliable and testable. We will open-source the dataset for the benefit of the community.
3. To ensure consistent conclusions, we examine the RL training recipe through detailed ablation studies and analysis under state-of-the-art settings. Our findings include: i) curriculum learning with a progressively increasing maximum response length improves both training efficiency and final accuracy on reasoning benchmarks; ii) on-policy parameter updates stabilize the RL process; iii) RL not only elicits the foundational reasoning capabilities acquired during pretraining and supervised fine-tuning (e.g., distillation), as evidenced by significant improvements in pass@1, but also expands the model's capabilities to solve previously unsolvable problems, as demonstrated by substantial gains in pass@64.

2. Related Work
Training LLMs to reason has been a long-standing research focus (Wei et al., 2022), especially in the domains of code (Chen et al., 2021) and math (Cobbe et al., 2021). In recent years, major development efforts have focused on acquiring reasoning capabilities by training on math and code data during both the pretraining and supervised fine-tuning (SFT) stages (Grattafiori et al., 2024; Guo et al., 2024; Liu et al., 2024; Shao et al., 2024; Yang et al., 2024). Reinforcement learning (RL) has previously been explored for math reasoning using reward models tailored to the math and code domains (Shao et al., 2024; Yang et al., 2024). However, the gains have been limited, largely due to the inherent challenges of reward modeling in mathematical and coding domains (Lightman et al., 2023; Liu
et al., 2024). The release of OpenAI o1 (OpenAI, 2024), and especially the open-sourcing of DeepSeek-R1 (Guo et al., 2025), highlights the effectiveness of large-scale RL through rule-based verification. In the case of math problems with deterministic answers, models are required to output the final result in a specific format (e.g., boxed), enabling accurate rule-based verification (e.g., Liu et al., 2024; Yang et al., 2024). For code problems, feedback is provided through compilation and execution against predefined test cases (e.g., Luo et al., 2025; Zeng et al., 2025).
Due to the absence of key implementation details in frontier models, such as RL training recipes and data curation strategies, subsequent works have explored and shared data curation methods (He et al., 2025; Luo et al., 2025), and introduced various techniques to improve and stabilize the widely adopted GRPO training (Shao et al., 2024). These include progressively increasing the maximum response length (He et al., 2025; Luo et al., 2025), clip-higher to mitigate entropy collapse (Yu et al., 2025), and overlong filtering to avoid penalties from truncated generations within the maximum response length (Yu et al., 2025). Many of these efforts focus exclusively on either the math domain (Chen et al., 2025; Luo et al., 2025; RL Lab, 2025; Yu et al., 2025) or the code domain (Luo et al., 2025; Zeng et al., 2025), highlighting the difficulty of handling heterogeneous prompts and the inherent complexity of RL training. Furthermore, the range of reported benchmarks remains limited, typically to AIME 2024 / 2025 and LiveCodeBench (Jain et al., 2024), compared to broader evaluations in frontier reasoning models (Guo et al., 2025; Qwen, 2025).
Another line of follow-up work focuses on distilling existing frontier reasoning models, which are originally trained via RL (Guo et al., 2025; Qwen, 2025), through strong-to-weak distillation using rejection sampling (Ahmad et al., 2025; Bercovich et al., 2025; Moshkov et al., 2025), as prior studies have found that RL yields suboptimal results for smaller models compared to distillation (Bercovich et al., 2025; Guo et al., 2025). In this work, we initiate RL from strong distilled models, and show that it can achieve results that are competitive with or surpass existing state-of-the-art distillation-based approaches on math (Moshkov et al., 2025) and code (Ahmad et al., 2025).

3. Method
3.1. Framework
We adopt the GRPO algorithm (Shao et al., 2024), as used in DeepSeek-R1, instead of PPO (Schulman et al., 2017), due to its simplicity and the advantage of not requiring a separate value function model. For each question-answer pair $(q, a)$, we sample the policy model $\pi_{\theta_{\mathrm{old}}}$ to generate a group of $G$ individual rollouts $\{o_i\}_{i=1}^{G}$. We assign a reward score $S_i = S(o_i, a)$ to each response $o_i$, given the oracle answer $a$, using a rule-based reward function $S$, and compute the advantage as the normalized reward scores across the group $\{S_i\}_{i=1}^{G}$:

$$
\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D},\,\{o_i\}_{i=1}^{G}\sim\pi_{\theta_{\mathrm{old}}}(\cdot\mid q)}\left[\frac{1}{\sum_{i=1}^{G}|o_i|}\sum_{i=1}^{G}\sum_{t=1}^{|o_i|}\left(\min\left(r_{i,t}(\theta)\,\hat{A}_{i,t},\ \mathrm{clip}\left(r_{i,t}(\theta),\,1-\varepsilon,\,1+\varepsilon\right)\hat{A}_{i,t}\right)-\beta\, D_{\mathrm{KL}}\left(\pi_\theta\,\|\,\pi_{\mathrm{ref}}\right)\right)\right], \tag{1}
$$

where

$$
r_{i,t}(\theta)=\frac{\pi_\theta(o_{i,t}\mid q,\,o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q,\,o_{i,<t})},\qquad
\hat{A}_{i,t}=\frac{S_i-\mathrm{mean}\left(\{S_i\}_{i=1}^{G}\right)}{\mathrm{std}\left(\{S_i\}_{i=1}^{G}\right)}. \tag{2}
$$

Our experiments strictly follow the on-policy training method, performing only one gradient update after the model generates a group of $G$ rollouts, to ensure stable RL training and prevent entropy collapse.
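To make Eqs. (1)–(2) concrete, the following is a minimal NumPy sketch of the group-relative advantage and the token-level clipped surrogate (with the KL term dropped, i.e., $\beta = 0$, as adopted below). It is an illustration of the objective rather than the released training code; the function names are ours, and a real implementation would operate on framework tensors inside veRL.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    """Eq. (2): normalize the group rewards {S_i} into advantages.
    The same scalar advantage is broadcast to every token of rollout i."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def grpo_surrogate_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Eq. (1) with beta = 0, as a token-level loss over one group.
    logp_new / logp_old: per-rollout 1-D arrays of token log-probs under
    the current policy and the rollout-time policy, respectively."""
    total, n_tokens = 0.0, 0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = np.exp(lp_new - lp_old)                      # r_{i,t}(theta)
        clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        total += np.minimum(ratio * adv, clipped * adv).sum()
        n_tokens += len(lp_new)
    return -total / n_tokens   # negate: gradient ascent on the surrogate

# Toy usage: G = 4 rollouts with binary verifier rewards.
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
rng = np.random.default_rng(0)
lp = [rng.normal(-1.5, 0.1, size=20) for _ in range(4)]
loss = grpo_surrogate_loss(lp, lp, adv)   # ratio = 1 at the first update
```

With strictly on-policy updates, the ratio starts at 1, so the first gradient step reduces to a plain advantage-weighted policy gradient.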
We also remove the KL divergence term by setting $\beta = 0$. We started RL experiments from distilled reasoning models such as DeepSeek-R1-Distill-Qwen-7B/14B (Guo et al., 2025), ensuring that the experiments were conducted under controlled conditions without introducing variations in distillation data or fine-tuning compute. We use the veRL RL framework (Sheng et al., 2024) with a token-level GRPO loss (Yu et al., 2025) and the vLLM inference engine (v0.7.3) (Kwon et al., 2023). Our custom modifications include adding math and code reward functions (verifiers) to the implementation. All experiments are conducted using 128 NVIDIA H100 GPUs.
Reward functions:
• For verification of math problems, we employ a rule-based Python verification function built on top of sympy, following the approach of AceMath (Liu et al., 2024). Specifically, it relies on antlr4-python3-runtime (v4.11.1) and sympy (v1.12); this specific configuration is crucial for ensuring accurate symbolic equivalence. We extract the answer from \boxed{} appearing after the </think> token and assign rewards strictly based on the correctness of this answer (1 for correct, 0 for incorrect), without applying any format-based rewards or length penalties. Using a process pool with 64 workers, the average verification time is approximately 3.9 seconds per 1,024 instances.
• For coding problem verification, we utilize a local sandbox verifier, following the code execution tools implemented in the LiveCodeBench repository (Jain et al., 2024). Given the model's output, we extract the code generated within the ```python [code]``` block that follows the </think> token. Binary rewards are then assigned based on the code execution outcome on the full set of test cases: a positive reward is granted if and only if the extracted code successfully passes all test cases within the specified time limit. Using a process pool with 64 workers, the average verification time for code is approximately 552.4 seconds per 1,024 instances.
Given the significant difference in verification time between math and code, we propose conducting math-only and code-only RL separately.
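The math reward above can be approximated in a few lines with sympy. The sketch below handles only a single, non-nested \boxed{...} answer and is far simpler than the AceMath-style verifier it stands in for; parse_latex lives in sympy.parsing.latex and needs antlr4-python3-runtime, matching the dependencies listed above.

```python
import re
from sympy import simplify
from sympy.parsing.latex import parse_latex  # requires antlr4-python3-runtime

def extract_boxed(response: str):
    """Return the contents of the last \\boxed{...} after the </think> token."""
    answer_part = response.split("</think>")[-1]
    hits = re.findall(r"\\boxed\{([^{}]*)\}", answer_part)  # no nested braces
    return hits[-1] if hits else None

def math_reward(response: str, gold_latex: str) -> float:
    """Binary reward: 1.0 iff the boxed answer is symbolically equal to gold."""
    pred = extract_boxed(response)
    if pred is None:
        return 0.0
    try:
        diff = simplify(parse_latex(pred) - parse_latex(gold_latex))
        return 1.0 if diff == 0 else 0.0
    except Exception:
        return 0.0   # unparsable answers earn no reward

# Example: "\\frac{1}{2}" and "0.5" count as the same answer.
r = math_reward("...</think> The answer is \\boxed{\\frac{1}{2}}.", "0.5")
```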
3.2. Math-only RL
3.2.1. Data Curation
We developed a data collection and verification pipeline to generate high-quality mathematical data for RL training. Our dataset combines DeepScaler (Gao et al., 2024; Luo et al., 2025; Min et al., 2024) and NuminaMath (Li et al., 2024), covering algebra, combinatorics, number theory, and geometry. We apply 9-gram filtering to avoid contamination with common math benchmarks and implement filtering rules to exclude unsuitable data, such as questions involving multiple sub-questions, multiple-choice or true/false questions, overly long or complex answers, proof-based questions, non-English content, references to figures, or excessively brief prompts. Since NuminaMath data often originates from online sources processed through OCR and parsing tools, it contains considerable noise due to incorrect questions or answers. To address this, we use the DeepSeek-R1 model with up to eight attempts per question, retaining only those that achieve correct majority-voted solutions via a rule-based verifier. Questions that are consistently unsolvable by DeepSeek-R1 often exhibit ambiguity or OCR-related errors upon human inspection and are therefore discarded. We further filter out questions requiring fewer than 2,000 R1 response tokens to answer, as we consider these questions to be solvable without extensive reasoning, and downsample problems with responses of 2,000–4,000 tokens to balance the dataset based on response length. Our final, rigorously verified dataset contains approximately 49,000 high-quality math problems suitable for RL training.
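As a rough illustration of the 9-gram decontamination step above, the sketch below flags any candidate question sharing a word-level 9-gram with a benchmark question. The paper does not specify its exact tokenization or matching rules, so the details here are assumptions.

```python
def ngrams(text: str, n: int = 9) -> set:
    """Word-level n-grams after lowercasing and whitespace tokenization."""
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decontaminate(candidates, benchmark_questions, n: int = 9):
    """Drop candidates that share any n-gram with a benchmark question."""
    bench = set().union(set(), *(ngrams(q, n) for q in benchmark_questions))
    return [q for q in candidates if ngrams(q, n).isdisjoint(bench)]

# Toy usage: screen a candidate pool against benchmark problems.
benchmark = ["Find the number of ordered pairs of positive integers a b such that a plus b equals 100"]
pool = [
    "Compute the sum of the first ten primes.",
    "Find the number of ordered pairs of positive integers a b such that a plus b equals 100",
]
clean_pool = decontaminate(pool, benchmark)   # keeps only the first question
```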
Table 1: Math-only RL improves code reasoning, demonstrating cross-domain generalization through reinforcement learning. In contrast, math-only SFT can yield poor performance on code benchmarks.

| Models | AIME24 avg@64 | AIME25 avg@64 | LCB v5 avg@8 |
|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.0 | 37.6 |
| AceReason-Nemotron-7B (math-only RL) | 69.0 | 53.6 | 44.4 (6.8↑) |
| OpenMath-Nemotron-14B (math-only SFT) | 76.3 | 63.0 | 19.3 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 50.2 | 53.1 |
| AceReason-Nemotron-14B (math-only RL) | 78.6 | 67.4 | 58.9 (5.8↑) |

3.2.2. Training Process
RL training can be computationally intensive when it involves long CoT reasoning, with around 80% of training time spent on generating model outputs. To address this challenge, our RL pipeline focuses on enhancing reliability and efficiency through three primary strategies: 1) strict on-policy training to maintain stable training and prevent entropy collapse, 2) stage-wise length extension from 8K to 32K tokens, and 3) curriculum training using increasingly difficult prompts at later stages.
• On-policy training to stabilize entropy loss. The entropy of the policy model serves as a key metric for assessing its ability to explore during RL training. In early experiments, we found that applying multiple (2 or 4) gradient updates after model generation with a group of G rollouts per prompt led to rapid entropy collapse around 100 steps (see Figure 3c). In contrast, using exactly one gradient update after model generation, as in the original DeepSeek-Math GRPO implementation (Shao et al., 2024), consistently prevented collapse. We therefore adopted this strict on-policy approach throughout RL training.
• Length extension to accelerate training. Length extension has been shown to be effective for smaller models (e.g., the 1.5B DeepScaler (Luo et al., 2025)), but Wen et al. (2025) reported challenges in scaling to larger models, as training at an 8K response length initially led to degraded performance. In contrast, we were surprised to observe substantial performance improvements when extending training from an 8K to a 16K maximum response length. Based on this, we adopted a stage-wise length extension strategy (8K → 16K → 24K → 32K) to enable more efficient training, as directly starting from 16K or 24K resulted in suboptimal results (see Figure 3b).
• Harder problems to push the model. We used curriculum learning by introducing more difficult prompts during the 24K and 32K stages. As the model masters easier examples, their advantage reaches 0 in the GRPO objective. We therefore filtered prompts by model pass rate, removing those with a pass rate above 6/16, which significantly improves model performance (Table 3); a sketch of this filter follows below.
Math RL improves code reasoning. In Table 1, we show that performing math RL not only improves math reasoning on AIME24/25 but also boosts the LiveCodeBench v5 score to 44.4% (6.8%↑) for 7B and 58.9% (5.8%↑) for 14B, which already outperforms the very recent code RL model DeepCoder-14B (57.9%) (Luo et al., 2025). Furthermore, we show that math-only RL improves coding performance across all problem topics, not just math-related coding tasks (see Figure 4 in Section 4.3.5). This cross-domain generalization is a compelling advantage of reinforcement learning. In contrast, domain-specific supervised fine-tuning (SFT) often results in poor performance on other domains.
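A minimal sketch of the pass-rate filter referenced in the curriculum bullet above, assuming per-prompt solve counts over 16 rollouts have already been collected; names and data layout are ours.

```python
def curriculum_filter(solve_counts: dict, max_solved: int = 6):
    """Keep prompts the current policy solves at most max_solved/16 times.
    Prompts solved more often contribute near-zero group advantage in GRPO,
    so dropping them focuses rollouts on examples that still carry signal."""
    return [p for p, c in solve_counts.items() if c <= max_solved]

# Usage: counts measured by re-sampling the policy 16 times per prompt.
hard_prompts = curriculum_filter({"prob_a": 15, "prob_b": 4, "prob_c": 7})
# -> ["prob_b"]   (pass rate 4/16 <= 6/16)
```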
We used a batch size of 128, sampling G = 8 responses per prompt for 8K-length training and 16 responses otherwise. We adopted a learning rate of 1×10⁻⁶ with AdamW (Kingma, 2014), and set both the entropy loss coefficient and the KL loss coefficient β to 0.

3.3. Code-only RL
3.3.1. Data Curation
We curated our code-only RL training dataset from modern competitive programming platforms using strict selection criteria to ensure high-quality coding problems. The dataset includes both function-calling and standard input/output (stdin/stdout) formats and covers a wide range of algorithmic topics, including graph theory, data structures, number theory, greedy algorithms, and more. To ensure stability in RL training, we filtered out problems incompatible with standard output comparison (e.g., multi-solution or interactive problems requiring special judges) or those needing platform-specific templates, thereby minimizing potential false negative rewards. Furthermore, we curated strong test cases covering tricky edge cases or extreme cases under the input limits, ensuring that incorrect solutions would fail and thereby eliminating potential false positive rewards. As discussed in Sec. 4.3.6, both false positive and false negative rewards can obfuscate RL training by introducing noisy reward signals. To gauge difficulty, we evaluated each problem using DeepSeek-R1-671B with 8 rollouts, assigning a difficulty score from 0 to 8. Problems where the model failed all 8 attempts (level 8) were excluded. Finally, we performed careful benchmark decontamination and problem deduplication across platforms using n-gram context analysis and original URL matching (see App. A.4 for details). After this aggressive filtering process, 8,520 problems remained, forming our final training set.
3.3.2. Training Process
We apply a two-stage code-only RL pipeline designed to accommodate models of varying scales. The pipeline leverages training sets composed of coding problems within specific difficulty ranges, along with customized settings for the maximum response length and sampling temperature.
• Stage 1 initiates the code RL process, launching after the prior math-only RL to ensure training stability. In Stage 1, training data is constructed by difficulty: problems with difficulty up to level 5 are used for the 7B model, while problems up to level 7 are used for the 14B model. We set the maximum response length to 24,000, the temperature to 0.6, and the number of rollouts to 8 for Stage 1 training.
• Stage 2 employs the full set of training problems with a 32,768 maximum response length. In this stage, we implement an epoch-wise filtering strategy, filtering out problems that are relatively easy for the prior epoch's checkpoint, and gradually increase the sampling temperature from 0.6 to 1.0 and the number of rollouts from 8 to 16 across epochs. This aims to encourage policy convergence while still encouraging exploration.
We set the batch size to 128 and the learning rate to 5×10⁻⁶ with AdamW, continuing training in both stages until the policy converges. Regarding the reward function, we adopt a strict rule-based reward: a positive reward of 1 is granted if and only if the generated code successfully passes all test cases for the given problem. For efficient evaluation, we deploy a parallelized local verifier to check test-case correctness.
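The binary code reward can be sketched as follows. This toy verifier runs the extracted program via subprocess against stdin/stdout test cases under a time limit; it omits the process isolation and the 64-worker pool of the real local sandbox, and the helper names are ours.

```python
import re
import subprocess
import sys

def extract_code(response: str):
    """Return the last ```python ...``` block after the </think> token."""
    tail = response.split("</think>")[-1]
    blocks = re.findall(r"```python\n(.*?)```", tail, flags=re.DOTALL)
    return blocks[-1] if blocks else None

def code_reward(response: str, tests, timeout_s: float = 10.0) -> float:
    """Binary reward: 1.0 iff the program passes every (stdin, expected) test."""
    code = extract_code(response)
    if code is None:
        return 0.0
    for stdin_data, expected in tests:
        try:
            run = subprocess.run(
                [sys.executable, "-c", code], input=stdin_data,
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return 0.0                       # over the time limit: no reward
        if run.returncode != 0 or run.stdout.strip() != expected.strip():
            return 0.0
    return 1.0
```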
4. Evaluation
4.1. Experimental Setup
Our experiments start from strong SFT models, DeepSeek-R1-Distill-Qwen-7B and -14B, which are based on the Qwen2.5 model family (Yang et al., 2024) and distilled from DeepSeek-R1 (Guo et al., 2025). To ensure consistency and reproducibility, we follow the DeepSeek-R1 evaluation protocol, using a temperature of 0.6, top-p of 0.95, and a maximum output length of 32,768 tokens.
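Under vLLM, which the training stack above also uses for rollouts, this evaluation protocol corresponds to a sampling configuration like the sketch below; the model ID is the released 14B checkpoint, the prompt is a placeholder, and n=64 reflects the avg@64 protocol described next.

```python
from vllm import LLM, SamplingParams

# DeepSeek-R1 evaluation protocol: temperature 0.6, top-p 0.95, 32,768 max tokens.
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768, n=64)

llm = LLM(model="nvidia/AceReason-Nemotron-14B")
outputs = llm.generate(["<AIME problem statement here>"], params)
for completion in outputs[0].outputs:   # the 64 sampled generations
    print(completion.text[:80])
```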
4.1.1. Math Evaluation
We use a diverse set of math competition benchmarks, including AIME 2024, AIME 2025, and MATH500 (Hendrycks et al., 2021), together with HMMT 2025 Feb and BRUMO 2025 from MathArena (Balunović et al., 2025). Due to the high variance in outputs from reasoning models when using sampling, we report pass@1 performance averaged over k generations (avg@k). For small-scale benchmarks such as AIME, we use k = 64, following DeepSeek-R1.

Table 2: Math and code reasoning evaluation. We report pass@1 averaged over k generations (avg@k) following the DeepSeek-R1 evaluation framework (template, temperature = 0.6, top_p = 0.95, max response length = 32,768). By default, we report self-reported numbers from model developers where available. Otherwise, † marks models we evaluate under the same evaluation setting, and ‡ marks numbers collected from the MathArena or LiveCodeBench leaderboards.

| Models | AIME 2024 avg@64 | AIME 2025 avg@64 | MATH500 avg@4 | HMMT 2025 avg@64 | BRUMO 2025 avg@64 | LCB v5 avg@8 | LCB v6 avg@8 | Codeforces ELO pass@1 | Codeforces percentile pass@1 | EvalPlus avg@4 |
|---|---|---|---|---|---|---|---|---|---|---|
| QwQ-32B | 79.5 | 65.8‡ | 96.0 | 47.5‡ | – | 63.4 | – | 1982 | 97.7 | – |
| DeepSeek-R1-671B | 79.8 | 70.0‡ | 97.3 | 41.7‡ | 80.8‡ | 65.9 | – | 2029 | 98.1 | – |
| Llama-Nemotron-Ultra-253B | 80.8 | 72.5 | 97.0 | – | – | 66.3 | – | – | – | – |
| o3-mini (low) | 60.0 | 48.3‡ | 95.8 | 28.3‡ | 66.7† | 60.9‡ | – | 1918 | 97.1 | – |
| o3-mini (medium) | 79.6 | 76.7‡ | 97.3 | 53.3‡ | 80.0† | 67.4‡ | – | 2036 | 98.1 | – |
| AReal-boba-RL-7B | 61.9 | 48.3 | 93.8† | 29.4† | 58.9† | 34.3† | – | – | – | – |
| Skywork-OR1-Math-7B | 69.8 | 52.3 | 94.4† | 31.4† | 60.6† | 43.6 | – | – | – | – |
| OlympicCoder-7B | – | – | – | – | – | 40.7 | 37.1† | – | – | 79.8† |
| Light-R1-7B | 59.1 | 44.3 | 92.4† | 27.6† | 52.8† | 40.6† | 36.4† | – | – | – |
| Light-R1-14B | 74.0 | 60.2 | 94.6† | 37.8† | 67.1† | 57.9† | 51.5† | – | – | – |
| DeepCoder-14B (32K-Inference) | 71.0† | 56.1† | – | – | – | 57.9 | 50.4† | 1922 | 97.2 | 85.3† |
| OpenMath-Nemotron-7B | 74.8 | 61.2 | – | – | – | – | – | – | – | – |
| OpenMath-Nemotron-14B | 76.3 | 63.0 | – | – | – | – | – | – | – | – |
| OpenMath-Nemotron-32B | 76.5 | 62.5 | – | – | – | – | – | – | – | – |
| OpenCodeReasoning-Nemotron-7B | – | – | – | – | – | 51.3 | 46.1† | – | – | 83.4† |
| OpenCodeReasoning-Nemotron-14B | – | – | – | – | – | 59.4 | 54.1† | – | – | 84.1† |
| Llama-Nemotron-Nano-8B-v1 | 61.3 | 47.1 | 95.4 | – | – | 46.6 | 46.2† | – | – | 81.2† |
| Llama-Nemotron-Super-49B-v1 | 67.5 | 60.0 | 96.6 | – | – | 45.5 | – | – | – | – |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.0† | 92.8 | 26.3† | 51.2† | 37.6 | 34.1† | 1189 | 57.4 | 80.4† |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 50.2† | 93.9 | 31.7‡ | 61.1† | 53.1 | 47.9† | 1481 | 85.6 | 83.9† |
| DeepSeek-R1-Distill-Qwen-32B | 72.6 | 54.9† | 94.3 | 33.3‡ | 68.3‡ | 57.2 | – | 1691 | 93.2 | – |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | 55.0‡ | 94.5 | 33.3‡ | 66.7‡ | 57.5 | – | 1633 | 91.4 | – |
| AceReason-Nemotron-7B | 69.0 | 53.6 | 94.1 | 33.9 | 62.2 | 51.8 | 44.1 | 1475 | 84.8 | 84.6 |
| AceReason-Nemotron-14B | 78.6 | 67.4 | 95.0 | 46.4 | 72.3 | 61.1 | 54.9 | 2024 | 98.1 | 85.7 |

This choice of k is critical for obtaining a reliable evaluation, as lower values of k lead to a significantly higher standard error of the mean (e.g., on AIME 2024 at avg@16/32/64: 1.8/1.2/0.7). To isolate the effects of pre-training, we primarily compare with reasoning models based on either Qwen2.5 or Llama-3.1 at similar parameter scales. These include SFT models (distilled from much larger frontier models) such as Light-R1-7B (Wen et al., 2025), OpenMathReasoning-7/14/32B (Moshkov et al., 2025), and LLaMA-Nemotron-Nano/Super-8/49B (Bercovich et al., 2025), as well as RL models like AReal-boba-RL-7B (RL Lab, 2025), Skywork-OR1-Math-7B (He et al., 2025), and Light-R1-14B (Wen et al., 2025). For context, we also include frontier reasoning models such as DeepSeek-R1 (Guo et al., 2025), QwQ-32B (Qwen, 2025), LLaMA-Nemotron-Ultra-253B (Bercovich et al., 2025), and o3-mini (OpenAI, 2024).
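The avg@k metric and the standard-error argument above can be made explicit with a small helper; a sketch, assuming results are stored as a problems × k boolean matrix (our layout, not the paper's code):

```python
import numpy as np

def avg_at_k(results: np.ndarray):
    """results[i, j] = 1 if generation j solved problem i, else 0.
    avg@k = pass@1 averaged over the k independent generations."""
    score = results.mean()            # equals mean of per-problem solve rates
    # Treat each column as one full pass over the benchmark: k scores whose
    # spread shrinks roughly as 1/sqrt(k), which is why avg@64 is preferred
    # over avg@16 on small benchmarks like AIME.
    per_run = results.mean(axis=0)
    sem = per_run.std(ddof=1) / np.sqrt(results.shape[1])
    return score, sem

rng = np.random.default_rng(0)
toy = rng.random((30, 64)) < 0.4      # 30 problems, 64 attempts each
print(avg_at_k(toy))
```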
4.1.2. Code Evaluation
For coding tasks, we evaluate our AceReason-Nemotron models on the LiveCodeBench (Jain et al., 2024) v5 (2024-08-01 to 2025-02-01) and v6 (2025-02-01 to 2025-05-01) subsets, which contain recently released AtCoder and LeetCode problems. We also report Codeforces ELO and percentile numbers for our models based on the LiveCodeBench Pro dataset (Zheng et al., 2025), which contains Codeforces problems ranging from 2024-07 to 2024-12. We also include evaluations on the EvalPlus (Liu et al., 2023, 2024) benchmark.
We compare our model with state-of-the-art open-source code-generation LLMs of similar parameter scale, including OlympicCoder-7B (HuggingFace, 2025), Llama-3.1-Nemotron-Nano-8B-v1 (Bercovich et al., 2025), OpenCodeReasoning-7B/14B (Ahmad et al., 2025), and DeepCoder-14B (Luo et al., 2025). For further context, we also include the strong frontier reasoning models listed above.

4.2. Main Results
From the evaluation results in Table 2, we summarize the key conclusions as follows:
• RL significantly improves reasoning capabilities. Our AceReason-Nemotron-7B/14B models show that RL significantly improves over the initial SFT models (DeepSeek-R1-Distill-Qwen-7B/14B) on both math and coding tasks. Specifically, for math tasks, AceReason-Nemotron-7B achieves remarkable improvements over the SFT model, increasing accuracy by 14.5% on AIME 2024 and 14.6% on AIME 2025. For coding tasks, it achieves 14.2% and 8% accuracy improvements over DeepSeek-R1-Distill-Qwen-7B on LiveCodeBench v5 and v6, respectively. Meanwhile, AceReason-Nemotron-14B improves from 69.7%/50.2% to 78.6%/67.4% on AIME24/25, and from 53.1%/47.9% to 61.1%/54.9% on LiveCodeBench v5/v6, over the initial SFT model DeepSeek-R1-Distill-Qwen-14B, even surpassing significantly larger SFT models such as DeepSeek-R1-Distill-Qwen-32B and DeepSeek-R1-Distill-Llama-70B.
• AceReason-Nemotron vs. SOTA RL-based reasoning models. When compared with state-of-the-art open RL-based reasoning models at the same parameter scale, the AceReason-Nemotron models retain their superiority. In the math reasoning domain, AceReason-Nemotron-7B provides competitive results against strong RL-based models (Skywork-OR1-Math, Light-R1, etc.), while AceReason-Nemotron-14B provides best-in-class results. In the code generation domain, the AceReason-Nemotron models outperform all open-source reasoning models of similar parameter scale. Their math-code ability remains competitive even when compared with frontier reasoning models such as QwQ-32B and o3-mini.
• AceReason-Nemotron vs. SOTA models through distillation. AceReason-Nemotron-14B shows better math and code performance than the latest SOTA specialized distilled models, beating OpenMath-14B/32B by +2.1%/+4.4% on AIME24/25 and OpenCodeReasoning-14B by +1.7%/+0.8% on LiveCodeBench v5/v6, demonstrating that RL leads to a higher performance upper bound than distillation. In contrast, at the 7B scale, AceReason-Nemotron-7B performs competitively with OpenCodeReasoning-7B on coding tasks, while OpenMath-7B retains a bigger advantage over RL-trained AceReason-Nemotron-7B on math reasoning. In all cases, the AceReason-Nemotron models significantly outperform the general-purpose reasoning models Llama-Nemotron-8B/49B, which are trained via distillation.
This indicates that the effectiveness of distillation versus RL still depends on model size and task domain, though RL offers the potential for significantly higher accuracy at the 14B scale and beyond.

4.3. Analysis
4.3.1. Response Length Grows alongside Performance
Fig. 2 (left subfigure) and Fig. 3a illustrate the evolution of response length on the training set and the corresponding AIME24 accuracy (AIME25 in Appendix Figure 8) throughout the RL training of the AceReason-Nemotron-7B model. We analyze two distinct stages in the 8K → 16K length-extension training strategy. 1) Stage 1 (8K), transition from imitation to RL: during this stage, the model learns to compress its reasoning process to fit an 8K token limit, causing an initial drop in problem-solving capability compared to the baseline. However, after approximately 1K–2K RL steps, the model gradually recovers its performance. 2) Stage 2 (16K), accelerated learning: upon extending the token limit to 16K, the model immediately exploits the extra reasoning-token capacity: within 500 RL steps, its average response length increases from 4K to around 6.5K tokens. At the same time, AIME24 accuracy improves sharply from 55% to 62%, after which both response length and accuracy plateau. Extending the maximum response length further to 24K and 32K demonstrates a similar trend.
4.3.2. Hard Prompts Drive the Largest Gains
At the 24K response-length RL stage, we build {Easy, Medium, Hard} prompt sets based on difficulty estimated from the 7B model's performance over 16 attempts. The Easy prompt set includes all prompts except those solved more than 14 times, the Medium set excludes those solved more than 10 times, and the Hard set excludes those solved more than 6 times; a sketch of this bucketing follows below. As shown in Table 3, our ablation confirms that training with Hard prompts yields a 2.6% improvement on the AIME24 benchmark compared to the full set and outperforms training with the Easy and Medium prompts, although the Hard set contains only 2.2K prompts.

Table 3: Prompt difficulty and its impact on Stage 3 (24K) training.

| Data | AIME24 avg@64 | AIME25 avg@64 |
|---|---|---|
| Starting checkpoint (7B) | 62.2 | 50.2 |
| Full (no filtering, #49K) | 63.3 | 51.1 |
| Easy (#10K) | 64.4 | 50.8 |
| Medium (#4.6K) | 65.3 | 51.9 |
| Hard (#2.2K) | 65.9 | 52.5 |

Figure 2: Model accuracy on AIME2024 and LiveCodeBench v5 during math-only RL (left subfigure) and continued code-only RL (right subfigure). We observe a significant accuracy boost during Stage 2 (with a 16K response length) of math-only RL. For code-only RL, initializing from math-RL checkpoints provides a substantially better starting point and leads to significantly higher final accuracy on LiveCodeBench.
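As flagged above, the three prompt sets can be built from 16-attempt solve counts with nested thresholds; a sketch under that assumption (note the pools are nested: Hard ⊂ Medium ⊂ Easy):

```python
def build_prompt_sets(solve_counts: dict, thresholds=(14, 10, 6)):
    """solve_counts: prompt -> number of successes out of 16 attempts.
    Each set keeps prompts solved at most `t` times, so lower thresholds
    yield smaller, harder pools."""
    easy_t, medium_t, hard_t = thresholds
    easy   = [p for p, c in solve_counts.items() if c <= easy_t]
    medium = [p for p, c in solve_counts.items() if c <= medium_t]
    hard   = [p for p, c in solve_counts.items() if c <= hard_t]
    return easy, medium, hard

easy, medium, hard = build_prompt_sets({"p1": 16, "p2": 12, "p3": 8, "p4": 2})
# easy = ["p2", "p3", "p4"], medium = ["p3", "p4"], hard = ["p4"]
```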
4.3.3. Starting from 8K Improves Later Performance
Although training initially at an 8K maximum response length offers faster training per step than starting at 16K or 24K, the early decline in AIME24 accuracy raises questions about its necessity for 7B-sized models. Figure 3b presents an ablation study comparing three strategies: 8K → 16K → 24K, 16K → 24K, and training directly at 24K. Each strategy uses identical full data, with accuracy measured on AIME24 alongside GPU hours consumed. The results indicate that while the 16K → 24K strategy initially outperforms the baseline, the 8K → 16K strategy quickly catches up during the 16K stage and consistently outperforms the other strategies thereafter. Conversely, starting directly at 24K yields minimal performance gains and significantly longer training times, suggesting limited effectiveness during RL training for the 7B model.
4.3.4. Interplay of Math-only RL and Code-only RL
In Fig. 2 (right subfigure), we observe that initializing code RL training from math-RL checkpoints offers a substantially better starting point and results in significantly higher final accuracy on LiveCodeBench. We investigate the effects of sequential Math-RL and Code-RL training stages on developing a strong reasoning model in both the math and code reasoning domains. Starting from DeepSeek-R1-Distill-Qwen-7B/14B as our
initial models, we first apply Math-RL and then Code-RL, evaluating performance on standard math (AIME 24/25) and code (LiveCodeBench v5/v6) benchmarks. Surprisingly, as shown in Tab. 4, initial Math-RL training not only significantly improves accuracy on math benchmarks, but also improves model performance on coding benchmarks. Subsequent Code-RL training further boosts coding benchmark scores, with negligible degradation in math benchmark performance. This indicates that our proposed sequential training strategy is effective in developing models with strong reasoning abilities across both the math and coding domains.

Figure 3: Analysis of RL training: (a) response length during math-RL training, (b) GPU hours for different length-extension configurations, and (c) entropy of output logits using on-policy RL training versus off-policy training with the Clip-Higher trick.

Table 4: Interplay of math-only RL and code-only RL. Math-only RL significantly improves performance on LiveCodeBench v5 and v6. Extended iterations of code-only RL result in minimal or no degradation on AIME 2024 and 2025.

| Models | AIME 2024 avg@64 | AIME 2025 avg@64 | LCB v5 avg@8 | LCB v6 avg@8 |
|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 39.0 | 37.6 | 34.1 |
| + Math-only RL (8K → 24K) | 65.9 (10.4↑) | 52.5 (13.5↑) | 44.4 (6.8↑) | 37.6 (3.5↑) |
| + + Code-only RL (24K → 32K) | 66.9 (1.0↑) | 51.7 (0.8↓) | 51.8 (7.4↑) | 44.1 (6.5↑) |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 50.2 | 53.1 | 47.9 |
| + Math-only RL (8K → 24K) | 76.6 (6.9↑) | 63.4 (13.2↑) | 58.6 (5.5↑) | 50.9 (3.0↑) |
| + + Code-only RL (24K → 32K) | 75.7 (0.9↓) | 63.9 (0.5↑) | 61.1 (2.5↑) | 54.9 (4.0↑) |

4.3.5. Topic-wise Accuracy Analysis on LiveCodeBench v5
Given the observation that both Math-RL and Code-RL enhance code generation ability on coding benchmarks, we are interested in seeing in detail how these two RL stages improve accuracy on topic-specific coding problems. Specifically, this ablation aims to identify which problem topics benefit the most from Math-RL and the subsequent Code-RL. Motivated by this, we conducted ablation studies on the LiveCodeBench v5 dataset, which consists of coding problems from the AtCoder and LeetCode platforms. While LeetCode problems come with human-annotated topic tags (e.g., Greedy, Math, DFS), there are no tags on AtCoder problems. To address this, we first extract a set of problem tags from LeetCode. Then, for each AtCoder problem, we query the o4-mini-high model to infer candidate tags given the problem statement and the set of topics. Furthermore, we group all LiveCodeBench v5 problems by their assigned topics and evaluate model performance (avg@8 accuracy) for each topic group.
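The topic grouping described above reduces to a small aggregation; a sketch assuming each problem carries a list of topic tags and 8 per-attempt correctness flags (our data layout, not the paper's code):

```python
from collections import defaultdict

def topicwise_avg_at_8(problems):
    """problems: iterable of (topic_tags, attempt_flags) pairs, where
    attempt_flags holds 8 booleans. A problem counts toward every tag."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for tags, flags in problems:
        acc = sum(flags) / len(flags)          # per-problem avg@8
        for tag in tags:
            sums[tag] += acc
            counts[tag] += 1
    return {tag: sums[tag] / counts[tag] for tag in sums}

scores = topicwise_avg_at_8([
    (["Math", "Counting"], [1, 1, 0, 1, 0, 1, 1, 0]),
    (["Graph"],            [0, 0, 1, 0, 0, 0, 1, 0]),
])
```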
We compare the performance of our initial SFT models, DeepSeek-R1-Distill-Qwen-7B/14B, against the corresponding AceReason-Nemotron-7B/14B after applying the Math-RL stage only, as well as the final models that incorporate both Math-RL and Code-RL.

[Figure 4: two bar charts of topic-wise avg@8 accuracy on LiveCodeBench v5 for the 7B and 14B models (before RL, after Math-RL, and after Code-RL), over topics including String, Sorting, Sliding Window, Segment Tree, Prefix Sum, Number Theory, Math, Heap (Priority Queue), Hash Table, Greedy, Graph, Dynamic Programming, Counting, Combinatorics, and Binary Search.]
Figure 4: Both math-only and code-only RL enhance the performance of AceReason-Nemotron-7B and 14B across all coding problem topics. As expected, math-only RL provides greater gains on math-related coding tasks.
As shown in Fig. 4, we plot the accuracy for each topic before and after Math-RL and Code-RL. The figure shows that applying math-only RL enhances model performance across all coding problem topics, with especially strong gains in algorithmic and math-related areas such as Math, Counting, and Combinatorics, domains that rely heavily on mathematical concepts and general reasoning abilities. Furthermore, for topics like Simulation, String, and Graph, which rely more heavily on coding implementation and data-structure skills, Code-RL leads to significant further improvement.
4.3.6. False Positive and False Negative Rewards in Code RL Training
To highlight the importance of eliminating false positive rewards (incorrect code passing all tests within time constraints) and false negative rewards (incorrect test cases that fail correct code) in RL training, we conduct two ablation experiments, showing that both types of error can be harmful to RL training, resulting in early convergence on sub-optimal policies or even complete training collapse.

Figure 5: The impact of false positive and false negative rewards in Code RL training (LiveCodeBench v5 avg@8 over training steps for AceReason-7B, AceReason-7B with injected false positives, and AceReason-7B with injected false negatives).

To simulate the impact of false negative rewards, we introduce a subset of problems into the training data where either the correctness of the test cases could not be verified, or the official "golden" solution failed to pass all provided tests. From Fig. 5 we can see that the RL model tends to converge to a sub-optimal point, with no further improvement on the evaluation set. We hypothesize that such false negative rewards cause the model to discard correct algorithms for certain training problems. Consequently, when encountering test problems that share similar underlying concepts and require these discarded algorithms, the model remains unable to identify the correct solution, thereby degrading its performance.
To simulate RL training with false positive rewards, we blend into our training set problems with weak test cases that allow incorrect solutions to pass. As a notable example, we examined DeepCoder's RL training set. Despite their claims of strong test-case selection criteria, we find that their tests still cannot cover all tricky edge cases: there exist incorrect solutions that pass all their selected test cases but subsequently fail on the official test cases. After incorporating unfiltered Code-RL training data from these sources into our problem set, we observed a sudden drop in test-set accuracy midway through the training process, as shown in Fig. 5. We suspect that this performance degradation occurs because the model learns to solve some training problems without considering all corner cases or by generating brute-force solutions with incorrect time complexity. This wrongly learned policy is then penalized when evaluated against the strong test cases in the testing set.
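The two failure modes above suggest a simple curation-time audit, sketched below with a pluggable runner (e.g., a verifier like the code_reward sketch earlier): a golden solution that fails the tests signals false-negative risk, and a known-incorrect solution that passes signals false-positive risk.

```python
def audit_test_suite(golden_code, wrong_codes, tests, passes) -> str:
    """passes(code, tests) -> True iff `code` clears every test case.
    Returns a keep/drop decision for one problem's test suite."""
    if not passes(golden_code, tests):
        return "drop: false-negative risk (tests reject a correct solution)"
    if any(passes(w, tests) for w in wrong_codes):
        return "drop: false-positive risk (tests accept an incorrect solution)"
    return "keep"

# Usage with the earlier sketch:
# passes = lambda c, t: code_reward(f"</think>```python\n{c}```", t) == 1.0
```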
4.3.7. Does RL Improve pass@k or pass@1 over the SFT Model?

Recent studies (Shao et al., 2024; Yue et al., 2025) suggest that RL primarily improves pass@1 accuracy over the SFT model (DeepSeek-R1-Distilled) without significantly impacting pass@k. However, Figure 6 demonstrates that RL consistently improves the pass@k score (from k = 8 to k = 64) for both the 7B and 14B models on LiveCodeBench v5 and v6, with the improved pass@k scores maintaining a 10% margin from pass@8 to pass@64. On AIME24/25, we found that the 14B model also shows better pass@k, although the AIME25 gap for the 7B model is closed by the SFT model at pass@64.

Figure 6: The pass@k of the RL (AceReason-Nemotron) and SFT models (DeepSeek-R1-Distilled) on AIME 2024/2025 and LiveCodeBench v5/v6. | https://arxiv.org/abs/2505.16400v1 |
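For context, pass@k is usually computed with the unbiased estimator of Chen et al. [6]: generate n samples per problem, count the c samples that pass, and estimate pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal sketch in the numerically stable product form (our own illustration; the paper does not publish its evaluation code here):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: 1 - C(n-c, k) / C(n, k).

    n: generations sampled per problem, c: how many passed, k <= n.
    """
    if n - c < k:
        return 1.0  # fewer than k failures: every size-k draw contains a pass
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Dataset-level pass@8 from n=64 generations per problem:
passes_per_problem = [0, 3, 40]  # hypothetical per-problem pass counts
print(np.mean([pass_at_k(64, c, 8) for c in passes_per_problem]))
```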
4.3.8. Where does RL Improve over the SFT Model?

Figure 7 compares the problem-level accuracies of the initial 7B SFT model with AceReason-Nemotron-7B after RL on LiveCodeBench v5/v6 and AIME 2024/2025. Results for the 14B model are shown in Appendix Figure 10.

Figure 7: Problem-level solving rates of the distilled model compared with the model after RL training. Accuracy for each problem is calculated as the average over 64 attempts.

On LiveCodeBench, we observe that RL unlocks a long tail of hard coding problems that the distilled model fails to solve in 64 attempts, adding 30 and 23 additional solvable problems to LiveCodeBench v5 and v6, respectively. It also significantly improves on challenging problems where the SFT model has lower than 20% accuracy. On AIME, for the most challenging problems with a zero solve rate, RL enables the model to solve 3 more problems on AIME24. In conclusion, we find that RL not only improves accuracy on problems with a high solve rate but also extends the boundary, solving hard problems that the SFT model was initially unable to solve.

5. Conclusion

We demonstrate that large-scale reinforcement learning (RL) can substantially enhance the reasoning capabilities of strong small- and mid-sized SFT models. We propose performing RL on math-only prompts first, followed by code-only prompts. Notably, math-only RL significantly boosts performance not only on math benchmarks but also on code reasoning tasks. Crucially, subsequent code-only RL further improves code benchmark performance with minimal to no degradation in math results. To support this process, we develop a robust data curation pipeline that collects challenging prompts with high-quality, verifiable answers and test cases, enabling verification-based RL across both domains. We derive several key empirical insights from RL training. In particular, we find that RL not only elicits the foundational reasoning capabilities acquired during pretraining and supervised fine-tuning, but also pushes the limits of the model's reasoning ability, enabling it to solve previously unsolvable problems.

6. Acknowledgement

We would like to extend our gratitude to the NVIDIA NeMo team for the valuable discussion and collaboration on building reasoning models. We especially wish to thank Boris Ginsburg, Oleksii Kuchaiev, Igor Gitman, Wei Du, Somshubra Majumdar, Siddhartha Jain, Jiaqi Zeng, Yi Dong, Alexander Bukharin, Olivier Delalleau, Tugrul Konuk, Vahid Noroozi, and Jonathan Cohen.

References

[1] Wasi Uddin Ahmad, Sean Narenthiran, Somshubra Majumdar, Aleksander Ficek, Siddhartha Jain, Jocelyn Huang, Vahid Noroozi, and Boris Ginsburg. OpenCodeReasoning: Advancing data distillation for competitive coding. arXiv preprint arXiv:2504.01943, 2025.

[2] Alisson Azzolini, Hannah Brandon, Prithvijit Chattopadhyay, Huayu Chen, Jinju Chu, Yin Cui, Jenna Diamond, Yifan Ding, Francesco Ferroni, Rama Govindaraju, et al. Cosmos-Reason1: From physical common sense to embodied reasoning. arXiv preprint arXiv:2503.15558, 2025. | https://arxiv.org/abs/2505.16400v1 |
[3] Mislav Balunović, Jasper Dekoninck, Ivo Petrov, Nikola Jovanović, and Martin Vechev. MathArena: Evaluating LLMs on uncontaminated math competitions, February 2025. URL https://matharena.ai/.

[4] Akhiad Bercovich, Itay Levy, Izik Golan, Mohammad Dabbah, Ran El-Yaniv, Omri Puny, Ido Galil, Zach Moshe, Tomer Ronen, Najeeb Nabwani, et al. Llama-Nemotron: Efficient reasoning models. arXiv preprint arXiv:2505.00949, 2025.

[5] Huayu Chen, Kaiwen Zheng, Qinsheng Zhang, Ganqu Cui, Yin Cui, Haotian Ye, Tsung-Yi Lin, Ming-Yu Liu, Jun Zhu, and Haoxiang Wang. Bridging supervised learning and reinforcement learning in math reasoning. arXiv preprint, 2025.

[6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[7] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

[8] Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, et al. Omni-MATH: A universal olympiad-level mathematic benchmark for large language models. arXiv preprint arXiv:2410.07985, 2024.

[9] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

[10] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, YK Li, et al. DeepSeek-Coder: When the large language model meets programming, the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.

[11] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[12] Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. Skywork Open Reasoner series, 2025. Notion Blog.

[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. NeurIPS, 2021.

[14] HuggingFace. Open R1: A fully open reproduction of DeepSeek-R1, January 2025. URL https://github.com/huggingface/open-r1.

[15] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contamination-free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.

[16] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[17] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. | https://arxiv.org/abs/2505.16400v1 |
[18] Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. NuminaMath. https://huggingface.co/AI-MO/NuminaMath-CoT (report: https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf), 2024.

[19] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.

[20] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.

[21] Chris Yuhao Liu, Liang Zeng, Jiacai Liu, Rui Yan, Jujie He, Chaojie Wang, Shuicheng Yan, Yang Liu, and Yahui Zhou. Skywork-Reward: Bag of tricks for reward modeling in LLMs. arXiv preprint arXiv:2410.18451, 2024.

[22] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=1qvx610Cu7.

[23] Jiawei Liu, Songrun Xie, Junhao Wang, Yuxiang Wei, Yifeng Ding, and Lingming Zhang. Evaluating language models for efficient code generation. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=IBCBMeAhmC.

[24] Zihan Liu, Yang Chen, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. AceMath: Advancing frontier math reasoning with post-training and reward modeling. arXiv preprint arXiv:2412.15084, 2024.

[25] Michael Luo, Sijun Tan, Roy Huang, Xiaoxiang Shi, Rachel Xin, Colin Cai, Ameen Patel, Alpay Ariyak, Qingyang Wu, Ce Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. DeepCoder: A fully open-source 14B coder at o3-mini level, 2025. Notion Blog.

[26] Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. DeepScaleR: Surpassing O1-Preview with a 1.5B model by scaling RL, 2025. Notion Blog.

[27] Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, Wayne Xin Zhao, Zheng Liu, Zhongyuan Wang, and Ji-Rong Wen. Imitate, explore, and self-improve: A reproduction report on slow-thinking reasoning systems. arXiv preprint arXiv:2412.09413, 2024.

[28] Ivan Moshkov, Darragh Hanley, Ivan Sorokin, Shubham Toshniwal, Christof Henkel, Benedikt Schifferer, Wei Du, and Igor Gitman. AIMO-2 winning solution: Building state-of-the-art mathematical reasoning models with the OpenMathReasoning dataset. arXiv preprint arXiv:2504.16891, 2025.

[29] OpenAI. Learning to reason with LLMs, 2024.

[30] Team Qwen. Qwen3, April 2025. URL https://qwenlm.github.io/blog/qwen3/.

[31] Team Qwen. QwQ-32B: Embracing the power of reinforcement learning, March 2025. URL https://qwenlm.github.io/blog/qwq-32b/.

[32] Ant Research RL Lab. AReaL: Ant Reasoning RL. https://github.com/inclusionAI/AReaL, 2025. | https://arxiv.org/abs/2505.16400v1 |
[33] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

[34] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. | https://arxiv.org/abs/2505.16400v1 |
[35] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. HybridFlow: A flexible and efficient RLHF framework. arXiv preprint arXiv:2409.19256, 2024.

[36] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[37] Liang Wen, Yunke Cai, Fenrui Xiao, Xin He, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, et al. Light-R1: Curriculum SFT, DPO and RL for long COT from scratch and beyond. arXiv preprint arXiv:2503.10460, 2025.

[38] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

[39] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.

[40] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

[41] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? arXiv preprint arXiv:2504.13837, 2025.

[42] Huaye Zeng, Dongfu Jiang, Haozhe Wang, Ping Nie, Xiaotong Chen, and Wenhu Chen. AceCoder: Acing coder RL via automated test-case synthesis. arXiv preprint arXiv:2502.01718, 2025.

[43] Zihan Zheng, Zerui Cheng, Zeyu Shen, Shang Zhou, Kaiyuan Liu, Stanley Wei, Hansen He, Dongruixuan Li, Hangyi Hao, Jianzhu Yao, Peiyao Sheng, Zixuan Wang, Wenhao Chai, Peter Henderson, Aleksandra Korolova, Pramod Viswanath, Saining Xie, and Jingbo Shang. LiveCodeBench Pro: How olympiad medalists view LLMs in competitive programming?, 2025. URL https://livecodebenchpro.com/. Available at the LiveCodeBench Pro website.

A. Appendix

A.1. Instruction for evaluation

Math:
Please reason step by step, and put your final answer within \boxed{{}}.

No Starter Code (Python):
Write Python code to solve the problem. Please place the solution code in the following format:
```python
# Your solution code here
```

Has Starter Code:
Solve the problem starting with the provided function header.
Function header:
```
<starter_code>
```
Please place the solution code in the following format:
```python
# Your solution code here
```

A.2. Example of AceReason-Nemotron-14B response to a simple query

User: hi

Model: <think> Alright, the user wrote "hi" in the middle of the message. Probably just saying hello. I should respond in a friendly and welcoming way. Maybe ask how I can assist them today. Keep it open-ended to encourage them to share what they need help with. </think> Hello! How can I assist you today? | https://arxiv.org/abs/2505.16400v1 |
User: How many strawberries do I need to collect 9 r's?

Model: <think> First, I need to understand what the user is asking. They want to know how many strawberries are needed to collect 9 r's. I interpret r's as the letter 'r'. Each strawberry is associated with a certain number of 'r's. Looking at the word strawberry, I count the number of 'r's it contains. There are three 'r's in strawberry. If one strawberry provides three 'r's, then to get a total of nine 'r's, I divide the total needed by the number each strawberry provides. So, nine divided by three equals three. Therefore, the user needs to collect three strawberries to have nine 'r's. </think>

To determine how many strawberries you need to collect **9 "r"s**, let's break it down step by step.

### Step 1: Understand the Context
- It seems you're referring to the letter **"r"** within the word **strawberry**.

### Step 2: Count the r's in strawberry
- Let's look at the word **strawberry**:
  - **s t r a w b e r r y**
  - **r**, **r**, **r** (there are **3 r's**)

### Step 3: Calculate the Number of Strawberries Needed
- Each **strawberry** contains **3 r's**.
- To collect **9 r's**, you would need:
\[ \text{Number of strawberries} = \frac{\text{Total r's needed}}{\text{r's per strawberry}} = \frac{9}{3} = 3 \]

### Final Answer
\[ \boxed{3} \]

A.3. Additional Math-RL Analysis

In Figure 8, we show the accuracy on AIME2025 during math-only RL training. We found that the stage 2 (16K) training leads to a large performance improvement within 300 steps. In Figure 9, we plot boxplots of AIME2024 and AIME2025 accuracy for the 7B and 14B AceReason models compared to DeepSeek-R1-Distill-Qwen. We can observe that the accuracy of AceReason over 64 generations, at the 25% and 75% percentiles as well as the median, significantly improves over the distilled SFT model. Finally, in Figure 10, we inspect the problem-level solve rate of AceReason-Nemotron-14B on AIME2024 and AIME2025. On AIME24 and AIME25, the AceReason model solves one additional problem. We also observe large gains on problems with higher initial accuracy, showing the benefits of RL training.

Figure 8: Model accuracy on AIME2025 during math-only RL training. [Line plot; the stage 2 (16K) phase is annotated as an acceleration.]

Figure 9: Boxplot of AceReason vs. DeepSeek-R1-Distill on AIME24/25 over 64 generations. [Panels: (a) 7B, (b) 14B.]

Figure 10: Comparison of problem-solving rates after RL training. [Per-problem accuracy of Deepseek-R1-Distill-14b on AIME2024 and AIME2025, with improvements and degradations after RL.] | https://arxiv.org/abs/2505.16400v1 |
A.4. Code-RL Dataset Curation Details

We collect our Code-RL training data from various modern competitive programming platforms with public, strong test cases, such as AtCoder, LeetCode, and Aizu, whereas most open-sourced coding datasets (e.g., TACO, APPS) suffer from noisy problem statements, self-contamination, and weak LLM-synthesized test cases that are unreliable for RL training. To ensure data quality, we apply very strict filtering rules, removing: 1) multi-solution or interactive problems that require a special judge or other external tools; 2) problems where images within the statement obscure a clear understanding; 3) problems containing incorrect test cases or lacking golden solutions; and 4) problems with weak test cases that allow incorrect solutions to pass all tests. Furthermore, to prevent self-contamination within our collected problem set, we conduct strict problem-statement and source-URL matching. To avoid any potential contamination of our test set, we exclude all problems released after 2024-08-01 and apply n-gram matching (with n = 14) against our test-set problem statements.

To prepare for subsequent curriculum training for Code-RL, we estimate the relative difficulty of the collected problems. We deploy the local DeepSeek-R1-671B model, allow it to generate 8 attempts per problem, and assign a corresponding difficulty score ranging from 0 to 8. Problems that the DeepSeek-R1-671B model fails to solve in all 8 attempts are excluded from the training set. Following this aggressive filtering process, 8,520 problems remain, forming our final Code-RL training set. | https://arxiv.org/abs/2505.16400v1 |
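As an illustration of the decontamination step, here is a minimal sketch of the 14-gram matching described above; whitespace tokenization and lowercasing are our own assumptions:

```python
def ngrams(text: str, n: int = 14) -> set[tuple[str, ...]]:
    """Whitespace-tokenized n-grams of a problem statement (lowercased)."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(train_stmt: str, test_stmts: list[str], n: int = 14) -> bool:
    """Flag a training problem that shares any n-gram with any test statement."""
    train_grams = ngrams(train_stmt, n)
    return any(train_grams & ngrams(t, n) for t in test_stmts)
```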
arXiv:2505.16406v1 [cs.CL] 22 May 2025

On the reliability of feature attribution methods for speech classification

Gaofei Shen1, Hosein Mohebbi1, Arianna Bisazza2, Afra Alishahi1, Grzegorz Chrupała1
1Tilburg University, The Netherlands  2University of Groningen, The Netherlands
{g.shen, h.mohebbi, a.alishahi}@tilburguniversity.edu, a.bisazza@rug.nl, grzegorz@chrupala.me

Abstract

As the capabilities of large-scale pre-trained models evolve, understanding the determinants of their outputs becomes more important. Feature attribution aims to reveal which parts of the input elements contribute the most to model outputs. In speech processing, the unique characteristics of the input signal make the application of feature attribution methods challenging. We study how factors such as input type and aggregation and perturbation timespan impact the reliability of standard feature attribution methods, and how these factors interact with the characteristics of each classification task. We find that standard approaches to feature attribution are generally unreliable when applied to the speech domain, with the exception of word-aligned perturbation methods when applied to word-based classification tasks.1

Index Terms: speech processing, interpretability, feature attribution

1. Introduction

Large-scale self-supervised models such as wav2vec2 [1] and HuBERT [2] have shown impressive performance on various downstream speech processing tasks, from automatic speech recognition to audio classification. As transformer models [3] have been increasingly adopted in speech processing, interpretability research for these models has also intensified.

An important research domain within interpretability is feature attribution, which aims to quantify the contribution of different parts of the model input to its output. A variety of approaches to feature attribution have been studied extensively for the domains of computer vision (CV) and natural language processing (NLP) models. A more limited body of work also exists for the domain of spoken language [4, 5, 6, 7, 8, 9, 10, 11].

A key challenge in research on feature attribution is the evaluation of the methods. Following the long-standing practice in research methods and psychometrics, we can distinguish two key concepts used to quantify the quality of a measurement method: reliability and validity. Validity refers to how well the method measures the quantity of interest. For feature attribution it largely overlaps with the concept of faithfulness: does the attribution highlight the features that in reality are the most important determinants of the model's output? Reliability, on the other hand, focuses on the consistency of a measurement, and answers the question: does the measurement give the same answer when repeated under similar conditions? For a measurement to be useful, it needs to score well on both of these dimensions. In this work we focus on evaluating the reliability of commonly used feature attribution methods as applied to speech models. This aspect of evaluation is often neglected in prior research, but it is crucial to ensure that the whole endeavor of attributing model outputs to inputs rests on a solid foundation. Only when methods are shown to be reliable can we then focus on the question of their validity. We believe this foundational work is especially needed in the speech domain.

1Code: https://github.com/techsword/reliability-speech-feat-attr | https://arxiv.org/abs/2505.16406v1 |
In contrast to the relatively intuitive saliency maps for CV or token-based attribution scores for NLP, the continuous and high-resolution nature of the speech signal means that naive application of basic attribution methods leads to noisy and hard-to-interpret results. The choice of feature attribution method for a particular task matters, as do the conditions under which the method is applied. In this paper, we focus on four attribution methods and investigate two main aspects that might affect their reliability: the input type used for the feature attribution analysis, and the choice of the attribution granularity, i.e., the timespan of aggregation or perturbation of the input for a given attribution method.

Input type. Convolutional neural network (CNN) speech classification models can use either spectrograms or waveforms as input [5]. Meanwhile, self-supervised transformer-based models like wav2vec2 [1] and HuBERT [2] operate in an end-to-end fashion and use the waveform as input, with a CNN block serving as the feature extractor. More traditional audio features such as the log-Mel spectrogram are still used by the popular Whisper model [12]. It is important to note that a spectrogram (a time-frequency representation) and a raw waveform are two different feature representations of the same input signal: we can convert a spectrogram to a waveform and vice versa. Thus, to a large extent we can decouple the input type from the specific target model. A more model-specific option is to attribute to the output of the CNN feature extractor block of models like wav2vec2 and HuBERT. Following common practice in NLP, we call the output of the CNN feature extractor the CNN embedding. We can think of the CNN embedding as an even higher-level representation of the input signal than the typical time-frequency representation. Thus, for the most common models of interest we need to decide which input type is the most appropriate to use for feature attribution.

Attribution granularity. While both gradient-based and perturbation-based methods have been tested on speech models, the interpretation of the attribution results differs. Gradient-based methods assign a score to every input value. The standard 16 kHz sampling rate for wav2vec2 models means there are tens of thousands of attribution values for mere seconds of speech. These scores can then be aggregated over longer timespans for ease of interpretation, but that happens as a post-processing step. Perturbation-based methods, on the other hand, can be applied in a top-down manner by directly perturbing larger chunks of the input signal and observing the change in the model's output. Thus, the choice of attribution method is coupled with the timespan of aggregation or perturbation.

In this work we investigate four feature attribution methods and quantify the impact of the choices regarding input type and granularity on their reliability. We apply these methods to speech classification models trained on three different tasks (one of which comprises three related subtasks). In order to quantify reliability, we use feature attribution agreement between pairs of separate training runs trained on the same data and applied to the same test input; we name this reliability score inter-(random)-seed agreement (ISA). | https://arxiv.org/abs/2505.16406v1 |
Our experiments show that even though the target classifier models learn the tasks and agree on the vast majority of the inputs, the attribution agreement is generally quite low, and acceptable levels of reliability are only reached in very few specific conditions. Our findings highlight the inadequacy of standard approaches to feature attribution as applied to the speech domain, and underline the need for the development and careful evaluation of appropriate speech-specific attribution methods.

2. Related Work

Feature attribution has evolved along with advances in machine learning, with methods originally developed to visualize salient features in computer vision [13, 14] and adapted further for natural language processing [15, 16, 17]. Unlike in speech processing, the limitations of feature attribution methods have been carefully studied in vision and language models from both reliability and validity perspectives [18, 19, 20, 21, 22]. For example, gradient-based methods were demonstrated to be independent of the parameters of later layers [21], while [22] argues that complete and linear attribution methods (such as Integrated Gradients) may perform no better than random guessing when identifying how models depend on features. By computing rank correlation, [20, 19] show that feature attribution methods (even those within the same family) often disagree in the explanation scores they produce. Unlike these approaches, we do not consider agreement between different attribution methods, but rather focus on the core issue of the reliability of a single method under repeated measurement: the consistency of a single attribution method applied to several randomly initialized fine-tuning runs of the same model architecture. Regarding the effect of different configuration details (e.g., aggregation level), our work is related to [23, 24] for textual data.

Applications and adaptations of attribution methods in the speech domain have been explored primarily with convolutional neural network (CNN) based models [4, 5, 25], given their architectural similarities with computer vision models. For example, [5] used Layerwise Relevance Propagation (LRP) to explain a CNN model trained on either waveform or spectrogram representations of audio signals. In contrast to their approach, where models trained on two different types of input data are compared, we examine the effect of different input types within a single model.

For speech classification, several studies have started to examine the validity of feature attribution methods. For example, [7] applied LIME [26] to a phoneme recognition task using the TIMIT [27] dataset, which provides manual labeling and segmentation at the phoneme level. They found that restricting input audio perturbations to a limited window around the phoneme of interest can improve the validity of LIME. Similarly, [8, 11] showed that discretizing attribution scores through phoneme- and word-level boundaries leads to more interpretable explanations for classification tasks. Despite these studies on validity, the reliability of feature attribution for speech models remains largely unexplored.

Table 1: Agreement in model performance across different runs, measured using three metrics: Accuracy, Overall and Error Fleiss' κ on the test sets.

| Task | Accuracy | Overall Fleiss' κ | Error Fleiss' κ |
|---|---|---|---|
| Gender ID | 0.999 | 0.999 | 0.356 |
| Speaker ID | 0.990 | 0.983 | 0.677 |
| Intent Class. | 0.998 | – | – |
| Action | 0.997 | 0.997 | 0.520 |
| Object | 0.999 | 0.999 | 0.602 |
| Location | 0.999 | 0.999 | 0.591 |

3. Methods

For speech classification, pre-trained models are typically paired with a lightweight feedforward neural network as a classification head and fine-tuned using labeled data. During fine-tuning, both the backbone model and the classifier adjust their weights to emphasize the most relevant input features and learned representations, maximizing classification accuracy. We therefore assume that models starting from the same pre-trained checkpoint but fine-tuned with different random seeds identify and employ comparable relevant features for a given input, especially when they achieve consistently high accuracy. Accordingly, a reliable feature attribution method should consistently show the same pattern in highlighting the most important input features for a given utterance across such models.

3.1. Classification models

In our experiments, we use the wav2vec2-base2 model and fine-tune it with nine different seeds for three different speech classification tasks: gender and speaker identification (Gender ID and Speaker ID, respectively), and intent classification (IC). We expect the IC task to rely mostly on the presence of specific lexical items, while Gender ID and Speaker ID should rely mostly on lower-level acoustic features.

For Gender ID and Speaker ID, we use a subset of the Common Voice dataset [28]. We select 40 speakers with self-reported gender labels of masculine or feminine, with 301 utterances per speaker, totalling 12,040 utterances. A stratified 80:20 train-test split was applied before model fine-tuning. For intent classification we use the Fluent Speech Commands dataset [29] with the provided train-test split. We resample the waveforms in both datasets to 16 kHz. The IC task comprises three related classification subtasks: Action, Object, and Location. We use a separate classification head for each subtask.

During fine-tuning, we freeze the CNN feature extractor and the projection layers and only update the weights of the transformer network and the final classification heads. This makes sure that models with different seeds receive exactly the same input, and allows us to have a fixed CNN embedding across the different models investigated in this paper.

In order to lend further credibility to our assumption about the equivalency of the models, we first evaluate the classification accuracy as well as the classification agreement of the different fine-tuning runs, shown in Table 1. If the models' architecture as well as behavior is similar, we have more reason to believe that their internal computations are also equivalent. To measure the agreement between the model runs initialized with different random seeds, we report two separate versions of Fleiss' κ. The overall Fleiss' κ measures the agreement between all runs on the complete test set. The Error Fleiss' κ measures the agreement between model decisions on the subset of the test data where at least one error was made. Fleiss' κ = 0 if the agreement between runs is due to chance, and κ = 1 if the runs agree completely. We can see that the classification accuracy is near perfect for all tasks and the overall agreement is also very high. Agreement for the small subset of data points where an error was made is moderate. Given the small percentage of overall errors, we believe this means the behavior of the models we investigate in this paper is sufficiently similar.

2https://huggingface.co/facebook/wav2vec2-base | https://arxiv.org/abs/2505.16406v1 |
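A minimal sketch of this fine-tuning setup with Hugging Face Transformers, assuming the standard Wav2Vec2ForSequenceClassification head (the paper's exact classification head and training loop may differ):

```python
import torch
from transformers import Wav2Vec2ForSequenceClassification

torch.manual_seed(0)  # one of the nine fine-tuning seeds
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=2  # e.g. Gender ID
)

# Freeze the CNN feature extractor so every seed sees the same CNN embedding.
model.freeze_feature_encoder()
# Also freeze the feature projection; only the transformer layers and the
# classification head remain trainable.
for p in model.wav2vec2.feature_projection.parameters():
    p.requires_grad = False
```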
3.2. ISA reliability metric

To measure the reliability of feature attribution methods, we examine the consistency of their scores derived from models fine-tuned with different seeds. Specifically, we calculate the inter-seed agreement (ISA) metric based on a dynamic top-p measure:

$$\mathrm{ISA} = \frac{1}{N}\sum_{n=1}^{N}\frac{\left|\text{top-}p(A_i)_n \cap \text{top-}p(A_j)_n\right|}{\left|\text{top-}p(A_i)_n\right|} \tag{1}$$

Here $p$ is the percentage of the top indices of the attribution scores we are interested in, $A_i$ and $A_j$ are the attribution scores for the $i$-th and $j$-th model runs, and $N$ is the number of samples in the dataset. The top-$p$ function returns the top $p$ percent of indices of the attribution scores. The value of $p$ is fixed at 20%. The intersection of the top-$p$ indices is calculated for each sample in the dataset. The ISA score is the average of the pairwise percentage of shared top-$p$ attribution indices over all combinations of random-seed pairs. The higher the ISA score, the more the attribution scores of the models agree with each other. A baseline ISA can be obtained by randomly shuffling the attribution scores for each sample before calculating the ISA score.

We use the Captum library [30] to calculate feature attribution scores for the fine-tuned models on their respective datasets. We test two gradient-based methods, Saliency and Integrated Gradients (IG), and two perturbation-based methods, LIME and Feature Ablation (FA). For the perturbation-based methods, we use a feature mask to group the waveform input into 10 ms spans; this is done due to computational constraints, as the highest tractable resolution for perturbing the waveform. | https://arxiv.org/abs/2505.16406v1 |
3.3. Feature attribution conditions

Input feature types. We calculate the ISA metric for each of our three input feature types: waveform, spectrogram, and CNN embeddings. In order to enable attribution to the spectrogram for self-supervised models like wav2vec2 and HuBERT (which use the raw waveform as their native input), we follow [10] by prepending an inverse short-time Fourier transform (ISTFT) to the model. We use a hop length of 320 for the STFT and ISTFT transformations to keep the time resolution at 20 ms, consistent with the wav2vec2 feature extractor. For Integrated Gradients, LIME, and Feature Ablation, a baseline input is needed. We use silence as our baseline: for the waveform input type we use the silence waveform directly, while for the other two input types we first convert the silence waveform into the corresponding spectrogram or CNN embedding. All of the attribution methods tested return both positive and negative values; we do no additional processing of the attribution scores before aggregation.

Granularity of aggregation. We test three granularities of aggregation: no aggregation, frame-level aggregation, and word-level aggregation. For frame-level aggregation, we sum the attribution scores for the raw waveform input at 20 ms intervals; spectrogram and CNN embedding inputs are summed for every frame. For word-level aggregation, we aggregate the attribution scores at the word level using forced-alignment time stamps. We use the Montreal Forced Aligner [31] to align our datasets with the provided transcriptions. To take varying word lengths into account, we mean-pool the attribution scores for each word. We also discard the non-word segments in the alignment.

Granularity of perturbation. For the perturbation methods only, the alternative to aggregation is to directly perturb specific timespans of the input. We test the effect of directly perturbing word-level segments of the input, based on the same forced-alignment time stamps as above. | https://arxiv.org/abs/2505.16406v1 |
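As an illustration of the perturbation conditions, the sketch below uses Captum's FeatureAblation on a raw waveform with a feature mask that groups samples into 10 ms units (160 samples at 16 kHz) and a silence baseline; the wrapper and mask construction are our own assumptions, and a word-level mask built from forced-alignment time stamps would give the word-aligned condition:

```python
import torch
from captum.attr import FeatureAblation

def span_mask(num_samples: int, span: int = 160) -> torch.Tensor:
    """Group a 16 kHz waveform into 10 ms perturbation units (160 samples)."""
    return (torch.arange(num_samples) // span).unsqueeze(0)

def attribute_waveform(forward_fn, waveform: torch.Tensor, target: int):
    """Perturbation-based attribution over 10 ms spans of one waveform.

    `forward_fn` maps a (1, T) waveform batch to class logits.
    """
    fa = FeatureAblation(forward_fn)
    silence = torch.zeros_like(waveform)  # silence baseline, as above
    return fa.attribute(
        waveform, baselines=silence, target=target,
        feature_mask=span_mask(waveform.shape[-1]),
    )
```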
4. Results

We organize the results into groups of comparisons. Within each group, we present the effects of the varied conditions in applying feature attribution methods. We visualize the central tendency (median) of the inter-seed agreement (ISA) scores, as well as the spread around it, via boxplots. An individual boxplot displays a set of 36 pairwise comparisons. We then visualize the median baseline score of randomly shuffled attribution scores for all pairwise comparisons via the dashed red lines.

4.1. Effects of input feature types

Figure 1 shows the ISA scores for attribution scores in the no-aggregation condition. For most combinations of method and input type the ISA scores are low to moderate (below 0.6). The exception is the Gender ID task with the embedding input type and the Integrated Gradients method, which shows a median of around 0.7, but with a wide spread around it. We also note that the effect of the attribution method is in general larger than that of the input type. Notably, in most cases LIME shows very low reliability, barely above the baseline. This indicates that LIME is not a suitable attribution method in a high time-resolution setting. At the same time, Integrated Gradients generally shows the highest reliability.

Figure 1: Distributions of ISA scores without aggregation. The rows indicate different input feature types, the columns different tasks. Within each panel, each boxplot shows results from different attribution methods and the y-axis is the ISA score. The red dotted line indicates the randomly shuffled baseline. IG: Integrated Gradients, FA: Feature Ablation.

4.2. Effects of granularity of aggregation

To highlight the impact of granularity of aggregation, we plot the ISA scores of only the CNN embedding input type at various levels of aggregation in Fig. 2. We can see that different aggregation levels do not alter the general reliability patterns we saw in Fig. 1. As before, we generally see the best reliability for Integrated Gradients. For word-level attribution on the IC tasks we would expect a higher level of agreement, and yet we see that ISA scores do not surpass 0.6 for any of the condition combinations. This is surprising, as intuitively one would expect that aggregation should smooth out small variations in attribution scores and increase agreement.

Figure 2: Distributions of ISA scores for the CNN embedding input type, at different levels of aggregation. The rows are levels of granularity of aggregation, the columns different tasks. Within each panel, each boxplot shows results from different attribution methods and the y-axis is the ISA score. The red dotted line indicates the randomly shuffled baseline. IG: Integrated Gradients, FA: Feature Ablation.

4.3. Perturbations on the word level

Lastly, we also evaluate the reliability of perturbation methods applied to word-aligned input segments, in a setting analogous to that described in [11]. Figure 3 shows ISA scores for attribution scores generated with perturbation-based methods operating directly on the word level. Here we see the Intent Classification tasks showing much higher ISA than in Fig. 2. The reliability of perturbation-based attribution methods on word-based tasks is much higher when perturbing word-aligned segments than when aggregating scores obtained from high-resolution features. We also observe that one of the IC tasks, Location, shows lower reliability than the other two. This perhaps reflects the degree to which these different subtasks can rely on redundant lexical cues. Interestingly, we see less variation in the score pattern across different input feature types in Fig. 3 than in Fig. 1. This shows that perturbations done at word-level granularity are less sensitive to the differences between input feature types.

Figure 3: Distributions of ISA scores with perturbation operating directly on word-aligned segments. The rows indicate different input feature types, the columns different tasks. Within each panel, each boxplot reports results from different attribution methods and the y-axis is the ISA score. The red dotted line indicates the randomly shuffled baseline. FA: Feature Ablation.

5. Discussion & Conclusion

Our findings show that the naive application of standard feature attribution methods to speech classification models generally leads to poor reliability. When attributing to high-resolution input, regardless of the specific input type (waveform, spectrogram, or embeddings), even the most reliable of our methods, Integrated Gradients, does not surpass 50% inter-seed agreement for most tasks. Simply aggregating these scores does not improve reliability. The likely underlying issue is that the gradients or perturbation effects of such high-resolution, highly correlated, and redundant features are very small and noisy.

Only in the case of directly perturbing word-aligned segments of the input, and only for the intent classification subtasks, do we see acceptable reliabilities. The likely explanation is that classification decisions for these tasks rely on specific words in the utterance, and that directly perturbing those specific words, and only those words, affects model output and thus attribution scores. There is thus little scope for models to disagree. On the other hand, tasks such as Gender ID and Speaker ID are unlikely to rely on specific words, and models may use redundant cues distributed over the whole utterance: thus, for these tasks we do not find a consistently reliable combination of method and input type to attribute to. Our findings suggest that in order for standard attribution methods to be applicable, the target speech classification task needs to be similar in nature to an equivalent text-based task, where token-based attributions are standard. Ideally, however, we would like feature attribution to be more widely applicable across varied speech tasks. | https://arxiv.org/abs/2505.16406v1 |
5.1. Limitations and future directions

While the scope of this paper is limited to assessing reliability, the methods found to be reliable will also need to be evaluated for validity. While previous works have used cross-method agreement as a proxy for validity, we believe that a more direct measure of alignment with the target model will be needed. Our work focused on reliability for attribution to features along the time dimension. The other important axis for audio data is the frequency domain: for certain tasks it may be more useful to attribute to frequency-based features. Additionally, for audio data generally, and for speech specifically, certain high-level features such as loudness, pitch contours, or specific aspects of timbre may also be interesting targets for attribution: for these cases, standard attribution techniques such as those evaluated here are not directly applicable. Ultimately, spoken language may be sufficiently different from image or text data that only feature attribution techniques tailored to the speech domain will prove reliable enough to be useful.

6. Acknowledgements

This publication is part of the project InDeep: Interpreting Deep Learning Models for Text and Sound (project number NWA.1292.19.399) of the National Research Agenda (NWA-ORC) program.

7. References

[1] A. Baevski, H. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," 2020.
[2] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, "HuBERT: Self-supervised speech representation learning by masked prediction of hidden units," Jun. 2021.
[3] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Dec. 2017.
[4] A. Prasad and P. Jyothi, "How accents confound: Probing for accent information in end-to-end speech recognition systems," in ACL 2020, D. Jurafsky, J. Chai, N. Schluter, and J. Tetreault, Eds. Online: ACL, Jul. 2020, pp. 3739–3753.
[5] S. Becker, J. Vielhaben, M. Ackermann, K.-R. Müller, S. Lapuschkin, and W. Samek, "AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark," Journal of the Franklin Institute, vol. 361, no. 1, pp. 418–428, Jan. 2024.
[6] X. Wu, P. Bell, and A. Rajan, "Explanations for automatic speech recognition," in ICASSP 2023. Rhodes Island, Greece: IEEE, Jun. 2023, pp. 1–5.
[7] X. Wu, P. Bell, and A. Rajan, "Can we trust explainable AI methods on ASR? An evaluation on phoneme recognition," in ICASSP 2024, Apr. 2024, pp. 10296–10300.
[8] S. Gupta, M. Ravanelli, P. Germain, and C. Subakan, "Phoneme discretized saliency maps for explainable detection of AI-generated voice," Sep. 2024.
[9] D. Fucci, M. Gaido, B. Savoldi, M. Negri, M. Cettolo, and L. Bentivogli, "SPES: Spectrogram perturbation for explainable speech-to-text generation," Nov. 2024.
[10] E. Mancini, F. Paissan, P. Torroni, M. Ravanelli, and C. Subakan, "Investigating the effectiveness of explainability methods in Parkinson's detection from speech," Nov. 2024.
[11] E. Pastor, A. Koudounas, G. Attanasio, D. Hovy, and E. Baralis, "Explaining speech classification models via word-level audio segments and paralinguistic features," in EACL 2024, Y. Graham and M. Purver, Eds., Mar. 2024. | https://arxiv.org/abs/2505.16406v1 |
[12] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in Proceedings of the 40th ICML. PMLR, 2023. [Online]. Available: https://proceedings.mlr.press/v202/radford23a.html
[13] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in Computer Vision – ECCV 2014. Cham: Springer International Publishing, 2014.
[14] K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep inside convolutional networks: Visualising image classification models and saliency maps," in ICLR 2014, Workshop Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2014. [Online]. Available: http://arxiv.org/abs/1312.6034
[15] I. Covert, S. M. Lundberg, and S.-I. Lee, "Explaining by removing: A unified framework for model explanation," J. Mach. Learn. Res., vol. 22, pp. 209:1–209:90, 2020.
[16] J. Bastings and K. Filippova, "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?" in BlackboxNLP 2020, A. Alishahi, Y. Belinkov, G. Chrupała, D. Hupkes, Y. Pinter, and H. Sajjad, Eds. Online: ACL, Nov. 2020, pp. 149–155.
[17] H. Mohebbi, A. Modarressi, and M. T. Pilehvar, "Exploring the role of BERT token representations to explain sentence probing results," in EMNLP 2021, M.-F. Moens, X. Huang, L. Specia, and S. W.-t. Yih, Eds. Online and Punta Cana, Dominican Republic: ACL, Nov. 2021, pp. 792–806.
[18] D. Pruthi, R. Bansal, B. Dhingra, L. Baldini Soares, M. Collins, Z. C. Lipton, G. Neubig, and W. W. Cohen, "Evaluating explanations: How much do explanations from the teacher aid students?" TACL, vol. 10, 2022.
[19] S. Krishna, T. Han, A. Gu, S. Wu, S. Jabbari, and H. Lakkaraju, "The disagreement problem in explainable machine learning: A practitioner's perspective," TMLR, 2024.
[20] M. Neely, S. F. Schouten, M. J. R. Bleeker, and A. Lucic, "A song of (dis)agreement: Evaluating the evaluation of explainable artificial intelligence in natural language processing," in HHAI, 2022.
[21] L. Sixt, M. Granz, and T. Landgraf, "When explanations lie: Why many modified BP attributions fail," in ICML, 2019.
[22] B. Bilodeau, N. Jaques, P. W. Koh, and B. Kim, "Impossibility theorems for feature attribution," Proceedings of the National Academy of Sciences of the United States of America, vol. 121, 2022.
[23] J. Bastings, S. Ebert, P. Zablotskaia, A. Sandholm, and K. Filippova, ""Will you find these shortcuts?" A protocol for evaluating the faithfulness of input salience methods for text classification," in EMNLP 2022, Y. Goldberg, Z. Kozareva, and Y. Zhang, Eds. ACL, Dec. 2022.
[24] Y. Chen, K. Zhang, F. Hu, X. Wang, R. Li, and Q. Liu, "Dynamic multi-granularity attribution network for aspect-based sentiment analysis," in EMNLP, Y. Al-Onaizan, M. Bansal, and Y.-N. Chen, Eds. Miami, Florida, USA: ACL, 2024.
[25] H. Muckenhirn, V. Abrol, M. Magimai-Doss, and S. Marcel, "Understanding and visualizing raw waveform-based CNNs," in Interspeech 2019. ISCA, Sep. 2019.
[26] M. T. Ribeiro, S. Singh, and C. Guestrin, ""Why should I trust you?": Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD ICKDDM, 2016. | https://arxiv.org/abs/2505.16406v1 |
[27] J. S. Garofolo, L. F. Lamel, W. M. Fisher, D. S. Pallett, N. L. Dahlgren, V. Zue, and J. G. Fiscus, "TIMIT Acoustic-Phonetic Continuous Speech Corpus," 1993.
[28] R. Ardila, M. Branson, K. Davis, M. Kohler, J. Meyer, M. Henretty, R. Morais, L. Saunders, F. Tyers, and G. Weber, "Common Voice: A massively-multilingual speech corpus," in Proceedings of the 12th LREC. ELRA, May 2020. [Online]. Available: https://aclanthology.org/2020.lrec-1.520/
[29] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, "Speech model pre-training for end-to-end spoken language understanding," Jul. 2019.
[30] N. Kokhlikyan, V. Miglani, M. Martin, E. Wang, B. Alsallakh, J. Reynolds, A. Melnikov, N. Kliushkina, C. Araya, S. Yan, and O. Reblitz-Richardson, "Captum: A unified and generic model interpretability library for PyTorch," 2020.
[31] M. McAuliffe, M. Socolof, S. Mihuc, M. Wagner, and M. Sonderegger, "Montreal Forced Aligner: Trainable text-speech alignment using Kaldi," in Interspeech 2017. ISCA, Aug. 2017, pp. 498–502. | https://arxiv.org/abs/2505.16406v1 |
arXiv:2505.16408v1 [cs.CL] 22 May 2025

From Surveys to Narratives: Rethinking Cultural Value Adaptation in LLMs

M. Farid Adilazuarda1, Chen Cecilia Liu2, Iryna Gurevych2, Alham Fikri Aji1
1MBZUAI  2UKP Lab, TU Darmstadt

Abstract

Adapting cultural values in Large Language Models (LLMs) presents significant challenges, particularly due to biases and limited training data. Prior work primarily aligns LLMs with different cultural values using World Values Survey (WVS) data. However, it remains unclear whether this approach effectively captures cultural nuances or produces distinct cultural representations for various downstream tasks. In this paper, we systematically investigate WVS-based training for cultural value adaptation and find that relying solely on survey data can homogenize cultural norms and interfere with factual knowledge. To investigate these issues, we augment WVS with encyclopedic and scenario-based cultural narratives from Wikipedia and NormAd. While these narratives may have variable effects on downstream tasks, they consistently improve cultural distinctiveness more than survey data alone. Our work highlights the inherent complexity of aligning cultural values with the goal of guiding task-specific behavior.

1. Introduction

Recent research on Large Language Models (LLMs) suggests that, without adaptation, LLMs align closely with the cultural values of Western, Educated, Industrialized, Rich, and Democratic (WEIRD, Henrich et al. 2010) societies (Johnson et al., 2022; Ramezani and Xu, 2023; Cao et al., 2023, among others). This WEIRD-centric bias can harm specific groups and limit the model's usefulness to a diverse global audience. Indeed, culture is a distinct and vital aspect of human society, influencing behavior, norms, and worldviews (Geertz, 2017). However, current research lacks robust mechanisms to adapt LLM outputs in ways that reflect different cultural value systems (i.e., to culturally adapt LLMs).1

Figure 1: UMAP-KDE visualization of cultural value distributions from WVS data reveals significant homogenization. While Arabic (lower right) and Chinese (left) cultures form distinct clusters, many others converge in the upper right. This suggests that current WVS-based training may be insufficient to capture cultural nuances. [Scatter plot over two UMAP dimensions with KDE contours; culture legend: Arabic, Bengali, Chinese, English, German, Greek, Korean, Portuguese, Spanish, Turkish.]

Existing work often adapts LLMs to cultural values by leveraging self-reported survey data (Li et al., 2024a; Xu et al., 2024; Li et al., 2024b) such as the World Values Survey (WVS, Haerpfer et al. 2022). Although WVS offers a quantitative glimpse into cultural attitudes (e.g., "How important is family in your life?" on a scale from 1 to 4), it remains unclear how best to translate these numeric indications into concrete behavior on downstream tasks (e.g., classification of offensiveness in different linguistic-cultural settings). Beyond survey responses on values and opinions, culture also includes social norms, historical contexts, and nuanced beliefs (Liu et al., 2024) that may not be fully captured through self-reported questionnaires. As shown in Figure 1, even WVS data for distinct cultures may converge into overlapping clusters in latent space (showing semantic similarities), potentially homogenizing nuanced cultural dimensions.

1For this paper, we focus on "culture" at a linguistic-regional level (e.g., Iraq and Jordan represent Arab culture vs. Argentina and Mexico that represent Spanish culture), but we acknowledge that culture is more nuanced, including sub-cultures within a group and intersectional factors such as ethnicity and religion (Adilazuarda et al., 2024). | https://arxiv.org/abs/2505.16408v1 |
Ideally, cultural value adaptation should also enhance downstream tasks within each culture. However, several challenges emerge. First, adapting multiple cultural values may create interference similar to that seen in multilingual models (Conneau et al., 2020; Wang et al., 2020), given language-culture interconnections (Adilazuarda et al., 2024; Hershcovich et al., 2022; Hovy and Yang, 2021). Second, the reliability of cultural value training data is uncertain. Studies show discrepancies between attitudes and actual behavior in humans (Gross and Niman, 1975; Fazio, 1981), raising concerns about the WVS's ability to accurately reflect cultural behavior for LLM training and necessitating further investigation.

In this work, we tackle these challenges through a critical evaluation of current cultural value adaptation methods. Through a series of experiments, we reveal key limitations of using WVS as training data: while WVS provides insights into cultural values, it lacks the contextual depth needed to inform value-driven behavior in downstream tasks. Given these limitations of survey data, we investigate whether augmenting WVS with richer narrative sources like encyclopedic descriptions (Wikipedia) and scenario-based norms (NormAd; Rao et al. 2024) yields more culturally distinct and effective LLM adaptations. We assess the impact on preserving cultural distinctiveness, downstream task performance, and factual knowledge.

To summarize, our contributions are: 1) We identify cultural interference in adaptation using WVS, showing that it improves tasks like offensiveness classification but tends to homogenize cultural behaviors rather than preserve their differences. 2) We demonstrate knowledge interference from adaptation, where adaptation can degrade factual knowledge understanding. 3) We investigate the impact of augmenting WVS with context-rich narratives (Wikipedia, NormAd) and find that, while their effects on downstream tasks can vary, they help preserve cultural distinctiveness. Our analysis showcases the challenge of aligning cultural values to influence task-specific behavior and highlights the need for further research into which types of data best support effective cultural adaptation.

2. Methodology

We systematically designed experiments to investigate our research question. This section details our methodologies for cultural adaptation and performance evaluation. We begin with zero-shot prompting, followed by single-culture adapter fine-tuning, and conclude with an analysis of cross-cultural interference using auxiliary tasks such as MMLU (Massive Multitask Language Understanding; Hendrycks et al. 2021). We describe datasets, models, and evaluation metrics in §3.

2.1. Zero-Shot Prompting

Zero-shot prompting leverages a pre-trained LLM without additional fine-tuning. To adapt the model to a specific target culture, we use simple instructions that reference the culture. For instance, for an OFFENSEVAL-style task, we use the following prompt, shown in Table 1:

You are a {country} chatbot that understands {country}'s cultural context.
Question: Is the following sentence offensive according to {country}'s cultural norms?
Input: {input_txt}
Answer: [Select one: 1. Offensive, 2. Not offensive] | https://arxiv.org/abs/2505.16408v1 |
Not offensive] Table 1: Zero-shot prompt template for offensiveness classification. We list the full prompts used in our study in Appendix E. | https://arxiv.org/abs/2505.16408v1 |
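To make this setup concrete, below is a minimal Python sketch of how the Table 1 prompt could be assembled and the model's freeform reply mapped back to a label; `generate`, `build_prompt`, and `parse_label` are illustrative names we introduce, not the authors' released code.

```python
# Minimal sketch of the Table 1 zero-shot prompt and a freeform-answer
# parser. `generate` is a stand-in for any LLM completion call; it and
# the helper names below are our assumptions, not the paper's code.

PROMPT_TEMPLATE = (
    "You are a {country} chatbot that understands {country}'s cultural context.\n"
    "Question: Is the following sentence offensive according to "
    "{country}'s cultural norms?\n"
    "Input: {input_txt}\n"
    "Answer: [Select one: 1. Offensive, 2. Not offensive]"
)


def build_prompt(country: str, input_txt: str) -> str:
    """Fill the zero-shot template for one target culture and one text."""
    return PROMPT_TEMPLATE.format(country=country, input_txt=input_txt)


def parse_label(response: str) -> str | None:
    """Map a freeform reply to a label; None marks an invalid response."""
    text = response.strip().lower()
    if "not offensive" in text or text.startswith("2"):
        return "not_offensive"
    if "offensive" in text or text.startswith("1"):
        return "offensive"
    return None  # nonsensical or instruction-ignoring output


# Example (with a hypothetical `generate` function):
#   reply = generate(build_prompt("Turkey", some_text))
#   label = parse_label(reply)
```

In the zero-shot setting, the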
model’s responses rely entirely on cul- tural or multilingual knowledge that was encoded during pre-training. This can create systematic bi- ases when the training data is skewed toward domi- nant cultural paradigms, which may disadvantage underrepresented groups (Guo et al., 2024). 2.2 Cultural Value Adaptation via Fine-tuning Beyond zero-shot prompts, we explore explicit fine- tuning with culture-specific data, referred to as single-culture adaptation in our paper. Follow- ing Li et al. (2024a), we train a separate LoRA adapter (Hu et al., 2022) for each cultural context using data from a single or a combination of data sources. Each adapter is specialized to reflect the norms, attitudes, or knowledge of that specific cul- ture. However, data sparsity and overfitting are risks, particularly for cultures with limited sam- ples. In single-culture adaptation, each LoRA adapter is trained to reflect the high-level cultural values present in the training dataset. During inference, the appropriate adapter is activated based on the test target culture specified. 3 Experimental Setup We base our experiments on the CultureLLM (Li et al., 2024a) framework, one of the earliest popu- lar adaptation frameworks for cultural values. We design our experimental setup to evaluate across multiple LLMs and languages. Below, we briefly describe datasets used for training and evaluation, model and training hyperparameters, and evalua- tion metrics. 3.1 Linguistic-Cultural Settings We conduct experiments on ten distinct linguistic- cultural settings. Here, we use the ISO 693-3 code for simplicity: Arabic (ara, Iraq and Jordan), Ben- gali (ben, Bangladesh), Chinese (zho, China), En- glish (eng, United States), German (deu, Germany), Greek (ell, Greece), Korean (kor, South Korea), Portuguese (por, Brazil), Spanish (spa, Argentina and Mexico), and Turkish (tur, Turkey). 3.2 Training Dataset We established training scenarios with data drawn from three different sources: WVS. In this setting, we use the WVS and seman- tically augmented data based on Li et al. (2024a). WVS is a survey data commonly used in social sci- ences, as well as a proxy for cultural values in NLP (Adilazuarda et al., 2024). The dataset consists of question-and-answer pairs that provide quantita- tive indicators of societal beliefs and attitudes (e.g., questions on family importance or religion). Wikipedia. We select Wikipedia articles with de- tailed knowledge, region-specific norms, social practices, and historical contexts of our defined cultures. These articles can enrich the numeric survey data with qualitative background.2 NormAd. NormAd (Rao et al., 2024) offers a structured collection of cultural norms and situa- tional examples, demonstrating how abstract val- ues materialize in everyday interactions. Unlike WVS, which provides broad statistical insights, and Wikipedia, which offers descriptive knowledge, NormAd emphasizes behavioral and contextual ap- plications of cultural principles. 3.3 Evaluation Dataset We use two sets of tasks for evaluations: Multicultural Multilingual Offensiveness. To assess the effectiveness of adaptation in models’ behavior on downstream tasks, we evaluate the 2See Table 17 for the Wikipedia pages used.adapted models using a combination of datasets (such as OffenseEval2020, Zampieri et al. 2020a) following Li et al. (2024a,b, see original publica- tions or Appendix F.2 for the complete list, which consists of 59 datasets). The test data contains | https://arxiv.org/abs/2505.16408v1 |
a total of 68,607 multilingual, culturally sensitive texts annotated for offensiveness. MMLU. To evaluate the model's general knowledge retention capabilities after cultural adaptation, we assess each adapter's performance on factual question-answering tasks using MMLU (Mukherjee et al., 2024). The MMLU dataset focuses on factual knowledge such as mathematics, biology, chemistry, etc., which carries minimal cultural sensitivity. Deviations in MMLU accuracy following cultural fine-tuning would suggest unintended interference, implying that the cultural adapter alters the model's underlying knowledge representations. Using these two datasets, we enable a systematic evaluation of how effectively language models can integrate cultural perspectives into downstream tasks while preserving their factual knowledge. 3.4 Models and Training. In this work, we evaluate three variants of LLMs: Llama-3.1-8B (base and instruction-tuned, Touvron et al. 2023; Dubey et al. 2024), Gemma-2-9B (instruction-tuned, Rivière et al. 2024), and Qwen-2.5-7B (instruction-tuned, Team 2024). In our experiments, all instruction-tuned models are suffixed with “-IT”. We perform LoRA adaptation (Hu et al., 2022) on each model using rank-64 LoRA matrices, a batch size of 32, a learning rate of $2\times10^{-4}$, and six training epochs. Other details on training are in Appendix B. 3.5 Evaluation Metrics. In our main paper, we evaluate each model's performance using freeform generation, assessing its ability to provide culturally relevant justifications or context. Our Appendix includes additional probability-based evaluations, using token-level likelihood scores to measure the model's confidence in classifying offensive content across cultures. Further, we use the F1 score as the primary metric for evaluating classification performance on both probability- and freeform-based evaluations. We propose a cultural distinctiveness metric, the C-DIST score, to further quantify a model's ability to preserve cultural distinctiveness. For $n$ cultures, we define a performance matrix $M \in \mathbb{R}^{n \times n}$, where $M_{i,j}$ is the F1 score when a model adapted to culture $i$ is evaluated on test data for culture $j$. We compute: 1. Extract the diagonal entries³ $\vec{d} = [M_{i,i}]_{i=1}^{n}$. 2. Normalize each $M_{i,i}$ by the maximum value in its column: $n_i = M_{i,i} / \max_j M_{j,i}$. 3. Average these normalized diagonal entries: $D = \frac{1}{n} \sum_{i=1}^{n} n_i$. (1) In the formula above, we normalize by column (i.e., by the test culture) since each test-culture set may have different difficulty and scale. This normalization also helps identify which adapter performs best for a given culture. In an ideal scenario, the best-performing adapted model for a particular culture should be the one based on its own culture, resulting in a C-DIST score of 1.0. A lower score suggests interference or homogenization, as illustrated in Figure 2. This metric thus quantifies the extent to which each model preserves distinct cultural representations after adaptation. Figure 2: Single-culture adaptation using WVS data with Llama-3.1-8B-IT, evaluated on cross-cultural offensiveness classification tasks. A minimal diagonal pattern is observed in this setting, with a C-DIST score of 0.76. 4 Adaptation with WVS: Findings and Observed Interferences. In this section, we focus on Llama-3.1-8B models (both base and instruction-tuned) to establish a ³We define “diagonal entries” as the corresponding performance of an adapter on its corresponding culture, e.g. | https://arxiv.org/abs/2505.16408v1 |
a Korean adapter evaluated on the Korean culture test set, | https://arxiv.org/abs/2505.16408v1 |
hence we define this asMi,iclear understanding of their performance and the impact of adaptation using WVS data, including cultural and knowledge-based interferences. 4.1 Performance Gains Driven by Enhanced Instruction Following General Observations. Table 2 compares the ap- proaches for downstream tasks using Llama-3.1-8B models: (i) zero-shot prompting, (ii) single-culture adaptation. Our results show that training using WVS is more effective in improving downstream tasks for the base model when using the single- culture adaptation strategy. Particularly, WVS training is beneficial for underrepresented cultures such asaraandkor. Surprisingly, this positive effect is not seen in the instruction-tuned model, which instead shows a decline in performance. Performance Gain by Better Instruction Fol- lowing. To understand why the instruction-tuned model did not benefit from training with WVS, we analyze its downstream task predictions by ex- amining the ratio of invalid responses4before and after adaptation in Table 3 (completed results in Appendix D.3). Compared to zero-shot prompting, both the base model and instruction-tuned model have significantly improved invalid response ra- tios after adaptation. This suggests that WVS fine- tuning enhances the model’s general instruction- following ability but does not necessarily improve its understanding of cultural values. The high zero-shot invalid response ratio in models shows that achieving strong performance on relevant tasks requires improvements in both instruction-following ability and cultural value un- derstanding. 4.2 Observed Cultural Interference Across Models To further investigate the effect of adaptation, we examine the single-culture adaptation results in a cross-cultural setting (i.e., training on one culture and evaluating on others). Ideally, performance should be highest when the adaptation matches the test culture, forming a diagonal pattern in a heatmap of cross-cultural evaluations. However, as shown in Figure 2, no such diagonal is observed for the instruction-tuned Llama model (with a similar pattern seen for the base model in Figure 10 in the Appendix). The cross-cultural improvements show 4An invalid response contains nonsensical outputs, fails to follow instructions or lacks a meaningful or relevant answer to the prompt. Appendix 14 shows example responses. Model ara ben zho eng deu ell kor por spa tur Avg. Zero-Shot Prompting Llama-3.1-8B 11.96 17.12 32.77 14.85 23.81 38.16 26.14 19.93 30.96 21.95 23.77 Llama-3.1-8B-IT 19.14 23.10 30.49 26.63 34.36 37.56 38.72 20.92 39.14 32.95 30.00 Single-Culture Adaptation - WVS Llama-3.1-8B 17.22 22.01 38.28 19.92 29.30 36.08 32.65 20.15 27.93 28.57 27.21 Llama-3.1-8B-IT 19.50 23.51 32.69 22.35 34.78 36.98 37.61 17.75 25.85 28.78 27.98 Table 2: Culture adaptation results (F1 scores) under three training scenarios: zero-shot prompting, single-culture adaptation training on Llama-3.1-8B models using WVS training data. The adaptation is evaluated using a multilingual offensiveness dataset (§3.3) reported with averaged F1 scores. Methods Invalid (%) Llama-3.1-8B Zero-Shot 20.12 Single-Culture-WVS 14.68 Llama-3.1-8B-IT Zero-Shot 21.20 Single-Culture-WVS 10.82 Gemma Zero-Shot 11.75 Single-Culture-WVS 0 Qwen Zero-Shot 6.8 Single-Culture-WVS 0 Table 3: Comparison of invalid response rates across different models and scenarios. The Invalid Ratio rep- resents the percentage of responses flagged as invalid across all culture test sets. 
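To make the Invalid Ratio in Table 3 concrete, here is a small sketch of how such a ratio could be computed over freeform replies; it reuses the illustrative `parse_label` helper from the zero-shot sketch in §2.1, and the validity heuristic is our assumption rather than the paper's exact procedure (see Appendix 14 for their examples).

```python
# Sketch of the Invalid Ratio from Table 3: the share of replies that
# cannot be mapped to a label, mirroring footnote 4 (nonsensical output,
# ignored instructions, or no meaningful answer). The validity test via
# parse_label is an assumption, not the paper's exact procedure.

def invalid_ratio(responses: list[str]) -> float:
    """Percentage of freeform replies that fail to yield a usable label."""
    invalid = sum(1 for r in responses if parse_label(r) is None)
    return 100.0 * invalid / max(len(responses), 1)
```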
We provide the complete invalid ratio table in Appendix C.2. no clear trends, and all adapters enhance performance on the Spanish test data | https://arxiv.org/abs/2505.16408v1 |
in Figure 2. The C-D ISTscore (introduced in §3) remains below 0.80for both models. The results further suggest that WVS is not nec- essarily the best data source for improving cul- tural values, as the adapted models fail to preserve their own culture’s perspectives, leading to com- promised cross-cultural result improvements (i.e., cultural interference ). 4.3 Factual Knowledge Interference Fine-tuning improves cultural alignment but may unintentionally impact factual knowledge (Mukher- jee et al., 2024). Ideally, cultural value adaptation should not affect objective QA performance. Table 4 presents the results of single-culture adaptation on MMLU. Both Llama-3.1-8B and Llama-3.1-8B-IT exhibit significant variability when trained under two conditions: standard (using English WVS data) and translated (WVS values in their respective languages). Additionally, the base model shows a decline in performance compared to zero-shot prompting, while the instruction-tunedModel Culture Std. Transl. Llama-3.1-8B ara 32.24 32.83 ben 48.67 51.81 zho 38.21 41.08 eng 23.00 29.58 deu 33.55 39.68 ell 30.75 31.55 kor 27.59 27.57 por 46.41 28.77 spa 35.53 35.27 tur 19.74 18.02 Avg. 33.57 33.62 Llama-3.1-8B-IT ara 41.99 37.81 ben 45.45 42.77 zho 41.35 46.28 eng 42.81 49.18 deu 40.40 41.92 ell 46.05 36.34 kor 41.80 44.63 por 40.11 38.08 spa 43.77 38.60 tur 43.93 40.46 Avg. 42.78 41.61 Table 4: MMLU evaluation after single-culture adapta- tion with WVS data (F1 Score %). Performance varia- tion is evident across cultural adapters, with observed factual knowledge retention and potential cultural biases. The zero-shot performance is 35.05 for Llama-3.1-8B and45.38 for Llama-3.1-8B-IT. model shows performance improvements. These fluctuations in the results show that adapt- ing to WVS data can change factual knowledge accuracy, depending on language and dataset characteristics. Furthermore, the inconsistencies in probability-based scoring (Appendix 13) also strengthen the observation of factual knowledge interference . This underscores the challenge of bal- ancing cultural distinctiveness with factual integrity with the appropriate training data. WVS WVS+Wiki WVS+NormAd Figure 3: Heatmaps of culture-specific classification performance (Llama-3.1-8B-IT) based on the ranks of the adaptation results. Darker diagonal elements indicate stronger cultural distinctiveness and better C-D ISTscores. Model Data C-D IST F1 Cult. (%) F1 MMLU (%) Llama-3.1-8B-ITWVS 0.76 29.61 42.78 Wiki 0.81 35.39 26.33 NormAd 0.85 38.42 19.63 WVS+Wiki 0.78 31.19 49.02 WVS+NormAd 0.89 40.94 50.43 WVS+Wiki+NormAd 0.76 38.21 52.61 Gemma-2-9B-ITWVS 0.81 39.22 45.31 Wiki 0.83 36.67 8.23 NormAd 0.79 37.10 8.07 WVS+Wiki 0.80 37.25 47.05 WVS+NormAd 0.83 40.01 55.19 WVS+Wiki+NormAd 0.73 37.90 64.94 Qwen2.5-7B-ITWVS 0.92 48.05 68.32 Wiki 0.89 44.21 58.32 NormAd 0.91 48.31 65.57 WVS+Wiki 0.90 46.00 68.22 WVS+NormAd 0.94 47.67 67.51 WVS+Wiki+NormAd 0.86 44.13 67.33 Table 5: Averaged performances on the multilingual multicultural offensiveness classifications (F1-Cult.), MMLU evaluations (F1-MMLU), and C-D ISTfor various instruction -tuned models and data configurations. Augmenting training with NormAd consistently improves C-D IST, but degrades MMLU performance in Llama-3.1-8B-IT and Gemma-2-9B-IT (likely due to reduced instruction-following ability, see Appendix C.2). The results highlight the complexity of adapting cultural values while maintaining cultural distinctiveness, culture-related task performance, and knowledge retention. 
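Since the C-DIST score introduced in §3.5 underpins the comparisons above (e.g., Table 5), a minimal NumPy sketch of Eq. (1) follows; the function name `c_dist` and the toy matrix are ours, not from the paper.

```python
import numpy as np

def c_dist(M: np.ndarray) -> float:
    """C-DIST score (Eq. 1) from an n x n performance matrix M, where
    M[i, j] is the F1 score of the culture-i adapter on culture-j test
    data: normalize each diagonal entry by its column maximum, then
    average the normalized diagonal."""
    diag = np.diag(M)                      # d_i = M[i, i]
    col_max = M.max(axis=0)                # max_j M[j, i], per test culture
    return float(np.mean(diag / col_max))  # D = (1/n) * sum_i n_i

# Toy check: a perfectly culture-specialized model scores 1.0.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])
assert abs(c_dist(M) - 1.0) < 1e-9
```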
5 Adaptation with Additional Narratives. While WVS-based training provides a great foundation for cultural value adaptation, our results in Section 4 show that | https://arxiv.org/abs/2505.16408v1 |
this seldom produces strong diagonal patterns, indicating limited cultural spe- cialization. A critical question is what additional data could enhance cultural value adaptation and preserve cultural distinctiveness? Humans exhibit gaps in what they “think”, and how they “behave” (Gross and Niman, 1975; Fazio, 1981, inter alia). This suggests that self-reported value data, such as the WVS, may be insufficient for improving tasks that require behavioral changesbased on cultural values (also evident in our analy- sis in §4.1). Hence, we incorporate two additional data sources, Wikipedia and NormAd, hypothesiz- ing that introducing data containing more objective narratives of culture could enhance the model’s performance and understanding of cultural values. Here, we focus our evaluation on instruction- tuned models to better reflect real-world use and extend it beyond Llama to include Gemma and Qwen, demonstrating the generality. Improved C-D ISTwith Wikipedia and Nor- mAd. The addition of Wikipedia and NormAd significantly enhances cultural distinctiveness (i.e., UMAP KDE on Wikipedia UMAP KDE on NormAd Figure 4: Kernel Density Estimation (KDE) plots of UMAP embeddings using LaBSE (Feng et al., 2022) for Wikipedia and NormAd datasets. These visualizations show the density distributions of the data in the reduced- dimensional space. C-D IST). Table 5 shows that integrating these datasets consistently improves C-D IST scores across all three models, indicating more culturally distinct behavior. For instance, Llama-3.1-8B-IT’s C-D ISTimproves from 0.76(WVS-only) to 0.89 (WVS+NormAd). Figure 3 illustrates this shift, as the heatmaps become more diagonal and show reduced cross-cultural interference. Incorporating additional cultural narratives retains cultural dis- tinctiveness. Improve over WVS alone in many cases. The addition of Wikipedia and NormAd data leads to notable gains in offensive classifications com- pared to training with WVS data alone. For in- stance, Llama-3.1-8B-IT’s performance on the of- fensiveness classification tasks (denoted as F1 Cult. in Table 5) rises from 29.61% (WVS-only) to 40.94% (WVS+NormAd), reflecting the value of richer, context-laden cultural information. How- ever, Gemma-2-9B-IT and Qwen2.5-7B-IT see a marginal change in F1 Cult., when WVS is aug- mented with NormAd. This highlights that while Llama-3.1-8B-IT showed clear benefits on this downstream task from narrative augmentation, the effect on tasks is model-dependent. For MMLU, combining WVS with Wikipedia and NormAd (WVS+Wiki+NormAd) yields the best results for both Llama-3.1-8B-IT and Gemma- 2-9B-IT. However, our results show anomalies, in- dicating the ongoing challenge of achieving robust cultural adaptation without compromising general knowledge retention. Further, the general trend in- dicates that context-rich data, when added to WVS , effectively helps offset the knowledge interference introduced by survey data alone. Overall, our find- ings suggest that curated narratives are crucial forretaining the model’s foundational understanding of cultural knowledge during adaptation. 6 Further Analysis Our empirical results suggest that adding objective cultural descriptions and context-specific examples improves cultural distinctiveness and performance on downstream tasks. In this section, we analyze the data further to understand why. Overlapping Embeddings versus Distinct Adap- tations. We first embed each data source using LaBSE (Feng et al. 
2022, a multilingual embedding model that maps texts into a shared semantic space), then project the embeddings with UMAP and visualize their density with kernel density estimation | https://arxiv.org/abs/2505.16408v1 |
(KDE). The results for WVS, Wikipedia and NormAd are shown in Fig- ure 1 and Figure 4 respectively. It is interesting to note that there is no distinct separation between cultures within a dataset. This suggests that seman- tic differences in the data are not the primary factor influencing downstream differences after training. This discrepancy likely occurs because Wikipedia and NormAd differ in how they encode cultural details, even if their embeddings are not sharply separated (see Table 6 in Appendix for data examples). Wikipedia provides broad encyclopedic summaries, covering historical contexts and traditions, while NormAd provides scenario-specific norms that directly inform cultural behaviors (e.g., respecting elders in formal gatherings). These nuanced differences at the domain level do not necessarily create distinct embedding clusters. Nevertheless, the descriptive, scenario-based NormAd dataset enhances fine-tuning by providing more targeted cultural cues. As a result, the model can better isolate culture-specific behaviors, yielding higher C-D ISTscores. 6.1 Summary of Findings Fine-tuning on WVS data alone is ineffective for cultural value adaptation, as shown by low C-D IST scores, weaker downstream task performance, and reduced factual knowledge retention. While over- all performance may vary across tasks, augmenting survey data with more descriptive sources enables a model to retain cultural distinctiveness andretain factual knowledge better. Combining WVS survey data with NormAd situational norms consistently yields clearer cultural separation, as evidenced by improved C-D ISTscore (Table 5). Wikipedia data offers moderate gains through structured knowl- edge, but NormAd’s scenario-based behavioral cues drive stronger cultural differentiation when paired with WVS. Our findings suggest that combining scenario- based narratives (e.g., NormAd) with survey pat- terns (WVS) better preserve cultural distinctiveness and should be investigated further. 7 Related Work General Adaptation to Cultural Values. Sev- eral existing work approaches cultural value adap- tations in LLMs through prompting (AlKhamissi et al., 2024; Wang et al., 2024; Tao et al., 2024), continual pre-training on diverse multilingual data (Wang et al., 2024; Choenni et al., 2024) or direct tuning on survey data or synthetic data based on survey (Li et al., 2024a; Xu et al., 2024; Li et al., 2024b). In particular, the basis of our investigation, CultureLLM (Li et al., 2024a), employs semanti- cally augmented data from the World Values Sur- vey (WVS) to represent the average opinion of a culture. In this paper, we extend the investigation using descriptive cultural principles and provide a comprehensive analysis. Recent research also explored value prediction with In-Context Learning (ICL)-based adaptation methods (Choenni and Shutova, 2024; Jiang et al., 2024; Myung et al., 2025). Particularly, Jiang et al. (2024) showed a mild inconsistency when models adapted using individual data from one continent were evaluated using data from another (e.g., train- ing data for other continents generally improves alignment to Oceania people). While related to our work, we focus on the impact at the country level rather than the broader continent level.Pluralistic Alignment. Related to cultural value adaptation, recent studies advocate for pluralistic alignment (Sorensen et al., 2024), wherein a model should reflect the values of multiple stakeholders or sub-groups. Feng et al. 
(2024) proposed a modular | https://arxiv.org/abs/2505.16408v1 |
pluralistic alignment method, which primarily focuses on integrating diverse opinions. This re- search direction differs from typical existing cul- tural value adaptation work, which mainly focuses on reflecting the averaged value of a culture (Li et al., 2024a,b; Tao et al., 2024; AlKhamissi et al., 2024; Choenni et al., 2024, inter alia). Cultural Inconsistencies in LLMs. Recent work highlights the challenges LLMs face in maintaining consistent cultural values across different linguis- tic and social contexts (Adilazuarda et al., 2024; Beck et al., 2024). One of the reasons why these inconsistencies arise is due to biases in training data (Mihalcea et al., 2024; Sorensen et al., 2022), which often prioritize Western or English-centric perspectives, leading to misalignment when applied to non-WEIRD cultures (Mihalcea et al., 2024). Additionally, Mukherjee et al. (2024), shows that even the current LLMs are prone to a slight cul- tural and noncultural perturbation even on factual questions such as MMLU. This work builds upon the findings on how existing adaptation strategies address cultural disparities in downstream tasks. 8 Conclusion In this paper, we investigated the limitations of us- ing World Values Survey (WVS) data for cultural value adaptation in LLMs and explored the poten- tial of augmenting it with scenario-based cultural narratives. Our findings reveal that relying solely on WVS can lead to homogenized cultural repre- sentations and interfere with factual knowledge. We demonstrate that incorporating encyclopedic (Wikipedia) and scenario-based (NormAd) narra- tives, particularly the latter, significantly enhances the cultural distinctiveness of adapted models. While some variations in results were observed, we found that the augmentation could still improve nuanced cultural representations and preserve fac- tual knowledge in models. Our findings reveal a complex trade-off between cultural distinctiveness, task performance, and knowledge retention, high- lighting the need for further research on optimal data combinations and adaptation strategies to bal- ance these competing objectives. Limitations In this work, we focus on a select set of data as the source data for adaptation, including the World Values Survey (WVS), Wikipedia, and Nor- mAd. While these datasets offer diverse cultural signals, they each come with inherent biases. For instance, WVS could be subject to self-reporting biases, Wikipedia reflects editorial biases, and Nor- mAd consists of curated examples that may not fully represent all cultural variations. Furthermore, our evaluation is limited to selected culturally sensitive tasks, which may not fully cap- ture the broader range of tasks needed to assess how cultural value adaptation influences behavior. However, such an investigation requires careful task design and is beyond the scope of this work. Ethics Statement Our work aims to enhance cultural value adapta- tions in NLP systems while carefully considering potential societal impacts. While this research may help reduce Western-centric bias and improve of- fensive content classification by incorporating di- verse cultural values, we acknowledge the risks of potential misuse, including cultural stereotyp- ing and demographic profiling. We emphasize that our findings should be applied thoughtfully, with continuous consideration of cultural context, while being careful not to anthropomorphize LLMs by attributing to them true cultural understanding or awareness. 
Additionally, we encourage future research | https://arxiv.org/abs/2505.16408v1 |
to develop more nuanced methodologies and evaluation frameworks that better represent cultural diversity in NLP systems. References 2019. Turkish Spam V01. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5WG7F. Muhammad Farid Adilazuarda, Sagnik Mukherjee, Pradhyumna Lavania, Siddhant Shivdutt Singh, Al- ham Fikri Aji, Jacki O’Neill, Ashutosh Modi, and Monojit Choudhury. 2024. Towards measuring and modeling “culture” in LLMs: A survey. In Proceed- ings of the 2024 Conference on Empirical Methods in Natural Language Processing , pages 15763–15784, Miami, Florida, USA. Association for Computational Linguistics. aimansnigdha. 2018. Bangla-abusive-comment- dataset. https://github.com/aimansnigdha/Bangla- Abusive-Comment-Dataset.Badr AlKhamissi, Muhammad ElNokrashy, Mai Alkhamissi, and Mona Diab. 2024. Investigating cultural alignment of large language models. In Pro- ceedings of the 62nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers) , pages 12404–12422, Bangkok, Thailand. Association for Computational Linguistics. Miguel Á Álvarez-Carmona, Estefanıa Guzmán-Falcón, Manuel Montes-y Gómez, Hugo Jair Escalante, Luis Villasenor-Pineda, Verónica Reyes-Meza, and An- tonio Rico-Sulayes. 2018. Overview of mex-a3t at ibereval 2018: Authorship and aggressiveness analy- sis in mexican spanish tweets. In Notebook papers of 3rd sepln workshop on evaluation of human language technologies for iberian languages (ibereval), seville, spain , volume 6. Valerio Basile, Cristina Bosco, Elisabetta Fersini, Deb- ora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Proceedings of the 13th international workshop on semantic evaluation , pages 54–63. Tilman Beck, Hendrik Schuff, Anne Lauscher, and Iryna Gurevych. 2024. Sensitivity, performance, robust- ness: Deconstructing the effect of sociodemographic prompting. In Proceedings of the 18th Conference of the European Chapter of the Association for Compu- tational Linguistics (Volume 1: Long Papers) , pages 2589–2615, St. Julian’s, Malta. Association for Com- putational Linguistics. Shiladitya Bhattacharya, Siddharth Singh, Ritesh Ku- mar, Akanksha Bansal, Akash Bhagat, Yogesh Dawer, Bornini Lahiri, and Atul Kr. Ojha. 2020. De- veloping a multilingual annotated corpus of misog- yny and aggression. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying , pages 158–168, Marseille, France. European Lan- guage Resources Association (ELRA). Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023. Assessing cross-cultural alignment between ChatGPT and hu- man societies: An empirical study. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP) , pages 53–67, Dubrovnik, Croatia. Association for Computational Linguistics. Tommaso Caselli, Valerio Basile, Jelena Mitrovi ´c, Inga Kartoziya, and Michael Granitzer. 2020. I feel of- fended, don’t be abusive! implicit/explicit messages in offensive and abusive language. In Proceedings of the Twelfth Language Resources and Evaluation Conference , pages 6193–6202. Ça˘grı Çöltekin. 2020. A corpus of turkish offensive language on social media. In Proceedings of The 12th Language Resources and Evaluation Confer- ence, pages 6174–6184, Marseille, France. Rochelle Choenni, Anne Lauscher, and Ekaterina Shutova. 2024. 
The echoes of multilinguality: Tracing cultural value shifts during language model fine-tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15042–15058, Bangkok, Thailand. Association for Computational Linguistics. Rochelle Choenni and Ekaterina Shutova. 2024. Self- | https://arxiv.org/abs/2505.16408v1 |
alignment: Improving alignment of cultural values in llms via in-context learning. CoRR , abs/2408.16482. Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Abdelali, Soon-gyo Jung, Bernard J Jansen, and Joni Salminen. 2020. A multi-platform arabic news com- ment dataset for offensive language detection. In Proceedings of the Twelfth Language Resources and Evaluation Conference , pages 6203–6212. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics , pages 8440– 8451, Online. Association for Computational Lin- guistics. daanVeer. 2020. Korean hatespeech dataset. https://github.com/daanVeer/HateSpeech_dataset. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech de- tection and the problem of offensive language. In Proceedings of the 11th International AAAI Confer- ence on Web and Social Media , ICWSM ’17, pages 512–515. Angel Felipe Magnossao de Paula and Ipek Baris Schlicht. 2021. Ai-upv at iberlef-2021 detoxis task: Toxicity detection in immigration-related web news comments using transformers and statistical models. arXiv preprint arXiv:2111.04530 . Rogers P. de Pelle and Viviane P. Moreira. 2017. Of- fensive comments in the brazilian web: a dataset and baseline results. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Bap- tiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova,Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Geor- gia Lewis Anderson, Graeme Nail, Grégoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Han- nah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al. 2024. The llama 3 herd of models. CoRR , abs/2407.21783. RH Fazio. 1981. Direct experience and attitude behav- ior consistency. Advances in experimental social psychology , 14. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Ari- vazhagan, and Wei Wang. 2022. Language-agnostic bert sentence embedding. Shangbin Feng, Taylor Sorensen, Yuhan Liu, Jillian Fisher, Chan Young Park, Yejin Choi, and Yulia Tsvetkov. 2024. Modular pluralism: Pluralistic align- ment via multi-LLM collaboration. 
In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4151–4171, Miami, Florida, USA. Association for Computational | https://arxiv.org/abs/2505.16408v1 |
Linguistics. Elisabetta Fersini, Paolo Rosso, Maria Anzovino, et al. 2018. Overview of the task on automatic misog- yny identification at ibereval 2018. Ibereval@ sepln , 2150:214–228. Clifford Geertz. 2017. The interpretation of cultures . Basic books. Steven Jay Gross and C Michael Niman. 1975. Attitude- behavior consistency: A review. Public opinion quar- terly, 39(3):358–368. Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, and Shuo Shuo Liu. 2024. Bias in large language mod- els: Origin, evaluation, and mitigation. CoRR , abs/2411.10915. Christian Haerpfer, Alejandro Moreno Ronald In- glehart, Christian Welzel, Jaime Diez-Medrano Kseniya Kizilova, Milena Lagos, Pippa Norris, Ed- uard Ponarin, and Bianca Puranen. 2022. World values survey: Round seven. HASOC. 2020. Hasoc2020. https://hasocfire.github.io/hasoc/2020/index.html. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language under- standing. In International Conference on Learning Representations . Joseph Henrich, Steven J Heine, and Ara Norenzayan. 2010. The weirdest people in the world? Behavioral and brain sciences , 33(2-3):61–83. Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Pi- queras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022. Challenges and strategies in cross- cultural NLP. In Proceedings of the 60th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers) , pages 6997–7013, Dublin, Ireland. Association for Computational Lin- guistics. Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 588–602, Online. Association for Computational Linguistics. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022 . F Husain. 2020. Osact4 shared task on offen- sive language detection: Intensive preprocessing- based approach. arxiv 2020. arXiv preprint arXiv:2005.07297 . Liwei Jiang, Taylor Sorensen, Sydney Levine, and Yejin Choi. 2024. Can language models reason about in- dividualistic human values and preferences? CoRR , abs/2410.03868. Zhuoren Jiang, Zhe Gao, Guoxiu He, Yangyang Kang, Changlong Sun, Qiong Zhang, Luo Si, and Xi- aozhong Liu. 2019. Detect camouflaged spam con- tent via stoneskipping: Graph and text joint em- bedding for chinese character variation representa- tion. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP2019) . ACM. Rebecca L. Johnson, Giada Pistilli, Natalia Menédez- González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene, and Donald Jay Bertulfo. 2022. The ghost in the machine has an american ac- cent: value conflict in GPT-3. ArXiv preprint , abs/2203.07785. Sanaa Kaddoura and Safaa Henno. 2024. Dataset of arabic spam and ham tweets. Data in Brief , 52(10990):4. Kaggle. 2019. Jigsaw-multilingual-toxicity. https://www.kaggle.com/code/tarunpaparaju/jigsaw- multilingual-toxicity-eda-models.Kaggle. 2021. 
5k turkish tweets with incivil content. https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content. Kaggle. 2022. turkish offensive language detection. https://www.kaggle.com/datasets/toygarr/turkish-offensive-language-detection. Habibe Karayiğit, Çiğdem İnan Acı, and Ali Akda | https://arxiv.org/abs/2505.16408v1 |
˘glı. 2021. Detecting abusive instagram comments in turk- ish using convolutional neural network and machine learning methods. Expert Systems with Applications , 174:114802. Jean Lee, Taejun Lim, Heejun Lee, Bogeun Jo, Yangsok Kim, Heegeun Yoon, and Soyeon Caren Han. 2022. K-MHaS: A multi-label hate speech detection dataset in Korean online news comment. In Proceedings of the 29th International Conference on Computational Linguistics , pages 3530–3538, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Joao A Leite, Diego F Silva, Kalina Bontcheva, and Carolina Scarton. 2020. Toxic language detec- tion in social media for brazilian portuguese: New dataset and multilingual analysis. arXiv preprint arXiv:2010.04543 . Cheng Li, Mengzhou Chen, Jindong Wang, Sunayana Sitaram, and Xing Xie. 2024a. CultureLLM: Incorpo- rating cultural differences into large language models. InThe Thirty-eighth Annual Conference on Neural Information Processing Systems . Cheng Li, Damien Teney, Linyi Yang, Qingsong Wen, Xing Xie, and Jindong Wang. 2024b. CulturePark: Boosting cross-cultural understanding in large lan- guage models. In The Thirty-eighth Annual Confer- ence on Neural Information Processing Systems . Chen Cecilia Liu, Iryna Gurevych, and Anna Korho- nen. 2024. Culturally aware and adapted NLP: A taxonomy and a survey of the state of the art. CoRR , abs/2406.03930. Rada Mihalcea, Oana Ignat, Longju Bai, Angana Borah, Luis Chiruzzo, Zhijing Jin, Claude Kwizera, Joan Nwatu, Soujanya Poria, and Thamar Solorio. 2024. Why ai is weird and should not be this way: Towards ai for everyone, with everyone, by everyone. CoRR , abs/2410.16315. Jihyung Moon, Won Ik Cho, and Junbum Lee. 2020. BEEP! Korean corpus of online news comments for toxic speech detection. In Proceedings of the Eighth International Workshop on Natural Language Pro- cessing for Social Media , pages 25–31, Online. As- sociation for Computational Linguistics. Hamdy Mubarak, Hend Al-Khalifa, and AbdulMohsen Al-Thubaity. 2022. Overview of osact5 shared task on arabic offensive language and hate speech detec- tion. In Proceedinsg of the 5th Workshop on Open- Source Arabic Corpora and Processing Tools with Shared Tasks on Qur’an QA and Fine-Grained Hate Speech Detection , pages 162–166. Sagnik Mukherjee, Muhammad Farid Adilazuarda, Sunayana Sitaram, Kalika Bali, Alham Fikri Aji, and Monojit Choudhury. 2024. Cultural conditioning or placebo? on the effectiveness of socio-demographic prompting. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Process- ing, pages 15811–15837, Miami, Florida, USA. As- sociation for Computational Linguistics. Junho Myung, Nayeon Lee, Yi Zhou, Jiho Jin, Rifki Afina Putri, Dimosthenis Antypas, Hsuvas Borkakoty, Eunsu Kim, Carla Perez-Almendros, Abinew Ali Ayele, Víctor Gutiérrez-Basulto, Yazmín Ibáñez-García, Hwaran Lee, Shamsuddeen Has- san Muhammad, Kiwoong Park, Anar Sabuhi Rza- yev, Nina White, Seid Muhie Yimam, Moham- mad Taher Pilehvar, Nedjma Ousidhoum, Jose Camacho-Collados, and Alice Oh. 2025. Blend: A benchmark for llms on everyday knowledge in di- verse cultures and languages. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilin- gual and multi-aspect hate speech analysis. In Pro- ceedings of EMNLP . Association for Computational Linguistics. Juan Carlos Pereira-Kohatsu, Lara Quijano-Sánchez, Federico Liberatore, and Miguel Camacho-Collados. 2019. 
Detecting and monitoring hate speech in Twitter. Sensors, 19(21):4654. Flor Miriam Plaza-del Arco, Arturo Montejo-Ráez, | https://arxiv.org/abs/2505.16408v1 |
L Alfonso Urena Lopez, and María-Teresa Martín- Valdivia. 2021. Offendes: A new corpus in spanish for offensive language research. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021) , pages 1096–1108. Aida Ramezani and Yang Xu. 2023. Knowledge of cultural moral norms in large language models. In Proceedings of the 61st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 428–446, Toronto, Canada. As- sociation for Computational Linguistics. Abhinav Rao, Akhila Yerukola, Vishwa Shah, Katha- rina Reinecke, and Maarten Sap. 2024. NormAd: A framework for measuring the cultural adaptability of large language models. CoRR , abs/2404.12464. Morgane Rivière, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, An- ton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock,Andy Coenen, Anthony Laforge, Antonia Pater- son, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijayku- mar, Dominika Rogozinska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucinska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju-yeong Ji, Kareem Mohamed, Kar- tikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjösund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, and Lilly Mc- Nealus. 2024. Gemma 2: Improving open language models at a practical size. CoRR , abs/2408.00118. Nauros Romim, Mosahed Ahmed, Hriteshwar Talukder, and Md Saiful Islam. 2021. Hate speech detection in the bengali language: A dataset and its baseline evaluation. In Proceedings of International Joint Conference on Advances in Computational Intelli- gence: IJCACI 2020 , pages 457–468. Springer. Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2020. Solid: A large-scale semi-supervised dataset for offen- sive language identification. arXiv preprint arXiv:2004.14454 . Björn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2016. Measuring the Reliability of Hate Speech An- notations: The Case of the European Refugee Cri- sis. In Proceedings of NLP4CMC III: 3rd Work- shop on Natural Language Processing for Computer- Mediated Communication , volume 17 of Bochumer Linguistische Arbeitsberichte , pages 6–9, Bochum. Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, and Bertie Vidgen. 2022. Multilingual hate- check: Functional tests for multilingual hate speech detection models. arXiv preprint arXiv:2206.09917 . Omar Sharif and Mohammed Moshiul Hoque. 2022. 
Tackling cyber-aggression: Identification and fine-grained categorization of aggressive texts on social media using weighted ensemble of transformers. Neurocomputing, | https://arxiv.org/abs/2505.16408v1 |
490:462–481. Hoyun Song, Soo Hyun Ryu, Huije Lee, and Jong C Park. 2021. A large-scale comprehensive abusive- ness detection dataset with multifaceted labels from reddit. In Proceedings of the 25th Conference on Computational Natural Language Learning , pages 552–561. Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, and Yejin Choi. 2024. Position: A roadmap to pluralistic alignment. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . Taylor Sorensen, Joshua Robinson, Christopher Ryt- ting, Alexander Shaw, Kyle Rogers, Alexia Delorey, Mahmoud Khalil, Nancy Fulda, and David Wingate. 2022. An information-theoretic approach to prompt engineering without ground truth labels. In Proceed- ings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers) , pages 819–862, Dublin, Ireland. Association for Computational Linguistics. Yan Tao, Olga Viberg, Ryan S Baker, and René F Kizil- cec. 2024. Cultural bias and cultural alignment of large language models. PNAS nexus , 3(9):pgae346. Qwen Team. 2024. Qwen2.5: A party of foundation models. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. CoRR , abs/2302.13971. Francielle Vargas, Isabelle Carvalho, Fabiana Ro- drigues de Góes, Thiago Pardo, and Fabrício Ben- evenuto. 2022. HateBR: A large expert annotated corpus of Brazilian Instagram comments for offen- sive language and hate speech detection. In Pro- ceedings of the Thirteenth Language Resources and Evaluation Conference , pages 7174–7183, Marseille, France. European Language Resources Association. Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, and Michael Lyu. 2024. Not all countries celebrate thanksgiving: On the cultural dominance in large language models. In Proceedings of the 62nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 6349–6384, Bangkok, Thailand. Association for Computational Linguistics. Zirui Wang, Zachary C. Lipton, and Yulia Tsvetkov. 2020. On negative interference in multilingual mod- els: Findings and a meta-learning treatment. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 4438–4450, Online. Association for Computa- tional Linguistics. Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the NAACL student research workshop , pages 88–93. Michael Wiegand, Melanie Siegel, and Josef Ruppen- hofer. 2018. Overview of the germeval 2018 shared task on the identification of offensive language.Guohai Xu, Jiayi Liu, Ming Yan, Haotian Xu, Jinghui Si, Zhuoran Zhou, Peng Yi, Xing Gao, Jitao Sang, Rong Zhang, Ji Zhang, Chao Peng, Fei Huang, and Jingren Zhou. 2023. Cvalues: Measuring the val- ues of chinese large language models from safety to responsibility. arXiv 2307.09705 . Shaoyang Xu, Yongqi Leng, Linhao Yu, and Deyi Xiong. 2024. Self-pluralising culture alignment for large language models. CoRR , abs/2410.12971. 
Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Çağrı | https://arxiv.org/abs/2505.16408v1 |
Çöltekin. 2020a. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020). In Proceedings of the Fourteenth Workshop on Semantic Evaluation, SemEval@COLING 2020, Barcelona (online), December 12-13, 2020 , pages 1425–1447. International Committee for Computa- tional Linguistics. Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and Ça ˘grı Çöltekin. 2020b. Semeval-2020 task 12: Multilingual offen- sive language identification in social media (offense- val 2020). arXiv preprint arXiv:2006.07235 . Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, and He- len Meng. 2022. Towards identifying social bias in dialog systems: Frame, datasets, and benchmarks. arXiv preprint arXiv:2202.08011 . A Data Characteristics A.1 Additional KDE Plots t-SNE KDE on Wikipedia t-SNE KDE on NormAd UMAP KDE on Wikipedia UMAP KDE on NormAd Figure 5: Kernel Density Estimation (KDE) plots using t-SNE and UMAP projections for Wikipedia and NormAd datasets. Although projection methods vary, none of the embeddings are distinctly separable by culture, indicating shared semantic similarities of data. A.2 Samples of WVS, Wiki, and NormAd Data Table 6 presents a comparison of social values across different cultures by showcasing sample data from the World Values Survey (WVS), Wikipedia, and the NormAd dataset. WVS Wikipedia NormAd "topic": "SOCIAL V AL- UES", "q_id": "27", "q_content": "One of my main goals in life has been to make my parents proud", "op- tion": "1. Strongly agree 2. agree 3. Dis- agree 4. Strongly dis- agree"Arab culture is the culture of the Arabs, from the Atlantic Ocean in the west to the Arabian Sea in the east, in a region of the Middle East and North Africa known as the Arab world. The various religions the Arabs have adopted throughout their history and the various empires and king- doms that have ruled and took lead of the civilization have contributed to the ethnogenesis and formation of modern Arab culture.(Egypt - Background) Basic Etiquette - It is considered impolite to point the toe, heel or any part of the foot toward another person. Showing the sole of one’s shoe is also impolite. - Modest dress and presentation is highly valued in Egyptian culture. - Greetings often occur before any form of so- cial interaction. For example, a person joining a group is expected to greet all those present. - Generally, the younger defer to the older through showing respect. "topic": "SOCIAL V AL- UES", "q_id": "28", "option": "1. Strongly agree 2. agree 3. Disagree 4. Strongly disagree", "q_content": "When a mother works for pay, the children suf- fer"The culture of Bengal defines the cul- tural heritage of the Bengali people native to eastern regions of the In- dian subcontinent, mainly what is today Bangladesh and the Indian states of West Bengal and Tripura, where they form the dominant ethnolinguistic group and the Bengali language is the official and pri- mary language. Bengal has a recorded history of 1,400 years. After the par- tition, Bangladeshi culture became dis- tinct from the mainstream Bengali cul- ture.(Spain - Value) Flexibility in social interactions and a relaxed | https://arxiv.org/abs/2505.16408v1 |
approach to time and financial contributions among peers. "topic": "SOCIAL V AL- UES", "q_id": "29", "option": "1. Strongly agree 2. agree 3. Disagree 4. Strongly disagree", "q_content": "On the whole, men make better political leaders than women do"Chinese culture is one of the world’s oldest cultures, originating thousands of years ago. The culture prevails across a large geographical region in East Asia with Sinosphere in whole and is ex- tremely diverse, with customs and tra- ditions varying greatly between coun- ties, provinces, cities, towns. The terms ’China’ and the geographical landmass of ’China’ have shifted across the cen- turies, before the name ’China’ became commonplace in modernity.(UK - Story) Elena was planning to visit her friend Tom’s house. She texted Tom to ask if it was a good time for him. Upon arrival, she made sure not to overstay by asking if he needed her to leave at a certain time. Is what Elena did socially acceptable? Table 6: Comparison of data across different cultures from the data sources used in our paper. B Training Procedure and Data Reformulation Following Li et al. (2024a), our experiments employ LoRA adapters with 4-bit quantization using the BitsAndBytes configuration to optimize the memory usage. We use an alpha value of 16, a dropout rate of 0.1, and a rank of 64, specifically targeting the query ( q_proj ) and value ( v_proj ) projection matrices of the transformer architecture. We reformulate the training data using the following formats: 1.Standard Survey Training (WVS). The WVS survey data is structured with clear task markers: ### Task: Survey Question-Answer ### Question: [question_content] ### Answer: [answer_content] 2.Wikipedia. When the Wikipedia data is used, the information is formatted as: ### Task: Cultural Context ### Culture: [culture_name] ### Description: [cultural_context] 3.NormAd. We integrate the data using the following prompt: ### Task: NormAd Cultural Context ### Culture: [culture_name] ### Country: [country_name] ### Background: [background_info] ### Rule-of-Thumb: [cultural_rule] ### Story: [narrative] ### Explanation: [detailed_explanation] The training process optimizes memory usage with gradient checkpointing and uses a constant learning rate of 2×10−4. The model is trained for 6 epochs with a warmup ratio of 0.03 and employs 8-bit Adam optimization with a weight decay of 0.001. For reproducibility, the process is seeded ( seed=42 ) and ensures deterministic CUDA operations. C Full Performance Tables C.1 Zero-Shot Prompting and Single Culture Adaptation Results Model ara ben zho eng deu ell kor por spa tur Avg. Zero-Shot Prompting Llama-3.1-8B 11.96 17.12 32.77 14.85 23.81 38.16 26.14 19.93 30.96 21.95 23.77 Llama-3.1-8B-IT 19.14 23.10 30.49 26.63 34.36 37.56 38.72 20.92 39.14 32.95 30.00 Gemma-2-9b-IT 17.98 50.65 20.30 46.30 50.18 45.94 60.40 38.80 27.40 46.35 40.43 Qwen2.5-7B-Instruct 45.41 58.88 25.30 38.29 60.30 48.27 53.86 54.87 45.72 60.37 49.13 Single-Culture Adaptation - WVS Llama-3.1-8B 17.22 22.01 38.28 19.92 29.30 36.08 32.65 20.15 27.93 28.57 27.21 Llama-3.1-8B-IT 19.50 23.51 32.69 22.35 34.78 36.98 37.61 17.75 25.85 28.78 27.98 Gemma-2-9b-IT 15.54 43.95 24.10 33.92 41.01 49.09 61.01 37.66 37.15 48.81 39.22 Qwen2.5-7B-Instruct 39.30 59.24 25.78 40.39 57.85 48.02 53.79 51.77 51.31 57.47 48.49 Table 7: Culture adaptation results (F1 scores) under | https://arxiv.org/abs/2505.16408v1 |
three training scenarios: zero-shot prompting and single-culture adaptation (training on Llama-3.1-8B models using WVS data). Evaluation uses a multilingual offensiveness dataset (§3.3), reported as averaged F1 scores. C.2 Full Invalid Ratio Methods Inv. Cult. (%) Inv. MMLU (%) Llama-3.1-8BZero-Shot 20.12 2.3 WVS 14.68 0 NormAd 15.90 70.0 WVS+Wiki 14.04 0 WVS+NormAd 13.22 0 WVS+Wiki+NormAd 12.85 0 Llama-3.1-8B-ITZero-Shot 21.20 0 WVS 10.82 0 NormAd 11.73 72.3 WVS+Wiki 9.73 0 WVS+NormAd 8.91 0 WVS+Wiki+NormAd 8.35 0 Gemma-2-9B-ITZero-Shot 13.23 0 WVS 0 0 NormAd 9.7 82.7 WVS+Wiki 6.32 0 WVS+NormAd 5.89 0 WVS+Wiki+NormAd 6.21 0 Qwen2.5-7B-ITZero-Shot 9.4 0 WVS 0 0 NormAd 7.5 10.1 WVS+Wiki 0 0 WVS+NormAd 0 0 WVS+Wiki+NormAd 0 0 Table 8: Invalid response rates on cultural evaluation sets ( Invalid Cult. ) and on MMLU ( Invalid MMLU ). All MMLU invalid ratios are lower than the 20.12 % cultural baseline of Llama -3.1-8B— except for the purposely inflated NormAd -only rows, which remain dramatically worse. C.3 Combined Cultural Adaptation Instead of learning a separate adapter per culture, we combine training data from all target cultures and produce one multi-culture adapter. This can potentially help the model recognize cross-cultural patterns or exploit data from many cultures. However, it risks “averaging out” the distinctions, possibly causing cultural interference (e.g., losing the unique viewpoint for each culture, akin to interference in multilinguality Conneau et al. 2020; Wang et al. 2020). While combined-culture adaptation can improve some low-resource cultures (e.g., Korean, Bengali), it could reduce performance for others, indicating cultural interference. Combined-Culture Adaptation - WVS Model ara ben zho eng deu ell kor por spa tur Avg. Llama-3.1-8B 33.44 23.24 28.39 17.12 36.75 15.11 37.09 17.88 25.62 39.29 27.39 Llama-3.1-8B-IT 28.00 30.34 42.77 23.90 46.08 31.42 43.32 22.88 33.52 43.50 34.57 Table 9: Results for Combined-Culture Adaptation on WVS. C.4 Freeform Generation C.4.1 Performance Heatmaps - Llama-3.1-8B Figure 6 illustrates the culture-specific classification performance of the Llama-3.1-8B model through three heatmaps corresponding to different data configurations: panel (a) uses only WVS data, panel (b) integrates cultural context from Wikipedia (WVS+Wiki), and panel (c) combines WVS with NormAd data (WVS+NormAd); in each heatmap, color gradients represent the ranks of the adaptation results, providing a visual assessment of how incorporating additional cultural sources can enhance or alter model performance across diverse cultural contexts. WVS WVS+Wiki WVS+NormAd Figure 6: Heatmaps of culture-specific classification performance (Llama-3.1-8B) using different data sources based on the ranks of the adaptation results. C.4.2 Performance Tables - Llama-3.1-8B-Instruct Figure 7 illustrates the performance of Llama-3.1-8B-Instruct model through three heatmaps. WVS WVS+Wiki WVS+NormAd Figure 7: Heatmaps of culture-specific classification performance (Llama-3.1-8B-IT) using different data sources based on the ranks of the adaptation results. C.4.3 Performance Tables - Qwen2.5-7B-IT Figure 8 illustrates the performance of the Qwen2.5-7B-IT model through three heatmaps. WVS WVS+Wiki WVS+NormAd Figure 8: Heatmaps of culture-specific classification performance (Qwen2.5-7B-IT) using different data sources based on the ranks of the adaptation results. C.4.4 Performance Tables - Gemma-2-9B-IT Figure 9 illustrates the performance of the Gemma-2-9B-IT model through three heatmaps. 
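As a rough sketch of how the rank heatmaps in Figures 6–9 could be produced from the normalized score matrices of Appendix C.5, assuming NumPy and matplotlib are available (the function name `rank_heatmap` is ours):

```python
import numpy as np
import matplotlib.pyplot as plt

def rank_heatmap(M: np.ndarray, cultures: list[str], title: str) -> None:
    """Plot adapter-vs-test-culture ranks: within each test-culture
    column, adapters are ranked by F1 (1 = best), so a dark diagonal
    signals preserved cultural distinctiveness."""
    ranks = (-M).argsort(axis=0).argsort(axis=0) + 1  # column-wise ranks
    fig, ax = plt.subplots()
    im = ax.imshow(ranks, cmap="viridis")  # rank 1 (best) drawn darkest
    ax.set_xticks(range(len(cultures)), labels=cultures, rotation=45)
    ax.set_yticks(range(len(cultures)), labels=cultures)
    ax.set_xlabel("Test culture")
    ax.set_ylabel("Adapter culture")
    ax.set_title(title)
    fig.colorbar(im, ax=ax, label="rank (1 = best)")
    plt.show()
```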
[Heatmap panels: WVS, WVS+Wiki, WVS+NormAd]
Figure 9: Heatmaps of culture-specific classification performance (Gemma-2-9B-Instruct) using different data | https://arxiv.org/abs/2505.16408v1 |
sources based on the ranks of the adaptation results.

C.5 Normalized Scores Tables

| Adapter Cult. | ara | ben | zho | eng | deu | ell | kor | por | spa | tur |
|---|---|---|---|---|---|---|---|---|---|---|
| ara | 0.4209 | 0.6882 | 0.7343 | 0.6578 | 0.5337 | 0.8640 | 0.6284 | 0.6758 | 0.4780 | 0.5645 |
| ben | 0.4156 | 0.6237 | 0.5984 | 0.7223 | 0.5213 | 0.8598 | 0.5595 | 0.6062 | 0.5466 | 0.5148 |
| zho | 0.6986 | 0.7371 | 1.0000 | 0.7862 | 0.6038 | 0.8703 | 0.6667 | 0.6107 | 0.4654 | 0.5985 |
| eng | 0.6867 | 0.7216 | 0.7166 | 0.7225 | 0.6131 | 0.9398 | 0.7268 | 0.6103 | 0.4828 | 0.5751 |
| deu | 0.5266 | 0.7835 | 0.8161 | 0.7779 | 0.8139 | 0.8509 | 0.7493 | 0.6345 | 0.5899 | 0.6172 |
| ell | 0.7865 | 0.7711 | 0.7522 | 0.6827 | 0.8168 | 0.8688 | 0.8695 | 0.7089 | 0.6324 | 0.5208 |
| kor | 0.4633 | 0.6728 | 0.6991 | 0.7933 | 0.5838 | 0.8810 | 0.7065 | 0.6193 | 0.5745 | 0.5292 |
| por | 0.8442 | 0.7987 | 0.5384 | 0.8142 | 0.6676 | 0.9248 | 0.8853 | 0.6364 | 0.4975 | 0.5997 |
| spa | 1.0000 | 1.0000 | 0.7987 | 1.0000 | 1.0000 | 1.0000 | 0.9886 | 1.0000 | 1.0000 | 1.0000 |
| tur | 0.8685 | 0.9817 | 0.6772 | 0.8628 | 0.8242 | 0.8501 | 1.0000 | 0.8094 | 0.6610 | 0.8045 |

Table 10: Normalized Scores and C-DIST on Llama-3.1-8B-IT for WVS. Rows represent the adapter culture, and columns represent the culture test set.

| Adapter Cult. | ara | ben | zho | eng | deu | ell | kor | por | spa | tur |
|---|---|---|---|---|---|---|---|---|---|---|
| ara | 0.7255 | 0.5862 | 0.7980 | 0.8510 | 0.6329 | 0.7875 | 0.6219 | 0.7635 | 0.9012 | 0.5731 |
| ben | 0.3320 | 0.6027 | 0.4640 | 0.8319 | 0.5354 | 0.7861 | 0.5575 | 0.5934 | 0.7311 | 0.4903 |
| zho | 0.8268 | 0.7872 | 1.0000 | 0.9636 | 0.8755 | 1.0000 | 0.8753 | 0.8413 | 0.8521 | 0.7687 |
| eng | 0.7514 | 0.8592 | 0.9779 | 0.7852 | 0.9733 | 0.8209 | 0.9034 | 0.9299 | 0.9792 | 0.8828 |
| deu | 0.5986 | 0.8016 | 0.9445 | 0.7760 | 0.8604 | 0.9679 | 0.8233 | 0.7221 | 0.7729 | 0.6408 |
| ell | 0.9031 | 0.9440 | 0.7137 | 1.0000 | 0.9152 | 0.7502 | 0.8970 | 1.0000 | 1.0000 | 0.9678 |
| kor | 1.0000 | 1.0000 | 0.5369 | 0.8979 | 1.0000 | 0.8037 | 1.0000 | 0.8637 | 0.8274 | 1.0000 |
| por | 0.7863 | 0.7632 | 0.5586 | 0.8940 | 0.8065 | 0.9270 | 0.8570 | 0.7430 | 0.6613 | 0.7746 |
| spa | 0.4076 | 0.6871 | 0.5581 | 0.8136 | 0.6525 | 0.7973 | 0.7152 | 0.5486 | 0.6715 | 0.5138 |
| tur | 0.5835 | 0.6960 | 0.9223 | 0.8341 | 0.7417 | 0.8859 | 0.8456 | 0.7119 | 0.9690 | 0.6794 |

Table 11: Normalized Scores and C-DIST on Llama-3.1-8B-IT for WVS+Wikipedia. Rows represent the adapter culture, and columns represent the culture test set.

| Adapter Cult. | ara | ben | zho | eng | deu | ell | kor | por | spa | tur |
|---|---|---|---|---|---|---|---|---|---|---|
| ara | 0.7961 | 0.8685 | 0.7190 | 0.8358 | 0.9640 | 1.0000 | 0.9533 | 0.7462 | 0.7974 | 0.8966 |
| ben | 0.3643 | 0.8608 | 0.7432 | 0.8893 | 0.6026 | 0.7490 | 0.7124 | 0.8666 | 0.7963 | 0.4092 |
| zho | 0.7051 | 0.8463 | 0.7493 | 0.7967 | 0.6767 | 0.4841 | 0.6127 | 0.5454 | 0.6689 | 0.7248 |
| eng | 0.7383 | 0.8678 | 0.7493 | 0.8180 | 0.7038 | 0.5794 | 0.6227 | 0.8956 | 0.8185 | 0.7400 |
| deu | 0.6004 | 0.6975 | 0.8100 | 0.9597 | 0.9297 | 0.7515 | 0.9337 | 0.7058 | 0.7142 | 0.6936 |
| ell | 0.8597 | 0.9141 | 0.8144 | 0.9923 | 1.0000 | 0.9091 | 0.9074 | 0.9620 | 0.8582 | 0.9469 |
| kor | 0.7207 | 0.5973 | 0.8340 | 0.5882 | 0.9363 | 0.6791 | 0.7118 | 0.4862 | 0.7307 | 0.8404 |
| por | 1.0000 | 0.8727 | 0.8067 | 1.0000 | 0.8628 | 0.8287 | 0.7709 | 0.9925 | 0.9607 | 1.0000 |
| spa | 0.8634 | 0.8849 | 1.0000 | 0.8843 | 0.9596 | 0.6558 | 0.7248 | 0.7613 | 1.0000 | 0.8585 |
| tur | 0.5487 | 0.9045 | 0.7305 | 0.9694 | 0.9960 | 0.8265 | 0.9640 | 1.0000 | 0.8771 | 0.9844 |

Table 12: Normalized Scores and C-DIST on Llama-3.1-8B-IT for WVS+NormAd. Rows represent the adapter culture, and columns represent the culture test set.

C.6 Probability-Based Generation

Table 13 shows the normalized F1 score for probability-based generation evaluations.

| Language | Baseline Llama-3.1-8B | Baseline Llama-3.1-8B-IT | Translated Llama-3.1-8B | Translated Llama-3.1-8B-IT |
|---|---|---|---|---|
| ara | 30.52 | 28.83 | 33.24 | 37.81 |
| ben | 22.53 | 45.45 | 29.70 | 42.77 |
| zho | 28.84 | 41.35 | 35.77 | 46.28 |
| eng | 28.37 | 42.81 | 30.21 | 49.18 |
| deu | 32.53 | 40.40 | 28.80 | 41.92 |
| ell | 30.77 | 46.05 | 32.11 | 36.34 |
| kor | 30.28 | 41.80 | 34.33 | 44.63 |
| por | 29.24 | 40.11 | 27.55 | 38.08 |
| spa | 28.96 | 43.77 | 23.32 | 38.60 |
| tur | 30.44 | 43.93 | 30.24 | 40.46 |

| https://arxiv.org/abs/2505.16408v1 |

Table 13: Performance on MMLU when training each adapter with different WVS cultural data. Baseline refers to fine-tuning using English-language cultural value data with the Llama-3.1-8B and Llama-3.1-8B-IT models. Translated represents training with WVS cultural values translated into the respective target language, using the Llama-3.1-8B and Llama-3.1-8B-IT models. The zero-shot performance for Arabic is 0.35 with Llama-3.1-8B and 0.45 with Llama-3.1-8B-IT.

D Invalid Answer Check

D.1 Code for Invalid Answer Filtering

We process the generated response to determine whether it contains a valid answer using the following function. If the expected answer format is not detected, a default value is assigned.

    import re
    import torch

    # Generate a response from the model and check it for a valid answer.
    def getResponse(prompt, base_model=None, base_tokenizer=None):
        input_ids = base_tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
        base_model.eval()
        with torch.no_grad():
            output = base_model.generate(
                input_ids,
                max_new_tokens=25,
                num_beams=1,
                do_sample=False,  # greedy decoding
                eos_token_id=base_tokenizer.eos_token_id,
                pad_token_id=base_tokenizer.pad_token_id,
            )
        output_txt = base_tokenizer.decode(output[0], skip_special_tokens=True)
        # Extract the answer letter (e.g., "A")
        match = re.search(r"### Answer:\s*([A-D])", output_txt)
        if match:
            prediction = match.group(1)
            invalid_response = False
        else:
            prediction = "A"  # Default or handle as appropriate
            invalid_response = True
        return output_txt, prediction, invalid_response

Listing 1: Generating a response and filtering invalid responses from the model.

D.2 Invalid Examples

Table 14 illustrates examples of invalid responses from our evaluation datasets (English, German, and Spanish). The model failed to properly complete its task of detecting toxic content, showing truncated or incomplete outputs across different languages.

English: {"input": "@user @user legit nilas retarded idk", "output": "@user @user legit nilas retarded idk what to do @user @user @user", "extracted_output": "", "prediction": "", "label": "0", "invalid_response": true}

German: {"input": "@FobiMania @ManuelaSchwesig @MartinSchulz Ich scheisse auf die SPD. |LBR| SCHEINHEILGSTE PARTEI DEUTSCHLANDS !!!", "output": "### Question: Is the following sentence offensive: @FobiMania @ManuelaSchwesig @MartinSchulz Ich scheisse auf die SPD. |LBR| SCHEINHEILGSTE PARTEI DEUTSCHLANDS!!! \"1. Offensive\", \"2. Not offensive\" without explanation. ### Answer: @FobiMania @ManuelaSchw", "extracted_output": "@FobiMania @ManuelaSchw", "prediction": "@fobimania @manuelaschw", "label": "OFF", "invalid_response": true}

Spanish: {"input": "A ver si le dan un Goya al youtuber que se llevó la hostia por subnormal.", "output": "### Question: Does the following sentence contain hate speech: A ver si le dan un Goya al youtuber que se llevó la hostia por subnormal. \"1. Hatespeech\", \"2. Not Hatespeech\" without explanation. ### Answer: A ver si le dan un Goya al y", "extracted_output": "A ver si le dan un Goya al y", "prediction": "a ver si le dan un goya al y", "label": "HS", "invalid_response": true}

Table 14: Examples of invalid responses.

D.3 Invalid Rate

Table 15 shows our findings in evaluating the zero-shot model, compared to the models finetuned on WVS (Single-Culture-WVS) and WVS+Wikipedia (Single-Culture-WVS+Wiki). This table shows that the zero-shot performance of the models still suffers from the lack of instruction-following capability, even on the instruct variant.
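These ratios aggregate the invalid_response flags returned by getResponse (Listing 1). A minimal sketch of this aggregation, assuming a plain loop over an evaluation split (the exact evaluation script is not shown in the paper):

    def invalid_ratio(prompts, model, tokenizer):
        # Count responses flagged invalid by getResponse and report a percentage,
        # matching the "Invalid Ratio" column of Table 15 below.
        invalid = 0
        for p in prompts:
            _, _, is_invalid = getResponse(p, base_model=model, base_tokenizer=tokenizer)
            invalid += int(is_invalid)
        return 100.0 * invalid / len(prompts)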
We further see a notable decrease in the invalid response ratio after finetuning.

| Model | Scenario | Invalid Count | Total Samples | Invalid Ratio |
|---|---|---|---|---|
| Llama-3.1-8B | Zero-shot | 222.5849 | 1106 | 21.77% |
| Llama-3.1-8B | Single-Culture-WVS | 143.6604 | 1106 | 14.06% |
| Llama-3.1-8B | Single-Culture-WVS+Wiki | 133.6604 | 1106 | 12.97% |
| Llama-3.1-8B-IT | Zero-shot | 234.5660 | 1106 | 22.55% |
| Llama-3.1-8B-IT | Single-Culture-WVS | 134.3774 | 1106 | 13.16% |
| Llama-3.1-8B-IT | Single-Culture-WVS+Wiki | 121.5849 | 1106 | 12.01% |
| Llama-3.1-8B | Zero-shot | 11,797 | 58,638 | 20.12% |
| Llama-3.1-8B | Single-Culture-WVS | 94,663 | 645,018 | 14.68% |
| Llama-3.1-8B | Single-Culture-WVS+Wiki | 90,562 | 645,018 | 14.04% |
| Llama-3.1-8B-IT | Zero-shot | 12,432 | 58,638 | 21.20% |
| Llama-3.1-8B-IT | Single-Culture-WVS | 69,804 | 645,018 | 10.82% |
| Llama-3.1-8B-IT | Single-Culture-WVS+Wiki | 71,019 | 645,018 | 11.01% |

| https://arxiv.org/abs/2505.16408v1 |

Table 15: Invalid count and ratio for different models and scenarios.

E Prompts

Table 16 summarizes the prompt templates used by the getPrompt function for various tasks.

| Task(s) | Prompt Format |
|---|---|
| offensive_detect, abusive_detect | ### Question: is the following sentence offensive: {input_txt}. "1. Offensive", "2. Not offensive" without explanation. ### Answer: |
| hate_detect (excluding hate_detect_fine-grained) | ### Question: does the following sentence contain hate speech: {input_txt}. "1. Hatespeech", "2. Not Hatespeech" without explanation. ### Answer: |
| vulgar_detect_mp | ### Question: does the following sentence contain vulgar speech: {input_txt}. "1. Vulgar", "2. Not Vulgar" without explanation. ### Answer: |
| spam_detect | ### Question: is the following sentence a spam tweet: {input_txt}. "1. Spam", "2. Not Spam" without explanation. ### Answer: |
| hate_detect_fine-grained | ### Question: Does the following sentence contain hate speech? {input_txt} Please choose one of the following options without explanation: 1. Not Hatespeech, 2. Race, 3. Religion, 4. Ideology, 5. Disability, 6. Social Class, 7. Gender ### Answer: |
| offensive_detect_finegrained | ### Question: Does the following sentence contain offensive speech? {input_txt} Please choose one of the following options without explanation: 1. Not hatespeech 2. Profanity, or non-targeted offense 3. Offense towards a group 4. Offense towards an individual 5. Offense towards an other (non-human) entity ### Answer: |
| hate_off_detect | ### Question: does the following sentence contain hate speech or offensive content: {input_txt}. "1. Hate or Offensive", "2. Not Hate or Offensive" without explanation. ### Answer: |
| stereotype_detect, mockery_detect, insult_detect, improper_detect, aggressiveness_detect, toxicity_detect, negative_stance_detect, homophobia_detect, racism_detect, misogyny_detect, threat_detect, hostility_directness_detect | ### Question: does the following sentence contain {entity}: {input_txt}. "0. No", "1. Yes" without explanation. ### Answer: (Note: {entity} is derived from the task name, e.g., bias_on_gender_detect → gender bias, etc.) |
| hate_offens_detect | ### Question: does the following sentence contain hate speech: {input_txt}. "0. No", "1. Yes" without explanation. ### Answer: |

Table 16: Overview of prompts generated by getPrompt.

F Data Statistics

F.1 Training Data Statistics

Table 17 lists the data sources and URLs utilized in our experiments, encompassing the World Values Survey (WVS), Wikipedia cultural articles, and the NormAd dataset. Tables 18 and 19 provide detailed summary statistics for the Wikipedia and NormAd datasets respectively, outlining the total number of sentences, samples, and tokens per language.

| Source | URL |
|---|---|
| World Values Survey (WVS) | WVS |
| Wikipedia (Arab Culture) | Arab Culture |
| Wikipedia (Bengal Culture) | Culture of Bengal |
| Wikipedia (Chinese Culture) | Chinese Culture |
| Wikipedia (English Culture) | Culture of England |
| Wikipedia (German Culture) | Culture of Germany |
| Wikipedia (Greek Culture) | Culture of Greece |
| Wikipedia (Korean Culture) | Culture of Korea |
| Wikipedia (Portuguese Culture) | Culture of Portugal |
| Wikipedia (Spanish Culture) | Culture of Spain |
| Wikipedia (Turkish Culture) | Culture of Turkey |
| NormAd Dataset | NormAd |

Table 17: Data sources and URLs.
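The getPrompt function summarized in Table 16 above is essentially a template lookup keyed by task name. A minimal sketch, assuming a dictionary-dispatch structure; the authors' actual signature and template storage are not shown in the paper, and only a few representative templates from Table 16 are reproduced here:

    # Representative templates from Table 16 (truncated for brevity).
    PROMPT_TEMPLATES = {
        "offensive_detect": ('### Question: is the following sentence offensive: {input_txt}. '
                             '"1. Offensive", "2. Not offensive" without explanation. ### Answer:'),
        "hate_detect": ('### Question: does the following sentence contain hate speech: {input_txt}. '
                        '"1. Hatespeech", "2. Not Hatespeech" without explanation. ### Answer:'),
        "spam_detect": ('### Question: is the following sentence a spam tweet: {input_txt}. '
                        '"1. Spam", "2. Not Spam" without explanation. ### Answer:'),
    }

    def getPrompt(task, input_txt):
        if task in PROMPT_TEMPLATES:
            return PROMPT_TEMPLATES[task].format(input_txt=input_txt)
        # Generic yes/no tasks (stereotype_detect, mockery_detect, ...) share one
        # template; deriving {entity} from the task name is a simplification of
        # the mapping noted in Table 16.
        entity = task.replace("_detect", "").replace("_", " ")
        return (f'### Question: does the following sentence contain {entity}: {input_txt}. '
                f'"0. No", "1. Yes" without explanation. ### Answer:')

A call such as getPrompt("spam_detect", tweet) would then yield the exact prompt string fed to getResponse in Listing 1.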
| Language | Total Sentences | Total Tokens (Entire Text) | Total Tokens (Summed per Sentence) |
|---|---|---|---|
| Arabic | 257 | 8,990 | 9,018 |
| Bengali | 127 | 4,282 | 4,307 |
| Chinese | 388 | 13,929 | 13,938 |
| English | 434 | 15,632 | 15,688 |
| German | 171 | 6,322 | 6,338 |
| Greek | 250 | 11,806 | 11,825 |
| Korean | 150 | 5,678 | 5,687 |
| Portuguese | 186 | 10,286 | 10,298 |
| Spanish | 76 | 3,662 | 3,666 |
| Turkish | 143 | 6,573 | 6,581 |

Table 18: Summary statistics for each language in our Wikipedia training dataset.

| Language | Samples | Tokens |
|---|---|---|
| Arabic | 239 | 102,705 |
| Spanish | 234 | 74,674 |
| Chinese | 134 | 35,988 |
| English | 209 | 82,144 |
| Korean | 27 | 6,784 |
| German | 76 | 21,209 |
| Bengali | 33 | 7,659 |
| Portuguese | 77 | 19,022 |
| Greek | 69 | 23,961 |
| Turkish | 35 | 15,391 |

| https://arxiv.org/abs/2505.16408v1 |

Table 19: Summary statistics for each language in our NormAd training dataset.

F.2 Test Data Statistics

Following Li et al. (2024a), we break down our culture test set below.

Arabic (METHOD-Ar); Middle East; 14,973 samples.
- Offensive language detection: OffensEval2020(2000) (Zampieri et al., 2020b), OSACT4(1000) (Husain, 2020), Multi-Platform(1000) (Chowdhury et al., 2020), and OSACT5(2541) (Mubarak et al., 2022).
- Hate detection: OSACT4(1000) (Husain, 2020), Multi-Platform(675) (Chowdhury et al., 2020), OSACT5(2541) (Mubarak et al., 2022), and OSACT5_finegrained(2541) (Mubarak et al., 2022).
- Spam detection: ASHT(1000) (Kaddoura and Henno, 2024).
- Vulgar detection: Multi-Platform(675) (Chowdhury et al., 2020).

Bengali (METHOD-Bn); Bangladesh; 6,000 samples.
- Offensive language detection: TRAC2020 Task1(1000) (Bhattacharya et al., 2020), TRAC2020 Task2(1000) (Bhattacharya et al., 2020), BAD(1000) (Sharif and Hoque, 2022).
- Hate detection: Hate Speech(1000) (Romim et al., 2021).
- Threat detection: BACD(1000) (aimansnigdha, 2018).
- Bias detection: BACD(1000) (aimansnigdha, 2018).

Chinese (METHOD-Zh); China; 3,712 samples.
- Spam detection: CCS(1000) (Jiang et al., 2019).
- Bias detection: CDial-Bias(1000) (Zhou et al., 2022).
- Stance detection: CValues(1712) (Xu et al., 2023).

English (METHOD-En); United States; 6,000 samples.
- Offensive language detection: SOLID(1000) (Rosenthal et al., 2020).
- Hate detection: MLMA(1000) (Ousidhoum et al., 2019) and HOF(1000) (Davidson et al., 2017).
- Threat detection: JMT(1000) (Kaggle, 2019).
- Toxicity detection: MLMA(1000) (Ousidhoum et al., 2019) and JMT(1000) (Kaggle, 2019).

German (METHOD-De); Germany and parts of Europe; 6,319 samples.
- Offensive language detection: GermEval2018(3531) (Wiegand et al., 2018).
- Hate detection: IWG_1(469) (Ross et al., 2016), IWG_2(469) (Ross et al., 2016), HASOC2020(850) (HASOC, 2020), and multilingual-hatecheck(1000) (Röttger et al., 2022).

Korean (METHOD-Ko); South Korea; 5,000 samples.
- Hate detection: K-MHaS(1000) (Lee et al., 2022), hateSpeech(1000) (Moon et al., 2020), and HateSpeech2(1000) (daanVeer, 2020).
- Abusive detection: AbuseEval(1000) (Caselli et al., 2020), CADD(1000) (Song et al., 2021), and Waseem(1000) (Waseem and Hovy, 2016).

Portuguese (METHOD-Pt); Brazil and parts of Latin America; 16,250 samples.
- Offensive language detection: OffComBR(1250) (de Pelle and Moreira, 2017), and HateBR(1000) (Vargas et al., 2022).
- Bias detection: ToLD-Br-homophobia(1000) (Leite et al., 2020), and ToLD-Br-misogyny(1000) (Leite et al., 2020).
- Abusive detection: ToLD-Br-insult(1000) (Leite et al., 2020).

Spanish (METHOD-Es); Argentina, Mexico, and parts of Latin America; 11,000 samples.
- Offensive language detection: AMI(1000) (Fersini et al., 2018), MEX-A3T(1000) (Álvarez-Carmona et al., 2018), and OffendES(1000) (Plaza-del Arco et al., 2021).
- Hate detection: HatEval 2019(1000) (Basile et al., 2019), and HaterNet(1000) (Pereira-Kohatsu et al., 2019).
- Bias detection: DETOXIS_stereotype(1000) (de Paula and Schlicht, 2021), and DETOXIS_improper(1000) (de Paula and Schlicht, 2021).
- Abusive detection: DETOXIS_abusive(1000) (de Paula and Schlicht, 2021), DETOXIS_mockery(1000) (de Paula and Schlicht, 2021).
- Aggressiveness detection: DETOXIS_aggressiveness(1000) (de Paula and Schlicht, 2021).

| https://arxiv.org/abs/2505.16408v1 |

- Stance detection: DETOXIS_stance(1000) (de Paula and Schlicht, 2021).

Turkish (METHOD-Tr); Turkey; 10,353 samples.
- Offensive language detection: SemEval-2020(3528) (Zampieri et al., 2020b), offenseCorpus(1000) (Çöltekin, 2020), offenseKaggle(1000) (Kaggle, 2021), and offenseKaggle_2(1000) (Kaggle, 2022).
- Abusive detection: ATC(1000) (Karayiğit et al., 2021).
- Spam detection: Turkish Spam(825) (mis, 2019).
- Fine-grained offensive detection: offenseCorpus(1000) (Çöltekin, 2020).

Table 20: Overview of the eight evaluation tasks and the 59 datasets used, including dataset names and their corresponding test sample sizes. For example, "OffensEval2020(2000) (Zampieri et al., 2020b)" indicates that the OffensEval2020 dataset contains 2,000 test samples.

G Cross-Cultural | https://arxiv.org/abs/2505.16408v1 |
Confusion Matrix on Llama-3.1-8B

[Heatmap: "Adapter Performance Across Cultures for Meta-Llama-3.1-8B"; rows: adapter culture (arabic, combined, bengali, chinese, english, german, greek, korean, portuguese, spanish, turkish); columns: test culture (arabic through turkish); cell values: F1 score (%), color scale 15-40.]

Figure 10: Cross-culture confusion matrix for the WVS-only baseline on Llama-3.1-8B (8B, base). The C-DIST score is ≈0.78, reflecting substantial overlap in predictions across cultures. | https://arxiv.org/abs/2505.16408v1 |
Tool-Star: Empowering LLM-Brained Multi-Tool Reasoner via Reinforcement Learning

Guanting Dong1, Yifei Chen1, Xiaoxi Li1, Jiajie Jin1, Hongjin Qian2, Yutao Zhu1, Hangyu Mao3, Guorui Zhou3, Zhicheng Dou1∗, Ji-Rong Wen1
1Renmin University of China 2BAAI 3Kuaishou Technology
{dongguanting, dou}@ruc.edu.cn

Abstract

Recently, large language models (LLMs) have shown remarkable reasoning capabilities via large-scale reinforcement learning (RL). However, leveraging the RL algorithm to empower effective multi-tool collaborative reasoning in LLMs remains an open challenge. In this paper, we introduce Tool-Star, an RL-based framework designed to empower LLMs to autonomously invoke multiple external tools during stepwise reasoning. Tool-Star integrates six types of tools and incorporates systematic designs in both data synthesis and training. To address the scarcity of tool-use data, we propose a general tool-integrated reasoning data synthesis pipeline, which combines tool-integrated prompting with hint-based sampling to automatically and scalably generate tool-use trajectories. A subsequent quality normalization and difficulty-aware classification process filters out low-quality samples and organizes the dataset from easy to hard. Furthermore, we propose a two-stage training framework to enhance multi-tool collaborative reasoning by: (1) cold-start fine-tuning, which guides LLMs to explore reasoning patterns via tool-invocation feedback; and (2) a multi-tool self-critic RL algorithm with hierarchical reward design, which reinforces reward understanding and promotes effective tool collaboration. Experimental analyses on over 10 challenging reasoning benchmarks highlight the effectiveness and efficiency of Tool-Star. The code is available at https://github.com/dongguanting/Tool-Star.

[Figure panels: comparison of ToRL(3B), ReCall(3B), Tool-Star(3B), and Search-o1(3B) on WebWalker, HotpotQA, 2Wiki, MuSiQue, Bamboogle, AIME24, MATH500, AIME25, GSM8K, and MATH.]
Figure 1: Overview result comparison of Tool-Star and other TIR baselines. Left: Average tool-use accuracy (TE) on knowledge-intensive and computational reasoning datasets. Right: The performance overview on 10 challenging reasoning datasets.

∗Corresponding author.
Preprint. Under review.

1 Introduction

Recent advances in large language models (LLMs) have demonstrated remarkable reasoning capabilities empowered by large-scale reinforcement learning (RL) [59,21,57,36,79]. Advanced models exemplified by Deepseek R1 [9] and OpenAI o1 [41] exhibit diverse emergent behaviors in Chain-of-Thought (CoT) reasoning, including deep thinking and self-reflection, thereby improving problem-solving performance on complex reasoning tasks [48,40]. However, real-world reasoning scenarios often require models to integrate multiple capabilities through interaction with the environment, such as deep information seeking, long-term knowledge memory, and precise computation. To achieve this, Tool-Integrated Reasoning (TIR) [17,28,67,72,42] approaches encourage LLMs to interact with external tools during the reasoning process, substantially expanding the paradigm from language-only reasoning to a broader scope of applications.

Current foundational work on TIR primarily focuses on distilling tool-use trajectories from strong models and applying supervised fine-tuning (SFT) to guide weaker models via imitation learning [17,27,76,22].
As large reasoning models exhibit emergent capabilities, subsequent efforts aim to enhance long chain-of-thought reasoning in LLMs through tool-integrated prompting design [2,30,31,69,62]. However, these approaches rarely enable LLMs to autonomously discover effective reasoning patterns of tool usage. To address this limitation, recent studies introduce outcome-based rewards [4,51], extending reinforcement learning to the TIR domain [53,23,1,16,35,64,46]. While RL-based methods encourage exploration of efficient tool-use behaviors, they primarily focus on interaction with a single tool. Notably, real-world reasoning tasks often require LLMs to possess both accurate computation and dynamic information-seeking capabilities (see https://openai.com/index/introducing-o3-and-o4-mini/). This necessitates | https://arxiv.org/abs/2505.16410v1 |
deeper integration of feedback from multiple tools, particularly search engines and code generators. The lack of systematic investigation into multi-tool collaborative reasoning presents a major bottleneck for practical TIR deployment. In this work, we aim to bridge this gap by addressing the following research questions:

1. Rationality and efficiency of tool usage. How to enable the model to perform efficient tool-augmented reasoning while balancing the cost of tool invocation.
2. Multi-tool collaborative reasoning. How to effectively enable models to integrate the functionalities of multiple tools throughout the reasoning process.

In this paper, we propose Tool-Star, a reinforcement learning-based framework designed to empower LLMs to autonomously invoke multiple external tools during stepwise reasoning. Specifically, Tool-Star integrates six types of tools into the reasoning process (three for training and three for inference-time optimization) and incorporates systematic designs in both data synthesis and training algorithms. To address the scarcity of tool-use data, we first design a general Tool-Integrated Reasoning Data Synthesis Pipeline that combines tool-integrated prompting with hint-based sampling to automatically generate large-scale tool-use trajectories. We then introduce a quality normalization and difficulty-aware classification process to effectively filter out unreasonable tool-use samples and partition the data in a curriculum-like manner from easy to hard [54,11,12]. Leveraging this pipeline, we construct high-quality datasets for both cold-start fine-tuning and reinforcement learning in a staged manner, laying a solid foundation for subsequent TIR training.

To incentivize the model's capability for multi-tool collaboration, we propose a two-stage TIR training framework that progressively aligns reasoning abilities in an easy-to-hard manner. (1) In the first stage, we introduce a Cold-Start Supervised Fine-Tuning strategy, allowing LLMs to initially explore reasoning patterns with feedback from tool invocation. (2) In the second stage, we develop a Multi-Tool Self-Critic Reinforcement Learning Algorithm. Unlike prior RL approaches that focus on single-tool usage, our method employs a hierarchical reward mechanism that not only evaluates answer correctness and tool-use format but also assigns additional rewards for effective multi-tool collaboration. To further improve the model's understanding of this complex reward structure, we interleave a self-critic reward fine-tuning phase within the standard RL process, facilitating the internalization of reward principles.

To comprehensively evaluate the capability of Tool-Star, we design experiments across more than 10 challenging computational reasoning tasks (e.g., AIME24 and MATH500 [37]) and knowledge-intensive reasoning tasks (e.g., WebWalker [68] and HotpotQA [73]). As illustrated in Figure 1, Tool-Star demonstrates strong overall reasoning performance while ensuring both efficiency and reliability in tool usage. In summary, our main contributions are as follows:

•We propose Tool-Star, a two-stage training framework to enhance multi-tool collaborative reasoning in LLMs: (1) Cold-Start fine-tuning enables LLMs to explore reasoning patterns guided by tool-invocation feedback; (2) a Multi-Tool self-critic RL algorithm with a hierarchical reward design reinforces the LLM's grasp of reward principles and promotes effective multi-tool collaboration.
•To address the scarcity of tool-use data, we propose a general TIR data synthesis pipeline that combines tool-integrated prompting with hint-based sampling to automatically and scalably generate tool-use trajectories. We further introduce a quality normalization and difficulty-aware classification process | https://arxiv.org/abs/2505.16410v1 |
to filter out unreasonable samples and organize the dataset from easy to hard.

•Experiments on more than 10 challenging reasoning datasets validate the effectiveness of Tool-Star. Further quantitative analyses demonstrate its high efficiency, offering insights for incentivizing multi-tool collaborative reasoning.

2 Related Work

Tool-Integrated Reasoning. Tool-Integrated Reasoning (TIR) has emerged as a promising paradigm for enhancing LLM reasoning by enabling autonomous invocation of external tools. Existing TIR approaches can be broadly categorized into three streams: (1) Prompting-based methods guide models to use tools via carefully crafted prompts without additional training [2,30,38,31,69,25,62]. While easy to implement, they often suffer from instability and limited accuracy in tool usage. (2) SFT-based methods apply supervised fine-tuning to teach weaker models tool use by distilling trajectories from stronger models [17,27,76,22,33,10,13,28]. Though effective, their performance is constrained by the quality of demonstrations and limited generalization beyond seen examples. (3) RL-based methods optimize tool-use behavior via outcome-driven rewards [53,56,23,1,16,35,32,52], enabling models to autonomously discover effective tool-use strategies through exploration. This often leads to more robust and adaptive tool utilization than imitation-based approaches. Despite these advances, most work focuses on single-tool settings. Multi-tool collaborative reasoning, which requires coordination across heterogeneous tools (e.g., search engines, code interpreters), remains underexplored. While systems like OTC [64] and ToolRL [45] have made initial progress, there is still a significant gap in developing specialized RL algorithms and reward mechanisms tailored for multi-tool collaboration.

Reinforcement Learning for Reasoning. Recent advances in reinforcement learning (RL) have significantly enhanced reasoning capabilities in language models. Large-scale RL implementations such as DeepSeek-R1 [9], OpenAI-o1 [41], and QwQ [60] have demonstrated the effectiveness of RL for improving reasoning performance. Concurrently, several studies have explored achieving emergent reasoning capabilities with minimal resources, either through high-quality but small-scale datasets or smaller parameter models [79,74,34]. Notable examples include SimpleRL-Zoo [79], LIMO [74], LIMR [34], and Qwen's mathematical reasoning approach [71], with additional insights from works examining RL effectiveness in both small [7] and large models with minimal examples [66]. One line of research has focused on analyzing RL training strategies and reward function design for reasoning tasks, providing practical insights for broader applications. Key contributions include PRIME [6], which explores process reinforcement through implicit rewards, empirical studies on improving R1-like reasoning models [3,75], and DAPO's scalable RL system [77]. Another research direction has concentrated on optimizing reasoning efficiency through specialized RL algorithms, as demonstrated in works on concise reasoning [15,47,55]. More recently, researchers have begun integrating external tools to enhance reasoning accuracy, with notable examples including ToRL [35], ReTool [16], ToolRL [46], and OTC [64]. However, exploration of multi-tool collaborative reasoning remains relatively limited, highlighting a significant gap in current research.
3 Methodology

Tool-Star is an RL-based framework that empowers LLMs to autonomously invoke multiple external tools during reasoning. As shown in Figure 2 and Figure 3, Tool-Star includes two key components: (1) a tool-integrated data synthesis pipeline that generates large-scale, high-quality reasoning trajectories with reasonable tool usage (§3.1); (2) a two-stage training | https://arxiv.org/abs/2505.16410v1 |
paradigm that facilitates multi-tool collaborative reasoning, comprising cold-start fine-tuning and a multi-tool self-critic RL algorithm (§3.2). We delve into the specifics of our approach as follows.

Problem Formulation. Multi-tool Collaborative Reasoning aims to enable LLMs to perform multi-step reasoning through interaction and collaboration with multiple external tools. Specifically, given a task query q and an external tool set T, Tool-Star autonomously invokes tools during the generation of the reasoning chain R^c, guided by the tool-integrated instruction I_T. The real-time tool-call feedback F_T is dynamically concatenated into the reasoning chain to facilitate ongoing inference until the final output y is produced. This process can be modeled as:

P_\theta(\mathcal{R}^c, y \mid I_\mathcal{T}, q, \mathcal{T}) = \underbrace{\prod_{t=1}^{T_c} P_\theta(\mathcal{R}^c_t \mid \mathcal{R}^c_{<t}, I_\mathcal{T}, q, \{F_\mathcal{T}\}_{<t})}_{\text{Multi-Tool Integrated Reasoning}} \cdot \underbrace{\prod_{t=1}^{T_y} P_\theta(y_t \mid y_{<t}, \mathcal{R}^c, I_\mathcal{T}, q)}_{\text{Answer Generation}}, \quad (1)

where T_c denotes the number of tokens in the reasoning chain R^c, R^c_t is the token at position t, and R^c_{<t} represents all tokens generated before t. {F_T}_{<t} denotes the feedback of all tool calls prior to t. T_y is the length of the answer y, with y_t as the token at position t.

Tool Design. In this work, we design six tools to enhance the reasoning capabilities of LLMs within the Tool-Star framework. During the two-stage training process, we introduce three core tools that enable LLMs to autonomously invoke external functionalities during reasoning:

•Search Engine: Executes search queries to retrieve relevant information, supporting both local and web-based search modes.
•Web Browser Agent: Parses web search results by visiting URLs, extracting relevant content, and summarizing key information in response to the query.
•Code Interpreter: Executes code snippets generated by the LLM in a sandbox environment, returning either the execution results or error messages based on code correctness.

During inference, we introduce three inference-time tools to optimize the reliability of the TIR process:

•Code Debugger: Automatically corrects LLM-generated erroneous code by leveraging the original code and compiler error messages to guide revisions.
•Tool-Use Backtracer: Locates and rolls back to the reasoning step preceding a failed tool invocation, enabling the model to resume and revise its reasoning path.
•Reasoning Chain Refiner: When the output exceeds the maximum length, the refiner prunes and optimizes redundant steps in the reasoning process, replacing the original reasoning chain with a more concise and coherent version to complete the task.

3.1 Tool-Integrated Reasoning Data Synthesis

In this section, we present our tool-integrated reasoning data synthesis pipeline, aiming to enable automated and scalable construction of high-quality tool-use datasets (Figure 2).

3.1.1 Step-1: Data Collection and Sampling

To balance data scale and diversity while ensuring accessibility, we curate a high-quality training set from open-source knowledge-based and computational reasoning datasets, comprising approximately 90K text-based reasoning samples (D_text) and 1K samples from existing TIR datasets (D_tool), as summarized in Table 5. To further expand TIR trajectories, we introduce two complementary sampling strategies:

(1) TIR Prompting-based Sampling. We design a tool-integrated prompt I_T to guide the LLM P_θ in sampling responses for queries in D_text. Following Eq. (1), the LLM decodes tool-invoking requests within special tokens (e.g., <search> ... </search>, <python> ... </python>) during trajectory generation.
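As a concrete illustration of this special-token protocol, the following minimal sketch extracts the most recent tool request from a decoded trajectory; it is an assumed helper for exposition, not the authors' released parser:

    import re

    # Matches <search>...</search> or <python>...</python> spans, per the
    # tool-invocation token format described above.
    TOOL_PATTERN = re.compile(r"<(search|python)>(.*?)</\1>", re.DOTALL)

    def extract_tool_request(trajectory: str):
        """Return (tool_name, request_body) for the latest tool call, or (None, None)."""
        matches = list(TOOL_PATTERN.finditer(trajectory))
        if not matches:
            return None, None
        last = matches[-1]
        return last.group(1), last.group(2).strip()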
We then automatically parse and extract these requests, invoke external | https://arxiv.org/abs/2505.16410v1 |
tools to obtain feedback F, and insert the tool feedback, enclosed within <result> and </result> tags, back into the reasoning chain as additional context for subsequent generation steps. This process iterates until either (1) the maximum number of tool calls or the maximum length is reached, or (2) the model generates a final answer, enclosed by the designated tokens <answer> and </answer>. After inference on the entire D_text, we filter for correct samples to obtain the dataset D^P_tool.

Figure 2: The overview of the 3-step tool-integrated reasoning data synthesis pipeline.

(2) Hint-based Sampling. To further diversify tool-invoking patterns, we employ a hint-based method [29] that inserts hint tool-call tokens into language-only reasoning trajectories. We first prompt the LLM to perform language-only reasoning on queries from D_text. Following START [28], we propose two hint instructions, Logical Verification and Answer Reflection, to insert tool-invoking hints into the original reasoning chains. As shown in Figure 2, logical verification hints randomly replace uncertain expressions (e.g., maybe, wait, not sure) in the chain, while reflection hints are inserted after the answer. These diverse hints encourage the model to invoke tools when information is insufficient or after answer generation, enabling information completion and answer verification. After inserting hints, we truncate the original reasoning chain at the hint position, prompting the model to perform tool-augmented reasoning in response to the hint, which can be formulated as:

P_\theta(\mathcal{R}^c_{>t_H}, y \mid I_\mathcal{T}, q, \mathcal{R}^c_{\le t_H}, \mathcal{T}) = \underbrace{\prod_{t=t_H}^{T_c} P_\theta(\mathcal{R}^c_t \mid \mathcal{R}^c_{\le t}, I_\mathcal{T}, q, \{F_\mathcal{T}\}_{\le t})}_{\text{Hint-based Tool-Integrated Reasoning}} \cdot \underbrace{\prod_{t=1}^{T_y} P_\theta(y_t \mid y_{<t}, \mathcal{R}^c, I_\mathcal{T}, q)}_{\text{Answer Generation}}, \quad (2)

Unlike Eq. (1), t_H is the end position of the hint insertion, and R^c_{\le t_H} represents the language-only reasoning chain before the hint position. The LLM continues reasoning from R^c_{\le t_H} with q to derive the tool-integrated reasoning chain R^c_{>t_H}. Other variables are defined consistently with Eq. (1). After filtering the samples for correct answers, we obtain the hint-based TIR dataset D^H_tool. Finally, we merge these with the existing TIR data to obtain the final dataset D^{v1}_tool = {D^P_tool ∪ D^H_tool ∪ D_tool}.

3.1.2 Step-2: Tool-Use Quality Normalization

To ensure the rationality of tool usage within each sample, we implement the following three TIR normalization strategies for tool-use data quality control:

(1) Tool-call Frequency Control: Remove samples with tool-call frequency exceeding a predefined threshold β to alleviate excessive tool invocation.
(2) Duplicate Tool-call Removal: Eliminate samples containing redundant tool calls, such as repeated generation of identical search queries or code snippets in the same response.
(3) Format Normalization: Standardize tool-call formats in reasoning chains by unifying special tokens for invocation, feedback, and final answers, while ensuring balanced usage of start and end tokens.

By applying these criteria, we obtain a quality-filtered dataset denoted as D^{v2}_tool.

3.1.3 Step-3: Difficulty-Aware Data Classification

Considering the computational overhead of tool use and the multi-stage nature of TIR training, we argue that a high-quality tool-use dataset should meet the following criteria: (1) Invoke tools only when necessary: Tool calls should be avoided when the model is capable of solving the problem through direct reasoning. (2) Organize samples from easy to hard: As emphasized in prior RL tool learning [53, 81], stage-wise training | https://arxiv.org/abs/2505.16410v1 |
based on sample difficulty is crucial for effective learning.

To achieve the above objectives, we first perform a language-only reasoning pass on each question in the dataset D^{v2}_tool, yielding a direct reasoning result denoted as D^{v2}_text. Based on the correctness of both direct reasoning (DR) and tool-integrated reasoning (TIR), each sample is categorized into four types (Figure 2). For categories 1 and 2, where the model already produces correct answers through direct reasoning, tool use is deemed unnecessary. These samples are extracted from D^{v2}_text to form a subset D^{sub}_text. For category 3, which clearly illustrates the advantages of tool use over language-only reasoning, we directly sample from D^{v2}_tool to construct the subset D^{sub}_tool. To support a curriculum learning paradigm [11] from easy to hard, we construct a cold-start fine-tuning dataset D^{SFT}_tool by combining D^{sub}_text and D^{sub}_tool. Category-4 samples, which are challenging for both DR and TIR, are treated as hard examples and reserved for reinforcement learning, forming the dataset D^{RL}_tool. This design enables the LLM to acquire basic tool-use capabilities through cold-start fine-tuning, and subsequently generalize to more complex scenarios during the RL phase (e.g., multi-tool collaboration), thereby facilitating a progressive learning trajectory.

3.2 Tool-Star: A Two-stage Multi-tool Collaborative Training Framework

In this section, we propose a two-stage multi-tool collaborative training framework that generalizes the LLM's TIR capability from a single-tool to a multi-tool paradigm, and from easy to hard learning.

Cold-Start Supervised Fine-tuning. To equip the LLM with an initial understanding of tool usage for problem solving, given (x_i, y_i) ∈ D^{SFT}_tool, we apply the standard supervised fine-tuning objective on the backbone model P_θ with parameters θ:

\mathcal{L}(\theta) = -\sum_{(x_i, y_i) \in D^{SFT}_{tool}} \log P_\theta(y_i \mid x_i),

where x_i denotes the i-th input. Ultimately, we obtain a cold-start LLM \hat{\pi}_\theta with initial TIR capability.

Multi-Tool Self-Critic Reinforcement Learning. We now introduce how the LLM learns to autonomously invoke tools, including a code interpreter, a search engine, and a web browser agent.

(1) Memory-based Roll-Out with Tools. As shown in Figure 3, we employ multi-tool invocation instructions to guide the model in decoding tool-use requests into special tokens (e.g., <python>) during the roll-out process. Upon detecting these tokens, the corresponding tool is automatically invoked, and the resulting feedback is integrated back into the reasoning chain. To reduce latency caused by frequent tool calls, we incorporate a memory mechanism that caches the mapping between each tool request and its output. This allows the model to retrieve responses for repeated requests directly from memory, thereby improving efficiency.

(2) Hierarchical Reward Design. Reward signals serve as the optimization objective and directly guide the behavior of the policy model during training. Distinct from previous tool-use RL approaches, we not only design correctness and format rewards for LLMs, but also introduce multi-tool collaborative rewards. This design aims to encourage multi-tool usage while maintaining LLM correctness. Specifically, when both the answer and the tool-invocation format are correct, and the model employs multiple tools (i.e., both <search> and <python> appear in the reasoning chain), an extra reward r_M is granted. Formally, the overall reward R is defined | https://arxiv.org/abs/2505.16410v1 |
as follows:

R = \begin{cases} \max(\text{Acc} + r_M,\ \text{Acc}) & \text{if format is correct and } \text{Acc} > 0 \\ 0 & \text{if format is correct and } \text{Acc} = 0 \\ -1 & \text{otherwise,} \end{cases} \qquad r_M = \begin{cases} 0.1 & \text{if both <search> and <python> appear} \\ 0 & \text{otherwise.} \end{cases} \quad (3)

[Figure 3 panels: (a) GRPO with tool environment (rollout module with memory-backed tool environment, reference model, reward model, group computation, autonomous iterative tool calling); (b) multi-tool self-critic RL (K-step RL training, self-sampling, reward assigning, reward-aware preference dataset, online self-critic DPO training).]
Figure 3: The overall framework of Multi-Tool Self-Critic Reinforcement Learning.

(3) Self-Critic RL Algorithm. Multi-tool RL algorithms involve complex reward structures, making it challenging for LLMs to discover optimal behavior. To address this, we propose a self-critic RL algorithm that enables LLMs to better align with the design principles of reward mechanisms. As illustrated in Figure 3, we begin by performing K steps of vanilla RL training on the cold-start model \hat{\pi}_\theta. To optimize the policy, we adopt Group Relative Policy Optimization (GRPO) [51] as our RL algorithm, which estimates the baseline using a group of rollouts:

\mathcal{L}_{GRPO}(\theta) = \mathbb{E}_{q \sim D^{RL}_{tool},\, \{o_i\}_{i=1}^{G} \sim \hat{\pi}_{\theta_{old}}(O \mid q)} \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left[ \min\left( \frac{\hat{\pi}_\theta(o_{i,t} \mid q, o_{i,<t})}{\hat{\pi}_{\theta_{old}}(o_{i,t} \mid q, o_{i,<t})} \hat{A}_{i,t},\ \mathrm{clip}\left( \frac{\hat{\pi}_\theta(o_{i,t} \mid q, o_{i,<t})}{\hat{\pi}_{\theta_{old}}(o_{i,t} \mid q, o_{i,<t})}, 1-\epsilon, 1+\epsilon \right) \hat{A}_{i,t} \right) - \beta D_{KL}[\hat{\pi}_\theta \| \hat{\pi}_{ref}] \right], \quad (4)

where ε and β are hyper-parameters, q and o denote the query and the model's tool-augmented output, and \hat{A}_{i,t} is the normalized advantage of the i-th rollout in the group. From this stage we obtain an RL-initialized model \pi^{RL}_\theta.

Subsequently, we detail our self-critic reward fine-tuning phase, which helps LLMs better internalize the reward structure. As shown in Figure 3, we start by randomly rejection-sampling k examples from the RL training set D^{RL}_tool to construct D^S_tool [61,78]. For each query q ∈ D^S_tool, the RL-initialized model \pi^{RL}_\theta self-samples N candidate responses, forming a diverse QA set D ∼ (q, {a_i}_{i=1}^N). Notably, our hierarchical rule-based reward function acts as an executable program that automatically assigns a reward label to each response. This enables on-policy reasoning by providing each a_i with a corresponding reward. We then construct a reward-aware dataset D^{critic}_tool = {(x_i, y_i)}_{i=1}^N, where each input x_i is a query q, and each output y_i includes a candidate response a_i, its reasoning trace p_i, and reward score r_i. We treat samples with r_i ≥ 1 as positive and those with r_i < 1 as negative, forming a preference dataset (x, y_w, y_l) ∼ D^{critic}_tool. Finally, we fine-tune the RL-initialized model using the DPO objective:

\mathcal{L}_{DPO}(\pi^{RL}_\theta; \pi_{ref}) = -\mathbb{E}_{(x, y_w, y_l) \sim D}\left[ \log \sigma\left( \beta \log \frac{\pi^{RL}_\theta(y_w \mid x)}{\pi_{ref}(y_w \mid x)} - \beta \log \frac{\pi^{RL}_\theta(y_l \mid x)}{\pi_{ref}(y_l \mid x)} \right) \right], \quad (5)

where the reference model \pi_{ref} is set to \pi^{RL}_\theta initially and remains fixed throughout training, β is a hyperparameter, and σ is the sigmoid function. \mathcal{L}_{DPO} encourages higher log-probability for preferred responses y_w over dispreferred ones y_l. We interleave self-critic reward fine-tuning every k steps during standard RL training.
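To make the hierarchical reward concrete, here is a minimal sketch of Eq. (3) and the positive/negative split used to build the preference data; acc and format_ok are assumed to come from the rule-based accuracy and format checkers, which are not shown in the paper:

    def hierarchical_reward(response: str, acc: float, format_ok: bool) -> float:
        # Eq. (3): format errors are penalized, wrong answers get zero reward,
        # and correct answers may earn the multi-tool collaboration bonus r_M.
        if not format_ok:
            return -1.0
        if acc <= 0:
            return 0.0
        r_m = 0.1 if ("<search>" in response and "<python>" in response) else 0.0
        return max(acc + r_m, acc)

    def preference_label(reward: float) -> str:
        # Samples with reward >= 1 are treated as positive (y_w), others as negative (y_l).
        return "positive" if reward >= 1.0 else "negative"

Under this scheme, a fully correct single-tool answer scores 1.0 and a fully correct multi-tool answer 1.1; both fall into the positive class consumed by the interleaved self-critic fine-tuning described above.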
This iterative process allows the model to progressively learn behavioral distinctions via self-sampling and evaluation, implicitly capturing the hierarchical reward structure. The full algorithm is detailed in Algorithm 1. 3.3 | https://arxiv.org/abs/2505.16410v1 |
Multi-tool Collaborative Inference

During inference, we follow Eq. (1), enabling the model to autonomously use the search engine, web browser agent, and Python interpreter. To improve TIR robustness, we introduce three inference-time mechanisms targeting common failure scenarios:

•Code Execution Error: When generated code contains syntax errors, the Code Debugger uses compiler messages and the original code to produce executable fixes.
•Tool Invocation Failure: If tool calls yield no useful output or debugging fails, the Backtracer identifies the statement before the tool-invoking token and restarts inference from there.
•Inference Length Overflow: When tool outputs exceed the context limit, the Chain Refiner compresses and restructures the reasoning chain by removing redundant or irrelevant content, allowing continued inference without tools.

These mechanisms enhance tool-invocation reliability and provide an effective inference-time scaling strategy. Further details are listed in Appendix B.2.

Table 1: Overall results on 10 challenging reasoning tasks (AIME24 through MATH are computational reasoning; WebWalker through Bamb. are knowledge-intensive reasoning). The top two results are highlighted in bold and underlined. All baseline methods use Qwen2.5-3B-Instruct as the backbone. For baselines lacking a 3B Instruct version, we replicate them using the exact parameter settings from their open-source code. To ensure fairness, we report the RAG setting (Top-5 Docs) for the code-assistant model in knowledge-intensive reasoning. Dataset abbreviations: HQA (HotpotQA), 2Wiki. (2WikiMultihopQA), MuSiQ. (MuSiQue), and Bamb. (Bamboogle).

| Method | AIME24 | AIME25 | MATH500 | GSM8K | MATH | WebWalker | HQA | 2Wiki. | MuSiQ. | Bamb. | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-3B-Instruct | 10.0 | 6.7 | 63.0 | 75.0 | 71.6 | 0.5 | 9.7 | 9.4 | 3.6 | 11.7 | 26.1 |
| Llama3.2-3B-Instruct | 0.0 | 3.3 | 40.0 | 71.2 | 58.2 | 0.5 | 12.5 | 9.2 | 4.0 | 18.3 | 21.7 |
| Code-Assistant Reasoning Methods | | | | | | | | | | | |
| ToRL | 20.0 | 10.0 | 72.0 | 84.4 | 81.0 | 12.0 | 37.9 | 27.0 | 8.3 | 25.4 | 37.8 |
| DotaMath | 3.3 | 6.7 | 56.2 | 78.2 | 71.8 | 11.5 | 35.6 | 31.2 | 7.5 | 23.8 | 32.6 |
| Search-Enhanced Reasoning Methods | | | | | | | | | | | |
| RAG | 13.3 | 10.0 | 54.0 | 46.0 | 56.0 | 14.6 | 39.4 | 31.2 | 10.3 | 17.4 | 29.0 |
| Search-o1 | 16.7 | 3.3 | 69.0 | 34.0 | 63.0 | 13.0 | 34.9 | 28.9 | 9.6 | 35.1 | 30.2 |
| Search-R1 | 0.0 | 3.3 | 26.0 | 43.0 | 44.0 | 14.4 | 43.2 | 25.5 | 16.5 | 40.8 | 25.7 |
| Multi-Tool Integrated Reasoning Methods | | | | | | | | | | | |
| Multi-Tool Prompting | 3.3 | 3.3 | 54.2 | 48.8 | 59.6 | 9.2 | 15.9 | 16.2 | 6.5 | 17.8 | 23.4 |
| ReCall | 16.6 | 6.7 | 63.0 | 77.8 | 74.2 | 13.0 | 43.5 | 38.9 | 16.5 | 40.8 | 39.1 |
| Tool-Star (Qwen2.5-3B) | 20.0 | 16.7 | 72.0 | 85.0 | 82.6 | 20.8 | 51.9 | 40.0 | 19.3 | 52.5 | 46.1 |
| Tool-Star (Llama3-3B) | 10.0 | 10.0 | 54.8 | 77.8 | 70.6 | 24.0 | 48.9 | 43.2 | 16.6 | 54.7 | 41.0 |

4 Experiments

Datasets. To comprehensively evaluate the tool-use capabilities of our model, we conduct experiments on two types of reasoning datasets: (1) Math reasoning benchmarks, including AIME2024, AIME2025 (https://huggingface.co/datasets/AI-MO/aimo-validation-aime), MATH500 [37], MATH [18], and GSM8K. (2) Knowledge reasoning benchmarks, including three complex reasoning QA tasks: GAIA [39], HLE [43], and WebWalker [68]; and three Wikipedia-based open-domain QA tasks: HotpotQA [73], 2WikiMultihopQA [19], and MuSiQue [63]. For all tasks, we follow previous work [31] and extract answers from the model output enclosed in \boxed{}.

Evaluation Metrics. For computational reasoning tasks, we follow Search-o1 [31] and employ LLM-based judging (Qwen2.5-72B-Instruct) to ensure answer correctness. For open-domain QA tasks, we adopt the token-level F1 score as the evaluation metric. To assess tool usage efficiency, we propose the | https://arxiv.org/abs/2505.16410v1 |
Tool-use Efficiency metric:

TE = \frac{1}{N} \sum_{i=1}^{N} \frac{S_i}{T^c_i},

where N is the number of datasets, and S_i and T^c_i denote the number of correct answers and the number of total samples for the i-th dataset when using tools.

Baselines. Our experiments include the following three categories of baselines: (1) Proprietary Models: Qwen2.5 [49], Llama3.2 [14]. (2) Single-tool Enhanced Baselines: code-enhanced models such as ToRL [35] and DotaMath [27], and search-enhanced models such as vanilla RAG [26], Search-o1 [31], and Search-R1 [23]. (3) Multi-tool Enhanced Baselines: multi-tool integrated prompting and ReCall [1]. More details of the implementations are provided in Appendix C.

4.1 Results on Challenging Reasoning Tasks

Our main results are presented in Table 1. Overall, Tool-Star consistently outperforms all baselines, decisively establishing its superiority. Furthermore, we have identified the following insights:

(1) TIR prompting fails to explore better tool-use behavior. Focusing on Search-o1 and Multi-Tool Prompting, their performance on comprehensive reasoning tasks remains suboptimal. Notably, multi-tool prompting strategies even exhibit lower consistency compared to their backbone model. This reveals that relying solely on prompt engineering to elicit tool usage is insufficient for guiding the LLM toward effective tool utilization, thereby limiting its reasoning capabilities.

[Figure 4 panels: Prompt-base (w/o inference-time tool), Prompt-base, Tool-Star (w/o inference-time tool), Tool-Star.]
Figure 4: Left: Tool-use efficiency comparison across TIR methods. Middle: Performance analysis of TIR methods with inference-time tools. Right: Error statistics of tool invocation.

(2) Single-tool RL-based methods exhibit strong domain specialization but limited generalizability. Search-enhanced RL approaches perform well on knowledge-intensive tasks (e.g., 40+% F1 on HQA), but their accuracy drops sharply by over 20% on computation-heavy tasks like MATH500 and AIME24 compared to the backbone model. Conversely, code-assistant methods like ToRL underperform on knowledge-based tasks, even falling behind RAG baselines. These trends underscore the specialization bias of single-tool RL methods and their limited cross-domain transferability.

(3) Tool-Star demonstrates strong and versatile reasoning, along with robust plug-and-play capability. Tool-Star consistently outperforms both single- and multi-tool baselines, achieving over 40% average accuracy across 10 datasets while maintaining competitive performance on individual tasks. Notably, it also delivers substantial gains across different backbone models, with average improvements nearing 20% on both Qwen and LLaMA. These results highlight Tool-Star's generality, efficiency, and strong adaptability across models and tasks. To further validate Tool-Star's generalizability, we present results on deep web exploration and a case study in Appendix A.1 and E.

4.2 Quantitative Analysis

| Method | HQA | Bamb. | GSM8K | MATH |
|---|---|---|---|---|
| Tool-Star (3B) | 51.9 | 52.5 | 85.0 | 82.6 |
| w/o Cold-Start | 43.5 (-8.4) | 40.8 (-11.7) | 77.8 (-7.2) | 74.2 (-8.4) |
| w/o RL stage | 47.5 (-4.4) | 43.9 (-8.6) | 80.2 (-4.8) | 78.4 (-4.2) |
| w/o Reward | 50.4 (-1.5) | 50.3 (-2.2) | 83.1 (-1.9) | 80.2 (-2.4) |
| w/o Self-Critic | 49.8 (-2.1) | 48.3 (-4.2) | 82.8 (-2.2) | 77.8 (-4.8) |

Table 2: Ablation study on Tool-Star (3B). HQA and Bamb. are knowledge-intensive tasks; GSM8K and MATH are computational tasks.

Ablation Study. In this section, we conduct a systematic ablation study in Table 2, where "w/o" denotes removing a specific component.
The results demonstrate that: (1) Removing any single module leads to performance degradation, highlighting the critical role of each component of the design. (2) Excluding either stage results in significant performance drops: the Cold-Start phase helps the model initially acquire tool-use capabilities, while the RL phase promotes generalization in multi-tool reasoning. Both stages are indispensable and complementary. (3) Incorporating hierarchical rewards and a self-critic mechanism on top of vanilla RL | https://arxiv.org/abs/2505.16410v1 |